Data flow using Flume

The entire Flume agent runs in a single JVM process that hosts all of its components: the source, the channel, and the sink. The Flume source receives events from an external producer, such as a web server or an external file. The source pushes events to the channel, which buffers them until they are picked up by the sink. The channel stores the payload (the message stream) either in memory or on the local filesystem, depending on the channel type: a memory channel holds events in RAM for higher throughput, while a file channel persists them to disk for durability. The sink takes the payload from the channel and pushes it to an external data store. The source and sink within an agent run asynchronously. A sink may also push the payload to the source of yet another Flume agent, chaining agents into a multi-hop flow.
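The source-channel-sink wiring described above is declared in a single properties file passed to the agent at startup. The sketch below is a minimal, illustrative configuration; the agent and component names (`agent1`, `src1`, `ch1`, `sink1`) are arbitrary choices, and it uses the standard `netcat` source, `memory` channel, and `logger` sink from the Flume distribution:

```properties
# Name the components of this agent
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: listen for newline-terminated events on a TCP port
agent1.sources.src1.type = netcat
agent1.sources.src1.bind = localhost
agent1.sources.src1.port = 44444
agent1.sources.src1.channels = ch1

# Channel: buffer events in memory (a file channel would persist to disk)
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

# Sink: log each event, useful for testing the pipeline
agent1.sinks.sink1.type = logger
agent1.sinks.sink1.channel = ch1
```

An agent would typically be started with something like `flume-ng agent --conf-file agent1.conf --name agent1`; note that a source can feed multiple channels, but each sink drains exactly one channel.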
