Apache Pinot allows users to consume data from streams and push it directly to the Pinot database. This process is known as Stream Ingestion. Stream ingestion allows users to query data within seconds of it being published.
Stream ingestion provides out-of-the-box support for checkpoints to prevent data loss.
Stream ingestion requires the following steps -
Create schema configuration
Create table configuration
Upload table and schema spec
Let's take a look at each of these steps in a bit more detail. Let us assume the data to be ingested is in the following format -
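As a concrete stand-in for the sample data, assume the stream carries flat JSON records like the ones below; the transcript-style field names and values are hypothetical:

```json
{"studentID": 205, "firstName": "Natalie", "lastName": "Jones", "gender": "Female", "subject": "Maths", "score": 3.8, "timestampInEpoch": 1571900400000}
{"studentID": 207, "firstName": "Bob", "lastName": "Lewis", "gender": "Male", "subject": "Physics", "score": 3.1, "timestampInEpoch": 1571900400000}
```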
The schema defines the fields available in the datasource along with their data types. The schema also defines which fields serve as dimensions, metrics, and the timestamp, respectively.
Follow creating a schema for more details on schema configuration. For our sample data, the schema configuration should look as follows -
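A sketch of a matching schema for the hypothetical records above - studentID through subject map to dimensions, score to a metric, and timestampInEpoch to the time column:

```json
{
  "schemaName": "transcript",
  "dimensionFieldSpecs": [
    {"name": "studentID", "dataType": "INT"},
    {"name": "firstName", "dataType": "STRING"},
    {"name": "lastName", "dataType": "STRING"},
    {"name": "gender", "dataType": "STRING"},
    {"name": "subject", "dataType": "STRING"}
  ],
  "metricFieldSpecs": [
    {"name": "score", "dataType": "FLOAT"}
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "timestampInEpoch",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}
```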
The next step is to create a table into which all the ingested data will flow and from which it can be queried. Unlike batch ingestion, the table configuration for realtime ingestion also triggers the data ingestion job. For a more detailed overview of tables, check out the table reference.
The realtime table configuration consists of the following fields -
tableName - The name of the table where the data should flow
tableType - The internal type for the table. Should always be set to REALTIME for realtime ingestion
segmentsConfig - configures segment-level properties such as the time column, replication, and retention
tableIndexConfig - defines which columns to index along with the types of indexes. Refer to Indexing Configs for the full configuration. It consists of the following required fields -
loadMode - specifies how the segments should be loaded. Must be one of heap or mmap. Here's the difference between the two modes -
heap: Segments are loaded in direct memory. Note that 'heap' here is a legacy misnomer and does not imply the JVM heap. Use this mode only when you want faster performance than memory-mapped files and are sure you will never run into OOM.
mmap: Segments are loaded as memory-mapped files. This is the default mode.
streamConfig - specifies the datasource along with the configs needed to start consuming the realtime data. The streamConfig can be thought of as the equivalent of the job spec in batch ingestion. The options supported in this config are listed at the end of this section.
You can also specify additional configs for the consumer by prefixing the key with stream.[streamType], where streamType is the name of the streaming platform. For our sample data and schema, the table config will look like this -
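Below is a minimal sketch of such a table config, assuming Kafka as the streaming platform; the topic name transcript-topic, the broker localhost:9092, and the flush thresholds are hypothetical placeholders, while the decoder and consumer factory entries use Pinot's stock Kafka plugin classes:

```json
{
  "tableName": "transcript",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "timestampInEpoch",
    "timeType": "MILLISECONDS",
    "schemaName": "transcript",
    "replicasPerPartition": "1"
  },
  "tenants": {},
  "tableIndexConfig": {
    "loadMode": "mmap",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "transcript-topic",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "realtime.segment.flush.threshold.time": "3600000",
      "realtime.segment.flush.threshold.size": "50000"
    }
  },
  "metadata": {}
}
```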
Now that we have our table and schema configurations, let's upload them to the Pinot cluster. As soon as the configs are uploaded, Pinot will start ingesting available records from the topic.
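One way to upload both specs is with the pinot-admin tool that ships with Pinot; the file paths below are placeholders:

```bash
bin/pinot-admin.sh AddTable \
    -schemaFile /path/to/transcript-schema.json \
    -tableConfigFile /path/to/transcript-table-realtime.json \
    -exec
```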
We are working on adding more integrations, such as Kinesis, out of the box. You can easily write your own ingestion plugin in case your stream is not supported out of the box. Follow Stream Ingestion Plugin for a walkthrough.
The streamConfig supports the following options -

stream.[streamType].consumer.prop.auto.offset.reset - determines the offset from which to start the ingestion. Supported values are smallest, largest, or a timestamp in milliseconds.

realtime.segment.flush.threshold.time - time threshold for which the realtime segment is kept open before we complete the segment.

realtime.segment.flush.threshold.size - row count flush threshold for realtime segments. This behaves in a similar way for HLC and LLC. For HLC, since there is only one consumer per server, this size is used as the size of the consumption buffer and determines after how many rows we flush to disk. For example, if this threshold is set to two million rows, then a high level consumer would have a buffer size of two million. If this value is set to 0, the consumers adjust the number of rows consumed per partition such that the size of the completed segment matches the desired size (unless threshold.time is reached first).

realtime.segment.flush.desired.size - the desired size of a completed realtime segment. This config is used only if threshold.size is set to 0.
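For example, here is a sketch of a streamConfigs fragment that caps segments by size rather than row count; the values are hypothetical, assuming threshold.time accepts period strings like "6h" and desired.size accepts size strings like "150M":

```json
"streamConfigs": {
  "realtime.segment.flush.threshold.time": "6h",
  "realtime.segment.flush.threshold.size": "0",
  "realtime.segment.flush.desired.size": "150M"
}
```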