Stream ingestion
Apache Pinot allows users to consume data from streams and push it directly into the Pinot database. This process is known as stream ingestion. Stream ingestion allows users to query data within seconds of it being published.
Stream ingestion provides out-of-the-box support for checkpointing to prevent data loss.
Stream ingestion requires the following steps -
Create schema configuration
Create table configuration
Upload table and schema spec
Let's take a look at each of these steps in a bit more detail. Let us assume the data to be ingested is in the following format -
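The sample payload itself is not reproduced in this copy of the page; as a stand-in, assume each record published to the stream is a JSON message shaped like the following (all field names and values are illustrative, not taken from the original document):

```json
{
  "studentID": 205,
  "firstName": "Natalie",
  "lastName": "Jones",
  "gender": "Female",
  "subject": "Maths",
  "score": 3.8,
  "timestampInEpoch": 1571900400000
}
```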
Create Schema Configuration
The schema defines the fields available in the datasource along with their data types. It also defines which fields serve as dimensions, metrics, and the timestamp respectively.
Follow creating a schema for more details on schema configuration. For our sample data, the schema configuration should look as follows
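Here is a minimal sketch, assuming the hypothetical record shown earlier (the schemaName and field specs below are illustrative):

```json
{
  "schemaName": "transcript",
  "dimensionFieldSpecs": [
    { "name": "studentID", "dataType": "INT" },
    { "name": "firstName", "dataType": "STRING" },
    { "name": "lastName", "dataType": "STRING" },
    { "name": "gender", "dataType": "STRING" },
    { "name": "subject", "dataType": "STRING" }
  ],
  "metricFieldSpecs": [
    { "name": "score", "dataType": "FLOAT" }
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "timestampInEpoch",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}
```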
Create Table Configuration
The next step is to create a table into which all the ingested data will flow and from which it can be queried. Unlike batch ingestion, the table configuration for realtime ingestion also triggers the data ingestion job. For a more detailed overview of tables, check out the table reference.
The realtime table configuration consists of the following fields -
tableName - The name of the table where the data should flow
tableType - The internal type for the table. Should always be set to REALTIME for realtime ingestion.
segmentsConfig - defines segment-related configuration such as the time column, replication, and retention.
tableIndexConfig - defines which columns to index along with the type of index. Refer to Indexing Configs for the full configuration. It consists of the following required fields -
loadMode - specifies how the segments should be loaded. Should be one of heap or mmap. With mmap, segments are loaded onto memory-mapped files, whereas with heap they are loaded into direct memory.
streamConfig - specifies the datasource along with the configs needed to start consuming the realtime data. The streamConfig can be thought of as the equivalent of the job spec used in batch ingestion. The following options are supported in this config -
| Config key | Description | Supported values |
| --- | --- | --- |
| streamType | The streaming platform from which to consume the data | kafka |
| stream.[streamType].consumer.type | Whether to use a per-partition low-level consumer or a high-level stream consumer | lowlevel or highlevel |
| stream.[streamType].topic.name | The datasource (e.g. topic, data stream) from which to consume the data | String |
| stream.[streamType].decoder.class.name | Name of the class to be used for parsing the data. The class should implement the org.apache.pinot.spi.stream.StreamMessageDecoder interface | String |
| stream.[streamType].consumer.factory.class.name | Name of the factory class that provides the appropriate implementation of the low-level and high-level consumers, as well as the metadata | String |
| stream.[streamType].consumer.prop.auto.offset.reset | Determines the offset from which to start the ingestion | smallest, largest, or a timestamp in milliseconds |
| realtime.segment.flush.threshold.time | Time threshold after which the realtime segment currently being consumed is completed | |
| realtime.segment.flush.threshold.size | Row count flush threshold for realtime segments. This behaves in a similar way for HLC and LLC. For HLC, since there is only one consumer per server, this size is used as the size of the consumption buffer and determines after how many rows we flush to disk. For example, if this threshold is set to two million rows, a high-level consumer would have a buffer size of two million. If this value is set to 0, the consumers adjust the number of rows consumed per partition such that the size of the completed segment matches the desired size (unless threshold.time is reached first) | |
| realtime.segment.flush.desired.size | The desired size of a completed realtime segment. This config is used only if realtime.segment.flush.threshold.size is set to 0 | |
You can also specify additional configs for the consumer by prefixing the key with stream.[streamType], where streamType is the name of the streaming platform. For our sample data and schema, the table config will look like -
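The following is a sketch of such a realtime table config, assuming a Kafka stream and the hypothetical transcript schema above; the topic name, broker list, and flush thresholds are placeholders, the streamConfig options go under the streamConfigs key, and the decoder and consumer-factory class names are the ones shipped with Pinot's Kafka plugin, so verify them against your Pinot version:

```json
{
  "tableName": "transcript",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "timestampInEpoch",
    "timeType": "MILLISECONDS",
    "schemaName": "transcript",
    "replicasPerPartition": "1"
  },
  "tenants": {},
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "transcript-topic",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "realtime.segment.flush.threshold.size": "50000",
      "realtime.segment.flush.threshold.time": "3600000"
    }
  },
  "metadata": {
    "customConfigs": {}
  }
}
```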
Upload schema and table config
Now that we have our table and schema configurations, let's upload them to the Pinot cluster. As soon as the configs are uploaded, Pinot will start ingesting available records from the topic.
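For example, with a cluster running locally, one way to upload both specs is the pinot-admin AddTable command; the file paths below are placeholders for wherever you saved the schema and table configs:

```bash
bin/pinot-admin.sh AddTable \
    -schemaFile /path/to/transcript-schema.json \
    -tableConfigFile /path/to/transcript-table-realtime.json \
    -exec
```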
Custom Ingestion Support
We are working on adding more integrations, such as Kinesis, out of the box. If your streaming platform is not supported out of the box, you can easily write your own ingestion plugin. Follow Stream Ingestion Plugin for a walkthrough.