The ingestion configuration (`ingestionConfig`) is a section of the table configuration that specifies how to ingest streaming data into Pinot.


| Config key | Description |
| --- | --- |
| `streamConfigMaps` | See the streamConfigMaps section below for details. |
| `continueOnError` | Set to `true` to skip any row indexing error and move on to the next row. Otherwise, an error evaluating a transform or filter function may block ingestion (real-time or offline) and result in data loss or corruption. Consider your use case to determine whether it's preferable to set this option to `false` and fail the ingestion when an error occurs, to maintain data integrity. |
| `rowTimeValueCheck` | Set to `true` to validate the time column values ingested during segment upload. Validates that each row of data in the segment matches the specified time format and falls within a valid time range (1971–2071). If a value doesn't meet both criteria, Pinot replaces it with `null`. |
| `segmentTimeValueCheck` | Set to `true` to validate that the time range of the segment falls between 1971 and 2071, which helps keep the segments stored in the system correct and consistent. |
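A minimal sketch of how these keys nest inside the table config may help orient you; it mirrors the structure of the full example at the bottom of this page, with illustrative values only:

```json
{
  "ingestionConfig": {
    "streamIngestionConfig": {
      "streamConfigMaps": [
        {
          "streamType": "kafka"
        }
      ]
    },
    "transformConfigs": [],
    "continueOnError": true,
    "rowTimeValueCheck": true,
    "segmentTimeValueCheck": false
  }
}
```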


streamConfigMaps

| Config key | Description | Supported values |
| --- | --- | --- |
| `streamType` | The streaming platform to ingest data from. | String, such as `kafka`. |
| `stream.kafka.consumer.type` | Whether to use a per-partition low-level consumer or a high-level stream consumer. | `lowLevel`: consume data from each partition with offset management. `highLevel`: consume data without control over the partitions. |
| `stream.kafka.topic.name` | The topic or data source to ingest data from. | String. |
| `stream.kafka.broker.list` | The list of brokers to connect to. | String, such as `localhost:9876`. |
| `stream.kafka.decoder.class.name` | Name of the class used to parse the data. The class should implement the `org.apache.pinot.spi.stream.StreamMessageDecoder` interface. | String. Available options: `org.apache.pinot.plugin.inputformat.json.JSONMessageDecoder`, `org.apache.pinot.plugin.inputformat.avro.KafkaAvroMessageDecoder`, `org.apache.pinot.plugin.inputformat.avro.SimpleAvroMessageDecoder`, `org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder`, `org.apache.pinot.plugin.inputformat.csv.CSVMessageDecoder`, `org.apache.pinot.plugin.inputformat.protobuf.ProtoBufMessageDecoder`, `org.apache.pinot.plugin.inputformat.protobuf.KafkaConfluentSchemaRegistryProtoBufMessageDecoder`. |
| `stream.kafka.consumer.factory.class.name` | Name of the factory class that provides the appropriate implementation of the low-level and high-level consumer, as well as the metadata. | String. Available options: `org.apache.pinot.plugin.stream.kafka09.KafkaConsumerFactory`, `org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory`, `org.apache.pinot.plugin.stream.kinesis.KinesisConsumerFactory`, `org.apache.pinot.plugin.stream.pulsar.PulsarConsumerFactory`. |
| `stream.kafka.consumer.prop.auto.offset.reset` | Determines the offset from which to start the ingestion. | `smallest`, `largest`, or a timestamp in milliseconds. |
| `stream.kafka.decoder.prop.format` | Specifies the data format to ingest via the stream. The value of this property should match the format of the data in the stream. | String, such as `JSON`. |
| `realtime.segment.flush.threshold.time` | Maximum elapsed time after which a consuming segment is persisted. Note that this time should be smaller than the Kafka retention period configured for the corresponding topic. | String, such as `1d` or `4h30m`. Default is `6h` (six hours). |
| `realtime.segment.flush.threshold.rows` | The maximum number of rows to consume before persisting the consuming segment. If this value is set to 0, the configuration falls back to `realtime.segment.flush.threshold.segment.size` below. See the note below this table for more information. | Default is 5,000,000. |
| `realtime.segment.flush.threshold.segment.rows` | The maximum number of rows per segment to consume before persisting the consuming segment. Added in release-1.2.0. See the note below this table for more information. | |
| `realtime.segment.flush.threshold.segment.size` | The size completed segments should be. This value is used when `realtime.segment.flush.threshold.rows` is set to 0. | String, such as `150M` or `1.1G`. Default is `200M` (200 megabytes). |

You can also specify additional configurations for the consumer directly in `streamConfigMaps`. For example, for Kafka streams, add any of the configs described in the Kafka configuration page to pass them directly to the Kafka consumer.
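Such extra keys are passed through to the Kafka consumer as-is. As a sketch, a `streamConfigMaps` entry for a SASL-secured Kafka cluster might add standard Kafka client settings alongside the Pinot ones; the broker address, security settings, and credentials below are illustrative placeholders, not values documented on this page:

```json
{
  "streamType": "kafka",
  "stream.kafka.topic.name": "transcript-topic",
  "stream.kafka.broker.list": "broker-1:9092",
  "security.protocol": "SASL_SSL",
  "sasl.mechanism": "PLAIN",
  "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username='user' password='secret';"
}
```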

The number of rows per segment is computed using the following formula:

`realtime.segment.flush.threshold.rows / maxPartitionsConsumedByServer`

For example, if you set `realtime.segment.flush.threshold.rows = 1000` and each server consumes 10 partitions, the rows per segment is `1000 / 10 = 100`.

Since release-1.2.0, `realtime.segment.flush.threshold.segment.rows` is also available and is used directly as the number of rows per segment. Taking the example above, if you set `realtime.segment.flush.threshold.segment.rows = 1000` and each server consumes 10 partitions, the rows per segment is 1000.
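In `streamConfigMaps`, the difference between the two settings looks like this (fragments only; both assume each server consumes 10 partitions, as in the example above):

```json
{ "realtime.segment.flush.threshold.rows": "1000" }
```

yields segments of roughly 1000 / 10 = 100 rows each, while

```json
{ "realtime.segment.flush.threshold.segment.rows": "1000" }
```

yields segments of 1000 rows each, regardless of how many partitions a server consumes.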

Example table config with ingestionConfig

  "tableName": "transcript",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "timestamp",
    "timeType": "MILLISECONDS",
    "schemaName": "transcript",
    "replicasPerPartition": "1"
  "tenants": {},
  "tableIndexConfig": {
    "loadMode": "MMAP",
  "metadata": {
    "customConfigs": {}
  "ingestionConfig": {
    "streamIngestionConfig": {
        "streamConfigMaps": [
            "stream.kafka.decoder.prop.format": "JSON",
            "key.serializer": "org.apache.kafka.common.serialization.ByteArraySerializer",
            "": "",
            "streamType": "kafka",
            "value.serializer": "org.apache.kafka.common.serialization.ByteArraySerializer",
            "stream.kafka.consumer.type": "LOWLEVEL",
            "": "localhost:9876",
            "realtime.segment.flush.threshold.segment.rows": "500000",
            "realtime.segment.flush.threshold.time": "3600000",
            "": "",
            "": "smallest",
            "": "transcript-topic"
      "transformConfigs": [],
      "continueOnError": true,
      "rowTimeValueCheck": true,
      "segmentTimeValueCheck": false
    "isDimTable": false
