The following properties can be nested inside the top-level configs.
Quota
Routing
Query
Segments Config
Table Index Config
Field Config List
Specify the columns and the types of indices to be created on those columns. Currently, only text search columns can be specified using this property. The rest of the indices will be migrated to this field in future releases.
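As an illustrative sketch (the column name is a placeholder), a text index on a single column could be declared like this:

```json
"fieldConfigList": [
  {
    "name": "queryString",
    "encodingType": "RAW",
    "indexType": "TEXT"
  }
]
```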
Warning:
If you remove the forwardIndexDisabled property above to regenerate the forward index for multi-value (MV) columns, note that the following invariants cannot be maintained after regenerating the forward index for a forward-index-disabled column:
* Ordering guarantees of the MV values within a row
* If entries within an MV row are duplicated, the duplicates will be lost. To recover the original MV data with duplicates, regenerate the segments via your offline jobs and re-push / refresh the data.
We will work on removing the second invariant in the future.
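For context, the forwardIndexDisabled flag mentioned above lives under the properties of a fieldConfigList entry; a sketch with a placeholder column name:

```json
"fieldConfigList": [
  {
    "name": "myMvColumn",
    "encodingType": "DICTIONARY",
    "indexTypes": ["INVERTED"],
    "properties": {
      "forwardIndexDisabled": "true"
    }
  }
]
```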
Realtime Table Config
We will now discuss the sections that are only applicable to realtime tables.
segmentsConfig
Indexing config
Below is the list of fields in the streamConfigs section.
IndexingConfig -> streamConfig has been deprecated starting from release 0.7.0 (commit 9eaea9). Use IngestionConfig -> StreamIngestionConfig -> streamConfigMaps instead.
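A sketch of the new layout (the topic name is a placeholder; a fuller set of stream properties is shown in the streamConfigs example further below):

```json
"ingestionConfig": {
  "streamIngestionConfig": {
    "streamConfigMaps": [
      {
        "streamType": "kafka",
        "stream.kafka.topic.name": "myTopic"
      }
    ]
  }
}
```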
When specifying realtime.segment.flush.threshold.rows, the actual number of rows per segment is computed using the following formula:
```
realtime.segment.flush.threshold.rows / partitionsConsumedByServer
```
This means that if we set realtime.segment.flush.threshold.rows=1000 and each server consumes 10 partitions, the rows per segment will be: 1000 / 10 = 100.
The desired segment size refers to the size of the segments loaded on the Pinot servers. Normally, a compressed version of each segment (in tar.gz format) is kept in the deep store, and it is smaller than the specified size.
Any additional properties set here will be directly available to the stream consumers. For example, in the case of a Kafka stream, you could put any of the configs described in the Kafka configuration page, and they will be automatically passed to the KafkaConsumer.
Some of the properties you might want to set:
Example
Here is a minimal example of what the streamConfigs section may look like:
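The sketch below assumes a Kafka low-level consumer; the topic, broker list, and threshold values are illustrative and should be adjusted for your setup:

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.topic.name": "myTopic",
  "stream.kafka.broker.list": "localhost:9092",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
  "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
  "realtime.segment.flush.threshold.rows": "0",
  "realtime.segment.flush.threshold.time": "24h",
  "realtime.segment.flush.threshold.segment.size": "100M"
}
```

Setting realtime.segment.flush.threshold.rows to 0 lets the segment size threshold drive flushing instead of the row count.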
Pinot allows users to define environment variables in the format ${ENV_NAME} or ${ENV_NAME:DEFAULT_VALUE} as field values in the table config.
The Pinot instance will substitute the value at runtime.
Brackets are required when defining the environment variable; "$ENV_NAME" is not supported.
Environment variables used without a default value in the table config have to be available to all Pinot components - Controller, Broker, Server, and Minion. Otherwise, querying/consumption will be affected, depending on which component is missing these variables.
Below is an example of setting AWS credentials as part of the table config using environment variables.
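A sketch, assuming a Kinesis stream whose streamConfigs accept accessKey and secretKey properties (the stream name, region, and property names are illustrative assumptions):

```json
"streamConfigs": {
  "streamType": "kinesis",
  "stream.kinesis.topic.name": "myStream",
  "region": "us-west-2",
  "accessKey": "${AWS_ACCESS_KEY}",
  "secretKey": "${AWS_SECRET_KEY}"
}
```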
Here's an example table config for a realtime table. All the fields from the offline table config are valid for the realtime table. Additionally, realtime tables use some extra fields.
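A condensed sketch of such a config; the table name, schema, time column, and stream settings are placeholders that would match your own deployment:

```json
{
  "tableName": "myTable",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "schemaName": "myTable",
    "timeColumnName": "timestampMs",
    "replication": "1",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "7"
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP"
  },
  "ingestionConfig": {
    "streamIngestionConfig": {
      "streamConfigMaps": [
        {
          "streamType": "kafka",
          "stream.kafka.topic.name": "myTopic",
          "stream.kafka.broker.list": "localhost:9092",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
        }
      ]
    }
  },
  "metadata": {}
}
```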