Table
Top-level fields
Property | Description |
---|---|
tableName | Specifies the name of the table. Should only contain alpha-numeric characters, hyphens ('-'), or underscores ('_'). (Using a double-underscore ('__') is not allowed, as it is reserved for other features within Pinot) |
tableType | Defines the table type - OFFLINE for an offline table, REALTIME for a realtime table. A hybrid table consists of one table config of each type, sharing the same table name |
isDimTable | Boolean to indicate whether the table is a dimension table |
quota | This section defines properties related to quotas, such as storage quota and query quota. See the Quota section below |
task | This section defines the Minion tasks enabled for the table, along with their configurations |
routing | This section defines how the broker selects the servers and segments to route a query to. See the Routing section below |
query | This section defines query-related properties, such as the query timeout. See the Query section below |
segmentsConfig | This section defines segment-related properties, such as the time column, replication, and retention. See the Segments Config section below |
tableIndexConfig | This section defines the indexing-related configs, such as inverted indexes, sorted column, and bloom filters. See the Table Index Config section below |
fieldConfigList | This section specifies the columns and the types of indexes to be created on those columns. See the Field Config List section below |
tenants | This section defines the broker and server tenants in which the table resides. See the Tenants section below |
ingestionConfig | This section defines the configs needed for data ingestion, such as stream configs, filtering, and transformations |
upsertConfig | This section defines the configs related to the upsert feature for realtime tables |
dedupConfig | This section defines the configs related to deduplication for realtime tables |
tierConfigs | This section defines the configs for moving segments across storage tiers as they age |
metadata | This section is for keeping custom configs, which are expressed as key-value pairs. |
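For orientation, here is a skeleton table config showing where these top-level fields sit. The table name and all values are illustrative; most sections are optional and are detailed in the second level fields below:

```json
{
  "tableName": "transcript",
  "tableType": "OFFLINE",
  "isDimTable": false,
  "quota": {},
  "routing": {},
  "query": {},
  "segmentsConfig": {},
  "tableIndexConfig": {},
  "fieldConfigList": [],
  "tenants": {},
  "ingestionConfig": {},
  "metadata": {}
}
```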
Second level fields
The following properties can be nested inside the top-level configs.
Quota
Property | Description |
---|---|
storage | The maximum storage space the table is allowed to use, before replication. For example, in the sample table config, the storage is 140G and replication is 3. Therefore, the maximum storage the table is allowed to use is 140*3=420G. The space used by the table is calculated by adding up the sizes of all segments from every server hosting this table. Once this limit is reached, offline segment pushes are rejected with a quota-exceeded error |
maxQueriesPerSecond | The maximum queries per second allowed to execute on this table. If query volume exceeds this limit, queries are rejected with a quota-exceeded error |
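As a sketch, a quota section capping storage at 140G before replication and query volume at 300 QPS (both values illustrative) might look like:

```json
"quota": {
  "storage": "140G",
  "maxQueriesPerSecond": "300"
}
```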
Routing
Property | Description |
---|---|
segmentPrunerTypes | The list of segment pruners to be enabled. |
instanceSelectorType | The instance selector to be used for routing, e.g. balanced, replicaGroup, strictReplicaGroup |
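For illustration, a routing section enabling partition-based segment pruning together with the strictReplicaGroup instance selector (a sketch, assuming the pruner and selector names documented above) could look like:

```json
"routing": {
  "segmentPrunerTypes": ["partition"],
  "instanceSelectorType": "strictReplicaGroup"
}
```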
Query
Property | Description |
---|---|
timeoutMs | Query timeout in milliseconds |
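A minimal query section overriding the table-level query timeout to 60 seconds:

```json
"query": {
  "timeoutMs": 60000
}
```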
Segments Config
Property | Description |
---|---|
schemaName | Name of the schema associated with the table |
timeColumnName | The name of the time column for this table. This must match the time column name in the schema. This is mandatory for tables with push type APPEND |
replication | Number of replicas for the tables. A replication value of 1 means segments won't be replicated across servers. |
retentionTimeUnit | Unit for the retention, e.g. HOURS, DAYS. This in combination with retentionTimeValue decides the duration for which to retain the segments, e.g. a retentionTimeUnit of DAYS with a retentionTimeValue of 5 retains segments for 5 days |
retentionTimeValue | A numeric value for the retention. This in combination with retentionTimeUnit decides the duration for which to retain the segments |
segmentPushType (Deprecated starting 0.7.0 or commit 9eaea9. Use IngestionConfig -> BatchIngestionConfig -> segmentPushType) | This can be either APPEND - new segments are pushed periodically and appended to the existing data, or REFRESH - the entire dataset is replaced on every push (refresh tables do not use retention) |
segmentPushFrequency (Deprecated starting 0.7.0 or commit 9eaea9. Use IngestionConfig -> BatchIngestionConfig -> segmentPushFrequency) | The cadence at which segments are pushed, e.g. HOURLY, DAILY |
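Putting these fields together, an illustrative segmentsConfig for an offline table retaining 5 days of data with 3 replicas (schema and column names are hypothetical):

```json
"segmentsConfig": {
  "schemaName": "transcript",
  "timeColumnName": "timestampInEpoch",
  "replication": "3",
  "retentionTimeUnit": "DAYS",
  "retentionTimeValue": "5"
}
```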
Table Index Config
Property | Description |
---|---|
invertedIndexColumns | The list of columns on which the inverted index should be created. The column names must match the schema. e.g. in the sample table config, the inverted index is created on 3 columns |
createInvertedIndexDuringSegmentGeneration | Boolean to indicate whether to create inverted indexes during the segment creation. By default, false i.e. inverted indexes are created when the segments are loaded on the server |
sortedColumn | The column which is sorted in the data and hence will have a sorted index. This does not need to be specified for the offline table, as the segment generation job will automatically detect the sorted column in the data and create a sorted index for it. |
bloomFilterColumns | The list of columns on which to apply a bloom filter. Bloom filters help prune segments that do not contain any record matching an equality predicate on these columns |
bloomFilterConfigs | The map from column name to bloom filter config, for tuning the filter (e.g. false positive probability) beyond the defaults |
rangeIndexColumns | The list of columns on which the range index should be created. Typically used for numeric columns, mostly metrics, that are filtered with range predicates |
rangeIndexVersion | Version of the range index, 2 (latest) by default. |
starTreeIndexConfigs | The list of configs for the star-tree indexes to be created on the table |
enableDefaultStarTree | Boolean to indicate whether to create a default star-tree index with default configs |
enableDynamicStarTreeCreation | Boolean to indicate whether to allow creating star-tree when server loads the segment. Star-tree creation could potentially consume a lot of system resources, so this config should be enabled when the servers have the free system resources to create the star-tree. |
noDictionaryColumns | The list of columns that should be stored raw, without dictionary encoding |
onHeapDictionaryColumns | The list of columns for which the dictionary should be created on heap |
varLengthDictionaryColumns | The list of columns for which the variable length dictionary needs to be enabled in offline segments. This is only valid for string and bytes columns and has no impact for columns of other data types. |
jsonIndexColumns | The list of columns on which the JSON index should be created. The columns must be string columns containing JSON documents |
jsonIndexConfigs | The map from column name to JSON index config, for tuning the index beyond the defaults |
segmentPartitionConfig | The map from column to partition function, which indicates how the segment is partitioned. Currently 4 types of partition functions are supported: Murmur, Modulo, HashCode, and ByteArray |
loadMode | Indicates how the segments will be loaded onto the server: HEAP - load the data directly into memory; MMAP - memory-map the segment files |
columnMinMaxValueGeneratorMode | Generate min/max values for columns. Supported values are NONE (do not generate for any column), ALL (generate for all columns), NON_METRIC (generate for all columns except metrics), TIME (generate only for the time column) |
nullHandlingEnabled | Boolean to indicate whether to keep track of null values as part of the segment generation. This is required when using IS NULL or IS NOT NULL predicates in queries. Default is false |
aggregateMetrics | (only applicable to stream ingestion) Boolean to indicate whether to aggregate metric values for rows that share the same dimension and time column values during ingestion, reducing the number of stored rows |
optimizeDictionaryForMetrics | Set to true to automatically disable the dictionary for single-valued metric columns when storing raw values would save enough space. Default is false |
noDictionarySizeRatioThreshold | If optimizeDictionaryForMetrics is enabled, the dictionary for a single-valued metric column is disabled only when the estimated storage saving ratio exceeds this threshold |
segmentNameGeneratorType | Type of segmentNameGenerator, default is SimpleSegmentNameGenerator |
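As an illustrative sketch (all column names are hypothetical), here is a tableIndexConfig combining several of the fields above, including a segmentPartitionConfig that partitions on one column using the Murmur partition function:

```json
"tableIndexConfig": {
  "invertedIndexColumns": ["studentID", "courseName"],
  "sortedColumn": ["timestampInEpoch"],
  "rangeIndexColumns": ["score"],
  "noDictionaryColumns": ["description"],
  "nullHandlingEnabled": false,
  "loadMode": "MMAP",
  "segmentPartitionConfig": {
    "columnPartitionMap": {
      "memberId": {
        "functionName": "Murmur",
        "numPartitions": 4
      }
    }
  }
}
```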
Field Config List
Specify the columns and the type of indices to be created on those columns. Currently, only Text search columns can be specified using this property. We will be migrating the rest of the indices to this field in future releases.
Property | Description |
---|---|
name | Name of the column |
encodingType | Should be one of RAW or DICTIONARY |
indexType | Index to create on this column. Currently only TEXT is supported via this property |
properties | JSON of key-value pairs containing additional properties associated with the index, e.g. forwardIndexDisabled to disable the forward index for the column |
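For illustration, a fieldConfigList entry creating a text index on a hypothetical string column named textCol:

```json
"fieldConfigList": [
  {
    "name": "textCol",
    "encodingType": "RAW",
    "indexType": "TEXT",
    "properties": {}
  }
]
```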
Warning:
If the forwardIndexDisabled property is removed in order to regenerate the forward index for a multi-value (MV) column, note that the following invariants cannot be maintained after regenerating the forward index for a forward index disabled column:
- Ordering guarantees of the MV values within a row
- If entries within an MV row are duplicated, the duplicates will be lost. Please regenerate the segments via your offline jobs and re-push / refresh the data to get back the original MV data with duplicates.
We will work on removing the second invariant in the future.
Realtime Table Config
We will now discuss the sections that are only applicable to realtime tables.
segmentsConfig
Property | Description |
---|---|
replicasPerPartition | The number of replicas per partition for the stream |
completionMode | Determines if a segment should be downloaded from another server or built in memory. Can be DOWNLOAD, or left unset to build the segment in memory |
peerSegmentDownloadScheme | Protocol to use to download segments from peer servers. Can be one of http or https |
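Sketching the realtime-only fields above (schema and values are illustrative), the segmentsConfig of a realtime table might include:

```json
"segmentsConfig": {
  "schemaName": "transcript",
  "timeColumnName": "timestampInEpoch",
  "replicasPerPartition": "3",
  "completionMode": "DOWNLOAD",
  "peerSegmentDownloadScheme": "http"
}
```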
Indexing config
Below is the list of fields in the streamConfigs section.
IndexingConfig -> streamConfig has been deprecated starting 0.7.0 or commit 9eaea9. Use IngestionConfig -> StreamIngestionConfig -> streamConfigMaps instead.
Property | Description |
---|---|
streamType | The streaming platform from which to consume the data, e.g. kafka |
stream.[streamType].consumer.type | The consumer type. Can be lowlevel (partition-level consumption, recommended) or highlevel |
stream.[streamType].topic.name | Topic or equivalent datasource from which to consume data |
stream.[streamType].consumer.prop.auto.offset.reset | Offset to start consuming data from. Should be one of smallest, largest, or a timestamp in milliseconds |
realtime.segment.flush.threshold.rows (0.6.0 onwards) realtime.segment.flush.threshold.size (0.5.0 and prior, deprecated) | The maximum number of rows to consume before persisting the consuming segment. Default is 5,000,000 |
realtime.segment.flush.threshold.time | Maximum elapsed time after which a consuming segment should be persisted. The value can be set as a human readable string, such as 1d or 4h30m. Default is 6 hours |
realtime.segment.flush.threshold.segment.size (0.6.0 onwards) realtime.segment.flush.desired.size (0.5.0 and prior, deprecated) | Desired size of the completed segments. This value can be set as a human readable string, such as 150M or 1.1G. This config is used only when realtime.segment.flush.threshold.rows is set to 0. Default is 200M |
realtime.segment.flush.autotune.initialRows | Initial number of rows for learning. This value is used only if realtime.segment.flush.threshold.rows is set to 0 and the segment size threshold is used. Default is 100,000 |
When specifying realtime.segment.flush.threshold.rows, the actual number of rows per segment is computed using the following formula:

```
realtime.segment.flush.threshold.rows / partitionsConsumedByServer
```

This means that if we set realtime.segment.flush.threshold.rows=1000 and each server consumes 10 partitions, the rows per segment will be: 1000 / 10 = 100
Any additional properties set here will be passed directly to the stream consumers. For example, in the case of a Kafka stream, you could include any of the configs described on the Kafka configuration page, and they will be automatically passed to the KafkaConsumer.
Some of the properties you might want to set:
Config | Description | Values |
---|---|---|
auto.offset.reset | If the Kafka consumer encounters an offset which is not in range (resulting in a Kafka OffsetOutOfRange error), this strategy is used to reset the offset. The default value is latest; as a result, if the consumer seeks an offset which has already expired, it resets to the latest available offset, resulting in data loss | earliest - reset to the earliest available offset; latest - reset to the latest available offset |
Example
Here is a minimal example of what the streamConfigs
section may look like:
0.6.0 onwards:
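A minimal sketch, assuming a hypothetical Kafka topic named transcript-topic on a broker at localhost:9092:

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.topic.name": "transcript-topic",
  "stream.kafka.broker.list": "localhost:9092",
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
  "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
  "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
  "realtime.segment.flush.threshold.rows": "0",
  "realtime.segment.flush.threshold.time": "24h",
  "realtime.segment.flush.threshold.segment.size": "150M"
}
```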
0.5.0 and prior:
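A comparable sketch for 0.5.0 and prior, using the older flush threshold property names (topic and broker address are placeholders):

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.topic.name": "transcript-topic",
  "stream.kafka.broker.list": "localhost:9092",
  "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
  "realtime.segment.flush.threshold.size": "0",
  "realtime.segment.flush.threshold.time": "24h",
  "realtime.segment.flush.desired.size": "150M"
}
```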
Tenants
Property | Description |
---|---|
broker | Broker tenant in which the segment should reside |
server | Server tenant in which the segment should reside |
tagOverrideConfig | Override the tenant for a segment if it fulfills certain conditions. Currently, overrides are supported only on realtimeConsuming and realtimeCompleted segments |
Example
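An illustrative tenants section, with a tagOverrideConfig that keeps consuming segments on the realtime-tagged servers and moves completed segments to the offline-tagged servers (tenant names are hypothetical):

```json
"tenants": {
  "broker": "brokerTenantName",
  "server": "serverTenantName",
  "tagOverrideConfig": {
    "realtimeConsuming": "serverTenantName_REALTIME",
    "realtimeCompleted": "serverTenantName_OFFLINE"
  }
}
```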
Environment Variables Override
Pinot allows users to define environment variables in the format of ${ENV_NAME} or ${ENV_NAME:DEFAULT_VALUE} as field values in the table config. The Pinot instance will resolve them at runtime. The curly brackets are required; the bare form "$ENV_NAME" is not supported.
Environment variables used without default value in table config have to be available to all Pinot components - Controller, Broker, Server, and Minion. Otherwise, querying/consumption will be affected depending on the service to which these variables are not available.
Below is an example of setting AWS credentials as part of the table config using environment variables.
Example:
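A sketch of referencing AWS credentials through environment variables in the batch ingestion config; the bucket path and the access/secret key property names are illustrative placeholders, not authoritative S3 plugin config keys:

```json
"ingestionConfig": {
  "batchIngestionConfig": {
    "segmentIngestionType": "APPEND",
    "segmentIngestionFrequency": "DAILY",
    "batchConfigMaps": [
      {
        "inputDirURI": "s3://my-bucket/input",
        "fs.s3.accessKey": "${AWS_ACCESS_KEY}",
        "fs.s3.secretKey": "${AWS_SECRET_KEY}"
      }
    ]
  }
}
```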
Sample Configurations
Offline Table
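A compact illustrative offline table config pulling the sections above together (all names and values are hypothetical):

```json
{
  "tableName": "transcript",
  "tableType": "OFFLINE",
  "segmentsConfig": {
    "schemaName": "transcript",
    "timeColumnName": "timestampInEpoch",
    "replication": "1",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "365"
  },
  "tableIndexConfig": {
    "invertedIndexColumns": ["studentID"],
    "loadMode": "MMAP"
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant"
  },
  "metadata": {}
}
```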
Realtime Table
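A compact illustrative realtime table config, adding the stream-specific fields (topic name and broker address are placeholders):

```json
{
  "tableName": "transcript",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "schemaName": "transcript",
    "timeColumnName": "timestampInEpoch",
    "replicasPerPartition": "1",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "5"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "transcript-topic",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant"
  },
  "metadata": {}
}
```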
Here's an example table config for a realtime table. All the fields from the offline table config are valid for the realtime table. Additionally, realtime tables use some extra fields.