Table

The tables below show the properties available to set at the table level.

Top-level fields

Second-level fields

The following properties can be nested inside the top-level configurations.

Quota

Routing

For details on configuring routing, see the routing documentation.

Query

Segments config

Table index config

This section is used to specify some general index configuration and multi-column indexes like Star-tree.

Before Pinot version 0.13, the configuration described above was also used to configure certain single-column indexes. While this approach is still supported, it is highly recommended to specify these indexes in the [Field Config List](#field-config-list) section instead. The documentation page for each index type explains how to use that section to create the index. This updated method offers more flexibility and aligns with best practices for configuring single-column indexes in Pinot.
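As an illustration, a single-column inverted index that would previously have been declared via `invertedIndexColumns` in `tableIndexConfig` can instead be declared per column inside `fieldConfigList`. A minimal sketch (the column name `foo` is illustrative):

```json
{
  "fieldConfigList": [
    {
      "name": "foo",
      "indexes": {
        "inverted": {}
      }
    }
  ]
}
```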

Field Config List

Specify the columns and the types of indexes to be created on those columns.

Deprecated configuration options

Several deprecated configuration options in Pinot are still supported, but migrating to the newer configuration methods is recommended. Here's a summary of these options:

indexTypes

indexType

  • Description: Similar to indexTypes, but only supports a single index type as a string.

  • Note: If both indexTypes and indexType are present, the latter is ignored.
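As a sketch, the deprecated `indexTypes` field accepts a list of index type names inside a `fieldConfigList` entry (the column name and index types here are illustrative):

```json
{
  "fieldConfigList": [
    {
      "name": "foo",
      "encodingType": "DICTIONARY",
      "indexTypes": ["INVERTED", "RANGE"]
    }
  ]
}
```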

compressionCodec

  • Description: An older way to specify compression for indexes.

  • Recommendation: It's now recommended to specify compression in the forward index config.
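A hedged sketch of the recommended replacement, declaring compression inside the forward index config of a `fieldConfigList` entry (the `chunkCompressionType` key and the column name are assumptions based on the forward index documentation):

```json
{
  "fieldConfigList": [
    {
      "name": "metric1",
      "encodingType": "RAW",
      "indexes": {
        "forward": {
          "chunkCompressionType": "SNAPPY"
        }
      }
    }
  ]
}
```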

Deprecated properties

  • Description: Before Pinot 0.13, certain indexes were configured using properties within this section.

  • Migration: Since Pinot 0.13, each index can be configured in a type-safe manner within its dedicated section in the indexes object. The documentation for each index type lists the properties that were previously used.

  • Notable Properties:

    • Text Index Properties: enableQueryCacheForTextIndex (used to enable/disable the cache, with values specified as strings, e.g., "true" or "false").

    • Forward Index Properties: rawIndexWriterVersion, deriveNumDocsPerChunkForRawIndex, forwardIndexDisabled.
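For instance, the deprecated `properties`-based form and its type-safe replacement might look like the following sketch (the exact key names should be checked against the forward index documentation; treat them as assumptions):

```json
{
  "name": "metric1",
  "encodingType": "RAW",
  "properties": {
    "rawIndexWriterVersion": "4",
    "deriveNumDocsPerChunkForRawIndex": "true"
  }
}
```

becomes

```json
{
  "name": "metric1",
  "encodingType": "RAW",
  "indexes": {
    "forward": {
      "rawIndexWriterVersion": 4,
      "deriveNumDocsPerChunk": true
    }
  }
}
```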

It's strongly recommended to migrate from these deprecated options to the new, more structured configuration methods introduced in Pinot 0.13 for better maintainability and compatibility.

Warning:

If you remove the forwardIndexDisabled property above to regenerate the forward index for multi-value (MV) columns, note that the following invariants cannot be maintained after regenerating the forward index for a forward-index-disabled column:

  • Ordering guarantees of the MV values within a row

  • If entries within an MV row are duplicated, the duplicates will be lost. Regenerate the segments via your offline jobs and re-push / refresh the data to restore the original MV data with duplicates.

We plan to remove the second limitation in a future release.

Real-time table config

The sections below apply to real-time tables only.

segmentsConfig

Indexing config

The streamConfigs section has been deprecated as of release 0.7.0. See streamConfigMaps instead.

Tenants

Example

{
  "broker": "brokerTenantName",
  "server": "serverTenantName",
  "tagOverrideConfig" : {
    "realtimeConsuming" : "serverTenantName_REALTIME",
    "realtimeCompleted" : "serverTenantName_OFFLINE"
  }
}

Environment variables override

Pinot allows users to define environment variables in the format ${ENV_NAME} or ${ENV_NAME:DEFAULT_VALUE} as field values in the table config.

Pinot instances resolve these variables at runtime.

Brackets are required when referencing an environment variable; "$ENV_NAME" is not supported.

Environment variables used without a default value in the table config must be available to all Pinot components - Controller, Broker, Server, and Minion. Otherwise, querying or consumption will be affected, depending on which service the variables are unavailable to.
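For example, a field value can fall back to a default when the variable is unset. The variable name and default below are illustrative:

```json
{
  "input.fs.prop.region": "${AWS_REGION:us-west-2}"
}
```

With this form, the field resolves to the value of AWS_REGION if it is set, and to us-west-2 otherwise, so the config remains usable even on components where the variable is not defined.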

Below is an example of setting AWS credentials as part of a table config using environment variables.

Example:

{
...
  "ingestionConfig": {
    "batchIngestionConfig": {
      "segmentIngestionType": "APPEND",
      "segmentIngestionFrequency": "DAILY",
      "batchConfigMaps": [
        {
          "inputDirURI": "s3://<my-bucket>/baseballStats/rawdata",
          "includeFileNamePattern": "glob:**/*.csv",
          "excludeFileNamePattern": "glob:**/*.tmp",
          "inputFormat": "csv",
          "outputDirURI": "s3://<my-bucket>/baseballStats/segments",
          "input.fs.className": "org.apache.pinot.plugin.filesystem.S3PinotFS",
          "input.fs.prop.region": "us-west-2",
          "input.fs.prop.accessKey": "${AWS_ACCESS_KEY}",
          "input.fs.prop.secretKey": "${AWS_SECRET_KEY}",
          "push.mode": "tar"
        }
      ],
      "segmentNameSpec": {},
      "pushSpec": {}
    }
  },
...
}

Sample configurations

Offline table

pinot-table-offline.json
"OFFLINE": {
  "tableName": "pinotTable",
  "tableType": "OFFLINE",
  "quota": {
    "maxQueriesPerSecond": 300,
    "storage": "140G"
  },
  "routing": {
    "segmentPrunerTypes": ["partition"],
    "instanceSelectorType": "replicaGroup"
  },
  "segmentsConfig": {
    "timeColumnName": "daysSinceEpoch",
    "timeType": "DAYS",
    "replication": "3",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "365",
    "segmentPushFrequency": "DAILY",
    "segmentPushType": "APPEND"
  },
  "tableIndexConfig": {
    "invertedIndexColumns": ["foo", "bar", "moo"],
    "createInvertedIndexDuringSegmentGeneration": false,
    "sortedColumn": ["pk"],
    "bloomFilterColumns": [],
    "starTreeIndexConfigs": [],
    "noDictionaryColumns": [],
    "rangeIndexColumns": [],
    "onHeapDictionaryColumns": [],
    "varLengthDictionaryColumns": [],
    "segmentPartitionConfig": {
      "columnPartitionMap": {
        "column_foo": {
          "functionName": "Murmur",
          "numPartitions": 32
        }
      }
    },
    "loadMode": "MMAP",
    "columnMinMaxValueGeneratorMode": null,
    "nullHandlingEnabled": false
  },
  "ingestionConfig": {
    "filterConfig": {
      "filterFunction": "Groovy({foo == \"VALUE1\"}, foo)"
    },
    "transformConfigs": [
    {
      "columnName": "bar",
      "transformFunction": "lower(moo)"
    },
    {
      "columnName": "hoursSinceEpoch",
      "transformFunction": "toEpochHours(millis)"
    }]
  },
  "tenants": {
    "broker": "myBrokerTenant",
    "server": "myServerTenant"
  },
  "metadata": {
    "customConfigs": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}

Real-time table

Here's an example table config for a real-time table. All the fields from the offline table config are valid for the real-time table. Additionally, real-time tables use some extra fields.

pinot-table-realtime.json
"REALTIME": {
  "tableName": "pinotTable",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "daysSinceEpoch",
    "timeType": "DAYS",
    "replication": "3",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "5",
    "completionConfig": {
      "completionMode": "DOWNLOAD"
    }
  },
  "tableIndexConfig": {
    "invertedIndexColumns": ["foo", "bar", "moo"],
    "sortedColumn": ["column1"],
    "noDictionaryColumns": ["metric1", "metric2"],
    "loadMode": "MMAP",
    "nullHandlingEnabled": false
  },
  "ingestionConfig": {
    "streamIngestionConfig": {
      "streamConfigMaps": [
      {
        "realtime.segment.flush.threshold.rows": "0",
        "realtime.segment.flush.threshold.time": "24h",
        "realtime.segment.flush.threshold.segment.size": "150M",
        "stream.kafka.broker.list": "XXXX",
        "stream.kafka.consumer.factory.class.name": "XXXX",
        "stream.kafka.consumer.prop.auto.offset.reset": "largest",
        "stream.kafka.consumer.type": "XXXX",
        "stream.kafka.decoder.class.name": "XXXX",
        "stream.kafka.decoder.prop.schema.registry.rest.url": "XXXX",
        "stream.kafka.decoder.prop.schema.registry.schema.name": "XXXX",
        "stream.kafka.hlc.zk.connect.string": "XXXX",
        "stream.kafka.topic.name": "XXXX",
        "stream.kafka.zk.broker.url": "XXXX",
        "streamType": "kafka"
      }]
    }
  },
  "tenants":{
    "broker": "myBrokerTenant",
    "server": "myServerTenant",
    "tagOverrideConfig": {}
  },
  "metadata": {}
}