Explore the Schema component in Apache Pinot, vital for defining the structure and data types of Pinot tables, enabling efficient data processing and analysis.

Each table in Pinot is associated with a schema. A schema defines:

  • Fields in the table with their data types.

  • Whether the table uses column-based or table-based null handling. For more information, see Null value support.

The schema is stored in Zookeeper along with the table configuration.

Schema naming in Pinot follows typical database table naming conventions, such as starting names with a letter, not ending with an underscore, and using only alphanumeric characters.


A schema also defines the category each column belongs to. Columns in a Pinot table fall into three categories:



Dimension columns

Dimension columns are typically used in slice-and-dice operations to answer business queries. Some operations for which dimension columns are used:

  • GROUP BY - group by one or more dimension columns along with aggregations on one or more metric columns

  • Filter clauses such as WHERE
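As an illustration, the grouping that a `GROUP BY` over a dimension column performs can be sketched in plain Python over a few hypothetical flight rows (the column names match the example schema later on this page):

```python
from collections import defaultdict

# Hypothetical rows: flightNumber is a dimension, price is a metric.
rows = [
    {"flightNumber": 101, "price": 250.0},
    {"flightNumber": 101, "price": 300.0},
    {"flightNumber": 202, "price": 120.0},
]

# Equivalent of: SELECT flightNumber, SUM(price) FROM flights GROUP BY flightNumber
totals = defaultdict(float)
for row in rows:
    totals[row["flightNumber"]] += row["price"]

print(dict(totals))  # {101: 550.0, 202: 120.0}
```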


Metric columns

Metric columns represent the quantitative data of the table. Such columns are used for aggregation. In data warehouse terminology, these can also be referred to as fact or measure columns.

Some operations for which metric columns are used:

  • Aggregations such as SUM, MIN, MAX, COUNT, and AVG

  • Filter clauses such as WHERE
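To make the aggregation operations concrete, here is a small Python sketch that applies a WHERE-style filter to a hypothetical metric column and then computes the aggregations listed above:

```python
# Hypothetical values of the price metric column.
prices = [250.0, 300.0, 120.0, 80.0]

# Filter clause: WHERE price > 100
filtered = [p for p in prices if p > 100.0]

agg = {
    "SUM": sum(filtered),
    "MIN": min(filtered),
    "MAX": max(filtered),
    "COUNT": len(filtered),
    "AVG": sum(filtered) / len(filtered),
}
print(agg)
```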


DateTime columns

DateTime columns represent time data. There can be multiple time columns in a table, but only one of them can be treated as primary. The primary time column is the one specified in the segment config. Pinot uses the primary time column to maintain the time boundary between offline and real-time data in a hybrid table, and for retention management. A primary time column is mandatory if the table's push type is APPEND, and optional if the push type is REFRESH.

Common operations that can be done on the time column:


  • Filter clauses such as WHERE
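To illustrate how the primary time column drives retention management, the following Python sketch keeps only rows whose timestamp falls inside a retention window (the 7-day window and the timestamps are assumptions for the example):

```python
# Rows keyed by a hypothetical primary time column in epoch milliseconds.
DAY_MS = 24 * 60 * 60 * 1000
now_ms = 1_700_000_000_000          # a fixed "current time" for the example
retention_ms = 7 * DAY_MS           # assumed 7-day retention window

rows = [
    {"flightNumber": 101, "millisSinceEpoch": now_ms - 2 * DAY_MS},   # recent
    {"flightNumber": 202, "millisSinceEpoch": now_ms - 10 * DAY_MS},  # expired
]

# Rows older than the retention window would be eligible for deletion.
retained = [r for r in rows if now_ms - r["millisSinceEpoch"] <= retention_ms]
print([r["flightNumber"] for r in retained])  # [101]
```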

Pinot does not enforce strict rules on which category a column belongs to; rather, the categories can be thought of as hints that allow Pinot to perform internal optimizations.

For example, metrics may be stored without a dictionary and can have a different default null value.

The categories are also relevant when doing segment merges and rollups: Pinot uses the dimension and time fields to identify the records against which to apply them.

Metrics aggregation is another example, where Pinot uses the dimension and time columns as the key and automatically aggregates the values of the metric columns.
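The rollup behavior described above can be sketched in Python: rows that share the same (dimension, time) key are merged, and their metric values are aggregated (SUM is assumed here as the aggregation function):

```python
from collections import defaultdict

# Hypothetical rows: (flightNumber, hoursSinceEpoch) forms the rollup key.
rows = [
    {"flightNumber": 101, "hoursSinceEpoch": 470000, "price": 250.0},
    {"flightNumber": 101, "hoursSinceEpoch": 470000, "price": 300.0},  # same key -> merged
    {"flightNumber": 101, "hoursSinceEpoch": 470001, "price": 100.0},
]

rolled = defaultdict(float)
for r in rows:
    key = (r["flightNumber"], r["hoursSinceEpoch"])
    rolled[key] += r["price"]  # metric values aggregated per key

print(dict(rolled))
# {(101, 470000): 550.0, (101, 470001): 100.0}
```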

For configuration details, see Schema configuration reference.

Date and time fields

Since Pinot doesn't have a dedicated DATETIME data type, you need to input time in STRING, LONG, or INT format. However, Pinot needs to convert the date into an understandable format, such as an epoch timestamp, to perform operations on it. Refer to the DateTime field spec configs for more details on supported formats.
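For example, a single calendar date can be expressed in the three formats used by the example schema on this page: epoch milliseconds (LONG), epoch hours (INT), and a yyyy-MM-dd string. A small Python sketch of the conversions:

```python
from datetime import datetime, timezone

dt = datetime(2024, 1, 15, tzinfo=timezone.utc)

millis_since_epoch = int(dt.timestamp() * 1000)   # LONG, format EPOCH (milliseconds)
hours_since_epoch = int(dt.timestamp()) // 3600   # INT, format EPOCH|HOURS
date_string = dt.strftime("%Y-%m-%d")             # STRING, SIMPLE_DATE_FORMAT|yyyy-MM-dd

print(millis_since_epoch, hours_since_epoch, date_string)
# 1705276800000 473688 2024-01-15
```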

Creating a schema

First, make sure your cluster is up and running.

Let's create a schema and put it in a JSON file. For this example, we have created a schema for flight data.

For more details on constructing a schema file, see the Schema configuration reference.

{
  "schemaName": "flights",
  "enableColumnBasedNullHandling": true,
  "dimensionFieldSpecs": [
    {
      "name": "flightNumber",
      "dataType": "LONG",
      "notNull": true
    },
    {
      "name": "tags",
      "dataType": "STRING",
      "singleValueField": false,
      "defaultNullValue": "null"
    }
  ],
  "metricFieldSpecs": [
    {
      "name": "price",
      "dataType": "DOUBLE",
      "notNull": true,
      "defaultNullValue": 0
    }
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "millisSinceEpoch",
      "dataType": "LONG",
      "format": "EPOCH",
      "granularity": "15:MINUTES"
    },
    {
      "name": "hoursSinceEpoch",
      "dataType": "INT",
      "notNull": true,
      "format": "EPOCH|HOURS",
      "granularity": "1:HOURS"
    },
    {
      "name": "dateString",
      "dataType": "STRING",
      "format": "SIMPLE_DATE_FORMAT|yyyy-MM-dd",
      "granularity": "1:DAYS"
    }
  ]
}

Then, we can upload the sample schema provided above using either a Bash command or a REST API call.

bin/pinot-admin.sh AddSchema -schemaFile flights-schema.json -exec


bin/pinot-admin.sh AddTable -schemaFile flights-schema.json -tableFile flights-table.json -exec

Check out the schema in the REST API to make sure it was successfully uploaded.
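As a sketch of the REST alternative, the schema JSON can be POSTed to the controller's /schemas endpoint. The controller address (localhost:9000) is an assumption for a local quickstart cluster, and the schema body is abbreviated here:

```python
import json
import urllib.request

# Abbreviated schema body for illustration; use the full flights schema above.
schema = {"schemaName": "flights", "dimensionFieldSpecs": [], "metricFieldSpecs": []}

req = urllib.request.Request(
    "http://localhost:9000/schemas",       # assumed local controller address
    data=json.dumps(schema).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment to actually send the request
print(req.get_method(), req.full_url)  # POST http://localhost:9000/schemas
```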
