Segment
Pinot has the concept of a table, which is a logical abstraction that refers to a collection of related data. Pinot has a distributed architecture and scales horizontally. Pinot expects the size of a table to grow infinitely over time. To support this, the data needs to be distributed across multiple nodes.
Pinot achieves this by breaking the data into smaller chunks known as segments (similar to shards/partitions in relational databases). Segments can be seen as time-based partitions.
Thus, a segment is a horizontal shard representing a chunk of table data with some number of rows. The segment stores data for all columns of the table. Each segment packs the data in a columnar fashion, along with the dictionaries and indices for the columns. The segment is laid out in a columnar format so that it can be directly mapped into memory for serving queries.
Columns can be single- or multi-valued, and the following types are supported: STRING, BOOLEAN, INT, LONG, FLOAT, DOUBLE, TIMESTAMP, and BYTES. The BIG_DECIMAL data type is supported as single-valued only.
Columns may be declared to be metric or dimension (or specifically as a time dimension) in the schema. Columns can have default null values. For example, the default null value of an integer column can be 0. The default value for BYTES columns must be hex-encoded before it's added to the schema.
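As an illustration, a schema declaring dimension, metric, and time columns with a default null value might look like the following (the table and column names here are hypothetical; the structure follows the Pinot schema format):

```json
{
  "schemaName": "exampleStats",
  "dimensionFieldSpecs": [
    {"name": "playerName", "dataType": "STRING"},
    {"name": "teamIDs", "dataType": "STRING", "singleValueField": false}
  ],
  "metricFieldSpecs": [
    {"name": "hits", "dataType": "INT", "defaultNullValue": 0}
  ],
  "dateTimeFieldSpecs": [
    {"name": "gameTime", "dataType": "LONG", "format": "1:MILLISECONDS:EPOCH", "granularity": "1:DAYS"}
  ]
}
```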
Pinot uses dictionary encoding to store values as dictionary IDs. Columns may be configured as "no-dictionary" columns, in which case raw values are stored. Dictionary IDs are encoded using the minimum number of bits for efficient storage (e.g., a column with a cardinality of 3 will use only 2 bits for each dictionary ID).
A forward index is built for each column and compressed for efficient memory use. In addition, you can optionally configure inverted indices for any set of columns. Inverted indices take up more storage, but improve query performance. Specialized indexes like Star-Tree index are also supported. For more details, see Indexing.
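For example, no-dictionary columns and inverted index columns are declared per column in the tableIndexConfig section of the table config (the column names below are illustrative):

```json
{
  "tableIndexConfig": {
    "invertedIndexColumns": ["playerName", "teamID"],
    "noDictionaryColumns": ["description"]
  }
}
```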

Creating a segment

Once the table is configured, we can load some data. Loading data involves generating Pinot segments from raw data and pushing them to the Pinot cluster. Data can be loaded in batch mode or streaming mode. For more details, see the ingestion overview page.

Load Data in Batch

Prerequisites

Below are instructions to generate and push segments to Pinot via standalone scripts. For a production setup, you should use frameworks such as Hadoop or Spark. For more details on setting up data ingestion jobs, see Import Data.

Job Spec YAML

To generate a segment, we need to first create a job spec YAML file. This file contains all the information regarding data format, input data location, and Pinot cluster coordinates. Note that this assumes the controller is running so that the table config and schema can be fetched. If not, you will have to configure the spec to point at their location. For full configurations, see Ingestion Job Spec.
job-spec.yml

```yaml
executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'

jobType: SegmentCreationAndTarPush
inputDirURI: 'examples/batch/baseballStats/rawdata'
includeFileNamePattern: 'glob:**/*.csv'
excludeFileNamePattern: 'glob:**/*.tmp'
outputDirURI: 'examples/batch/baseballStats/segments'
overwriteOutput: true

pinotFSSpecs:
  - scheme: file
    className: org.apache.pinot.spi.filesystem.LocalPinotFS

recordReaderSpec:
  dataFormat: 'csv'
  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
  configs:

tableSpec:
  tableName: 'baseballStats'
  schemaURI: 'http://localhost:9000/tables/baseballStats/schema'
  tableConfigURI: 'http://localhost:9000/tables/baseballStats'

segmentNameGeneratorSpec:

pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'

pushJobSpec:
  pushParallelism: 2
  pushAttempts: 2
  pushRetryIntervalMillis: 1000
```

Create and push segment

To create and push the segment in one go, use one of the following:
Docker

```shell
docker run \
    --network=pinot-demo \
    --name pinot-data-ingestion-job \
    ${PINOT_IMAGE} LaunchDataIngestionJob \
    -jobSpecFile examples/docker/ingestion-job-specs/airlineStats.yaml
```
Sample Console Output

```
SegmentGenerationJobSpec:
!!org.apache.pinot.spi.ingestion.batch.spec.SegmentGenerationJobSpec
excludeFileNamePattern: null
executionFrameworkSpec: {extraConfigs: null, name: standalone, segmentGenerationJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner,
  segmentTarPushJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner,
  segmentUriPushJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner}
includeFileNamePattern: glob:**/*.avro
inputDirURI: examples/batch/airlineStats/rawdata
jobType: SegmentCreationAndTarPush
outputDirURI: examples/batch/airlineStats/segments
overwriteOutput: true
pinotClusterSpecs:
- {controllerURI: 'http://pinot-controller:9000'}
pinotFSSpecs:
- {className: org.apache.pinot.spi.filesystem.LocalPinotFS, configs: null, scheme: file}
pushJobSpec: {pushAttempts: 2, pushParallelism: 1, pushRetryIntervalMillis: 1000,
  segmentUriPrefix: null, segmentUriSuffix: null}
recordReaderSpec: {className: org.apache.pinot.plugin.inputformat.avro.AvroRecordReader,
  configClassName: null, configs: null, dataFormat: avro}
segmentNameGeneratorSpec: null
tableSpec: {schemaURI: 'http://pinot-controller:9000/tables/airlineStats/schema',
  tableConfigURI: 'http://pinot-controller:9000/tables/airlineStats', tableName: airlineStats}

Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
Finished building StatsCollector!
Collected stats for 403 documents
Created dictionary for INT column: FlightNum with cardinality: 386, range: 14 to 7389
Using fixed bytes value dictionary for column: Origin, size: 294
Created dictionary for STRING column: Origin with cardinality: 98, max length in bytes: 3, range: ABQ to VPS
Created dictionary for INT column: Quarter with cardinality: 1, range: 1 to 1
Created dictionary for INT column: LateAircraftDelay with cardinality: 50, range: -2147483648 to 303
......
......
Pushing segment: airlineStats_OFFLINE_16085_16085_29 to location: http://pinot-controller:9000 for table airlineStats
Sending request: http://pinot-controller:9000/v2/segments?tableName=airlineStats to controller: a413b0013806, version: Unknown
Response for pushing table airlineStats segment airlineStats_OFFLINE_16085_16085_29 to location http://pinot-controller:9000 - 200: {"status":"Successfully uploaded segment: airlineStats_OFFLINE_16085_16085_29 of table: airlineStats"}
Pushing segment: airlineStats_OFFLINE_16084_16084_30 to location: http://pinot-controller:9000 for table airlineStats
Sending request: http://pinot-controller:9000/v2/segments?tableName=airlineStats to controller: a413b0013806, version: Unknown
Response for pushing table airlineStats segment airlineStats_OFFLINE_16084_16084_30 to location http://pinot-controller:9000 - 200: {"status":"Successfully uploaded segment: airlineStats_OFFLINE_16084_16084_30 of table: airlineStats"}
```
Using launcher scripts

```shell
bin/pinot-admin.sh LaunchDataIngestionJob \
    -jobSpecFile examples/batch/airlineStats/ingestionJobSpec.yaml
```
Alternatively, you can separately create and then push, by changing the jobType to SegmentCreation or SegmentTarPush.
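For instance, a two-phase setup would run the same job spec twice with different job types (sketch only; everything else in the spec stays the same):

```yaml
# First run: generate segment tars only
jobType: SegmentCreation

# Second run (separate invocation): push the generated tars
# jobType: SegmentTarPush
```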

Templating Ingestion Job Spec

The ingestion job spec supports templating with Groovy syntax.
This is convenient if you want to generate one ingestion job template file and schedule it on a daily basis with extra parameters updated per run.
For example, you could parameterize inputDirURI with the date, so that the ingestion job only processes the data for a particular date. Below is an example that templates the date for the input and output directories.
```yaml
inputDirURI: 'examples/batch/airlineStats/rawdata/${year}/${month}/${day}'
outputDirURI: 'examples/batch/airlineStats/segments/${year}/${month}/${day}'
```
You can pass in values for ${year}, ${month}, and ${day} when kicking off the ingestion job with `-values key1=value1 key2=value2 ...`:
Docker

```shell
docker run \
    --network=pinot-demo \
    --name pinot-data-ingestion-job \
    ${PINOT_IMAGE} LaunchDataIngestionJob \
    -jobSpecFile examples/docker/ingestion-job-specs/airlineStats.yaml \
    -values year=2014 month=01 day=03
```
This ingestion job only generates segments for the date 2014-01-03.
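As a sketch of how a daily schedule might compute these values, the wrapper below splits a date into the template parameters (the script, the fixed example date, and the use of GNU `date` are illustrative, not part of Pinot; the final `echo` prints the command that a real script would execute):

```shell
#!/bin/sh
# Date to process; a daily cron job might instead use: RUN_DATE=$(date -d "yesterday" +%F)
RUN_DATE="2014-01-03"

# Split the date into the parameters expected by the templated job spec.
YEAR=$(date -d "$RUN_DATE" +%Y)
MONTH=$(date -d "$RUN_DATE" +%m)
DAY=$(date -d "$RUN_DATE" +%d)

# Print the launcher invocation (drop the echo to actually run it).
echo bin/pinot-admin.sh LaunchDataIngestionJob \
    -jobSpecFile examples/batch/airlineStats/ingestionJobSpec.yaml \
    -values year="$YEAR" month="$MONTH" day="$DAY"
```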

Load Data in Streaming

Prerequisites
Below is an example of how to publish sample data to your stream. As soon as data is available to the realtime stream, it starts getting consumed by the realtime servers.

Kafka

Docker

Run the command below to stream JSON data into the Kafka topic flights-realtime:

```shell
docker run \
    --network pinot-demo \
    --name=loading-airlineStats-data-to-kafka \
    ${PINOT_IMAGE} StreamAvroIntoKafka \
    -avroFile examples/stream/airlineStats/sample_data/airlineStats_data.avro \
    -kafkaTopic flights-realtime -kafkaBrokerList kafka:9092 -zkAddress pinot-zookeeper:2181/kafka
```
Using launcher scripts

Run the command below to stream JSON data into the Kafka topic flights-realtime:

```shell
bin/pinot-admin.sh StreamAvroIntoKafka \
    -avroFile examples/stream/airlineStats/sample_data/airlineStats_data.avro \
    -kafkaTopic flights-realtime -kafkaBrokerList localhost:19092 -zkAddress localhost:2191/kafka
```