Brokers are the components that handle Pinot queries. They accept queries from clients and forward them to the right servers. They collect the results back from the servers and consolidate them into a single response, which is sent back to the client.
Pinot Brokers are modeled as Spectators. They need to know the location of each segment of a table (and each replica of the segments) and route requests to the appropriate server that hosts the segments of the table being queried. The broker ensures that all the rows of the table are queried exactly once so as to return correct, consistent results for a query. The brokers may optimize to prune some of the segments as long as accuracy is not sacrificed. Helix provides the framework by which spectators can learn the location in which each partition of a resource (i.e. participant) resides. The brokers use this mechanism to learn the servers that host specific segments of a table.
In the case of hybrid tables, the brokers ensure that the overlap between realtime and offline segment data is queried exactly once, by performing offline and realtime federation.
Let's take this example: we have realtime data for 5 days - March 23 to March 27, and offline data has been pushed until March 25, which is 2 days behind realtime. The brokers maintain this time boundary.
Suppose we get a query to this table: select sum(metric) from table. The broker will split the query into 2 queries based on this time boundary - one for offline and one for realtime:
select sum(metric) from table_REALTIME where date >= Mar 25
select sum(metric) from table_OFFLINE where date < Mar 25
The broker then merges the results from both these queries before returning the response to the client.
Make sure you've set up Zookeeper. If you're using Docker, make sure to pull the Pinot Docker image. To start a broker
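For example, using the launcher scripts from the Pinot distribution (the Zookeeper address below is an assumption for a local setup):

```bash
bin/pinot-admin.sh StartBroker -zkAddress localhost:2181
```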
Learn about the different components and logical abstractions
This section is a reference for the definition of major components and logical abstractions used in Pinot. Please visit the Basic Concepts section to get a general overview that ties together all of the reference material in this section.
Servers host the data segments and serve queries off the data they host. There are two types of servers:
Offline: Offline servers are responsible for downloading segments from the segment store, to host and serve queries off. When a new segment is uploaded to the controller, the controller decides the servers (as many as the replication factor) that will host the new segment and notifies them to download the segment from the segment store. On receiving this notification, the servers download the segment file and load it, to serve queries off it.
Realtime: Realtime servers directly ingest from a realtime stream (such as Kafka, EventHubs). Periodically, they build segments from the data ingested in memory, based on certain thresholds. These segments are then persisted to the segment store.
Pinot Servers are modeled as Helix Participants, hosting Pinot tables (referred to as resources in helix terminology). Segments of a table are modeled as Helix partitions (of a resource). Thus, a Pinot server hosts one or more helix partitions of one or more helix resources (i.e. one or more segments of one or more tables).
Make sure you've set up Zookeeper. If you're using Docker, make sure to pull the Pinot Docker image. To start a server
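For example, using the launcher scripts (the Zookeeper address is an assumption for a local setup):

```bash
bin/pinot-admin.sh StartServer -zkAddress localhost:2181
```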
The Pinot Controller is responsible for a number of things
Controllers maintain the global metadata (e.g. configs and schemas) of the system with the help of Zookeeper which is used as the persistent metadata store.
Controllers host the Helix Controller and are responsible for managing other Pinot components (brokers, servers, minions).
They maintain the mapping of which servers are responsible for which segments. This mapping is used by the servers, to download the portion of the segments that they are responsible for. This mapping is also used by the broker to decide which servers to route the queries to.
The controller has admin endpoints for viewing, creating, updating and deleting configs, which help us manage and operate the cluster.
Controllers also have endpoints for segment uploads which are used in offline data pushes. They are responsible for initializing realtime consumption and coordination of persisting the realtime segments into the segment store periodically.
They undertake other management activities such as managing segment retention and running validations.
There can be multiple instances of Pinot controller for redundancy. If there are multiple controllers, Pinot expects that all of them are configured with the same back-end storage system so that they have a common view of the segments (e.g. NFS). Pinot can use other storage systems such as HDFS or ADLS.
Make sure you've set up Zookeeper. If you're using Docker, make sure to pull the Pinot Docker image. To start a controller
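For example, using the launcher scripts (the Zookeeper address and port are assumptions for a local setup; 9000 is the usual controller port):

```bash
bin/pinot-admin.sh StartController -zkAddress localhost:2181 -controllerPort 9000
```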
A schema is used to define the names, data types and other information for the columns of a Pinot table.
Columns in a Pinot table can be broadly categorized into three categories
Column Category
Description
Dimension
Dimension columns are typically used in slice and dice operations for answering business queries. Frequent operations done on dimension columns:
GROUP BY - group by one or more dimension columns along with aggregations on one or more metric columns
Filter processing
Metric
These columns represent quantitative data of the table. Such columns are frequently used in aggregation operations. In data warehouse terminology, these are also referred to as fact or measure columns.
Frequent operations done on metric columns:
Aggregation - SUM, MIN, MAX, COUNT, AVG etc
Filter processing
DateTime
This column represents time columns in the data. There can be multiple time columns in a table, but only one of them can be the primary time column. The primary time column is the one that is set in the table config. This primary time column is used by Pinot for maintaining the time boundary between offline and realtime data in a hybrid table, and for retention management. A primary time column is mandatory if the table's push type is APPEND, and optional if the push type is REFRESH.
Common operations done on time column:
GROUP BY
Filter processing
Time
This has been deprecated. Use DateTime column type for time columns.
This column represents a timestamp. There can be at most one time column in a table. Common operations done on time column:
GROUP BY
Filter processing
The time column is also used internally by Pinot, for maintaining the time boundary between offline and realtime data in a hybrid table and for retention management. A time column is mandatory if the table's push type is APPEND, and optional if the push type is REFRESH.
A Pinot schema is written in JSON format. Here's an example which shows all the fields of a schema
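Below is a sketch of such a schema; the schema and column names (flights, flightNumber, tags, price, millisSinceEpoch, hoursSinceEpoch, fiveMinutesSinceEpoch) are illustrative assumptions, chosen to match the fields referenced later in this section. The deprecated timeFieldSpec is included only to illustrate its shape.

```json
{
  "schemaName": "flights",
  "dimensionFieldSpecs": [
    {
      "name": "flightNumber",
      "dataType": "LONG"
    },
    {
      "name": "tags",
      "dataType": "STRING",
      "singleValueField": false,
      "defaultNullValue": "null"
    }
  ],
  "metricFieldSpecs": [
    {
      "name": "price",
      "dataType": "DOUBLE",
      "defaultNullValue": 0
    }
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "millisSinceEpoch",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "15:MINUTES"
    },
    {
      "name": "hoursSinceEpoch",
      "dataType": "INT",
      "format": "1:HOURS:EPOCH",
      "granularity": "1:HOURS"
    }
  ],
  "timeFieldSpec": {
    "incomingGranularitySpec": {
      "name": "millisSinceEpoch",
      "dataType": "LONG",
      "timeType": "MILLISECONDS"
    },
    "outgoingGranularitySpec": {
      "name": "fiveMinutesSinceEpoch",
      "dataType": "LONG",
      "timeType": "MINUTES",
      "timeUnitSize": 5
    }
  }
}
```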
The Pinot schema is composed of
schema fields
description
schemaName
Defines the name of the schema. This is usually the same as the table name. The offline and the realtime table of a hybrid table should use the same schema.
dimensionFieldSpecs
A dimensionFieldSpec is defined for each dimension column. For more details, scroll down to dimensionFieldSpec.
metricFieldSpecs
A metricFieldSpec is defined for each metric column. For more details, scroll down to metricFieldSpec.
dateTimeFieldSpec
A dateTimeFieldSpec is defined for the time columns. There can be multiple time columns. For more details, scroll down to dateTimeFieldSpec.
timeFieldSpec
Deprecated. Use dateTimeFieldSpec instead. A timeFieldSpec is defined for the time column. There can only be one time column. For more details, scroll down to timeFieldSpec.
Below is a detailed description of each type of field spec.
A dimensionFieldSpec is defined for each dimension column. Here's a list of the fields in the dimensionFieldSpec
field
description
name
Name of the dimension column
dataType
Data type of the dimension column. Can be STRING, BOOLEAN, INT, LONG, DOUBLE, FLOAT, BYTES
defaultNullValue
Represents null values in the data, since Pinot doesn't support storing null column values natively (as part of its on-disk storage format). If not specified, an internal default null value is used as listed here
singleValueField
Boolean indicating if this is a single value or a multi value column. In the example above, the dimension tags is multi-valued. This means that it can have multiple values for a particular row, say tag1, tag2, tag3. For a multi-valued column, individual rows don't necessarily need to have the same number of values. A typical use case for this would be a column such as skillSet for a person (one row in the table) that can have multiple values such as Real Estate, Mortgages.
Data Type
Internal Default Null Value
INT
Integer.MIN_VALUE
LONG
Long.MIN_VALUE
FLOAT
Float.NEGATIVE_INFINITY
DOUBLE
Double.NEGATIVE_INFINITY
STRING
"null"
BYTES
byte array of length 0
A metricFieldSpec is defined for each metric column. Here's a list of fields in the metricFieldSpec
field
description
name
Name of the metric column
dataType
Data type of the column. Can be INT, LONG, DOUBLE, FLOAT, BYTES (for specialized representations such as HLL, TDigest, etc, where the column stores byte serialized version of the value)
defaultNullValue
Represents null values in the data. If not specified, an internal default null value is used, as listed below.
Data Type
Internal Default Null Value
INT
0
LONG
0
FLOAT
0.0
DOUBLE
0.0
STRING
"null"
BYTES
byte array of length 0
A dateTimeFieldSpec is used to define time columns of the table. Here's a list of the fields in a dateTimeFieldSpec
field
description
name
Name of the date time column
dataType
Data type of the date time column. Can be STRING, INT, LONG
format
The format of the time column. The syntax of the format is timeSize:timeUnit:timeFormat
timeFormat can be either EPOCH or SIMPLE_DATE_FORMAT. If it is SIMPLE_DATE_FORMAT, the pattern string is also specified. For example:
1:MILLISECONDS:EPOCH - epoch millis
1:HOURS:EPOCH - epoch hours
1:DAYS:SIMPLE_DATE_FORMAT:yyyyMMdd - date specified like 20191018
1:HOURS:SIMPLE_DATE_FORMAT:EEE MMM dd HH:mm:ss ZZZ yyyy - date specified like Mon Aug 24 12:36:50 America/Los_Angeles 2019
granularity
The granularity in which the column is bucketed. The syntax of granularity is bucket size:bucket unit. For example, the format can be milliseconds (1:MILLISECONDS:EPOCH), but bucketed to 15 minutes, i.e. we only have one value for every 15-minute interval, in which case granularity can be specified as 15:MINUTES.
defaultNullValue
Represents null values in the data. If not specified, an internal default null value is used, as listed here. The values are the same as those used for dimensionFieldSpec.
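For instance, the millisSinceEpoch column from the sample schema above would be declared roughly like this:

```json
{
  "name": "millisSinceEpoch",
  "dataType": "LONG",
  "format": "1:MILLISECONDS:EPOCH",
  "granularity": "15:MINUTES"
}
```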
This has been deprecated. Older schemas containing timeFieldSpec will be supported. But for new schemas, use DateTimeFieldSpec instead.
A timeFieldSpec is defined for the time column. A timeFieldSpec is composed of an incomingGranularitySpec and an outgoingGranularitySpec. IncomingGranularitySpec in combination with outgoingGranularitySpec can be used to transform the time column from incoming format to the outgoing format. If both of them are specified, the segment creation process will convert the time column from the incoming format to the outgoing format. If no time column transformation is required, you can specify just the incomingGranularitySpec.
timeFieldSpec fields
Description
incomingGranularitySpec
Details of the time column in the incoming data
outgoingGranularitySpec
Details of the format to which the time column should be converted for using in Pinot
The incoming and outgoing granularitySpec are defined as:
field
description
name
Name of the time column. If incomingGranularitySpec, this is the name of the time column in the incoming data. If outgoingGranularitySpec, this is the name of the column you wish to transform it to and see in Pinot
dataType
Data type of the time column. Can be INT, LONG or STRING
timeType
Indicates the time unit. Can be one of DAYS, SECONDS, HOURS, MILLISECONDS, MICROSECONDS and NANOSECONDS
timeUnitSize
Indicates the bucket length. By default 1. E.g. in the sample above outgoing time is in fiveMinutesSinceEpoch i.e. rounded to 5 minutes buckets
timeFormat
EPOCH (millisSinceEpoch, hoursSinceEpoch etc) or SIMPLE_DATE_FORMAT (yyyyMMdd, yyyyMMdd:hhssmm etc)
Apart from these, there are some advanced fields. These are common to all field specs.
field name
description
maxLength
Max length of this column
transformFunction
Transform function to generate this column. See the transform functions section below.
virtualColumnProvider
Column value provider
Transform functions can be defined on columns in the schema. For example:
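A minimal sketch of a column generated via a transform function; the column name hoursSinceEpoch and the source column millis are assumptions, and toEpochHours is one of the inbuilt functions described below:

```json
{
  "name": "hoursSinceEpoch",
  "dataType": "LONG",
  "transformFunction": "toEpochHours(millis)"
}
```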
Currently, we have support for 2 kinds of functions
Groovy functions
Inbuilt functions
Note
Currently, the arguments must be from the source data. They cannot be columns from the Pinot schema which have been created through transformations.
Groovy functions can be defined using the syntax:
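The general shape is a Groovy script followed by the source columns it needs as arguments, roughly:

```
Groovy({groovy script}, argument1, argument2...argumentN)
```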
Here are some examples of commonly needed functions. Any valid Groovy expression can be used.
Concat firstName and lastName to get fullName
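A sketch of what this could look like in the schema (the field names firstName, lastName and fullName are assumptions):

```json
{
  "name": "fullName",
  "dataType": "STRING",
  "transformFunction": "Groovy({firstName + ' ' + lastName}, firstName, lastName)"
}
```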
Find max value in array bids
Convert timestamp from MILLISECONDS to HOURS
Simply change the name of the column from user_id to userId
If eventType is IMPRESSION, set impression to 1. Similarly for CLICK.
Store an AVRO Map in Pinot as two multi-value columns. Sort the keys, to maintain the mapping:
1) The keys of the map as map_keys
2) The values of the map as map_values
We have several inbuilt functions that can be used directly as ingestion transform functions
These are functions which enable commonly needed time transformations.
toEpochXXX
Converts from epoch milliseconds to a higher granularity.
Function name
Description
toEpochSeconds
Converts epoch millis to epoch seconds.
Usage: "transformFunction": "toEpochSeconds(millis)"
toEpochMinutes
Converts epoch millis to epoch minutes
Usage: "transformFunction": "toEpochMinutes(millis)"
toEpochHours
Converts epoch millis to epoch hours
Usage: "transformFunction": "toEpochHours(millis)"
toEpochDays
Converts epoch millis to epoch days
Usage: "transformFunction": "toEpochDays(millis)"
toEpochXXXRounded
Converts from epoch milliseconds to another granularity, rounding to the nearest rounding bucket. For example, 1588469352000 (2020-05-03 01:29:12 UTC) is 26474489 minutesSinceEpoch, and toEpochMinutesRounded(1588469352000, 10) = 26474480 (2020-05-03 01:20:00 UTC).
Function Name
Description
toEpochSecondsRounded
Converts epoch millis to epoch seconds, rounding to nearest rounding bucket
"transformFunction": "toEpochSecondsRounded(millis, 30)"
toEpochMinutesRounded
Converts epoch millis to epoch minutes, rounding to nearest rounding bucket
"transformFunction": "toEpochMinutesRounded(millis, 10)"
toEpochHoursRounded
Converts epoch millis to epoch hours, rounding to nearest rounding bucket
"transformFunction": "toEpochHoursRounded(millis, 6)"
toEpochDaysRounded
Converts epoch millis to epoch days, rounding to nearest rounding bucket
"transformFunction": "toEpochDaysRounded(millis, 7)"
fromEpochXXX
Converts from an epoch granularity to milliseconds.
Function Name
Description
fromEpochSeconds
Converts from epoch seconds to milliseconds
"transformFunction": "fromEpochSeconds(secondsSinceEpoch)"
fromEpochMinutes
Converts from epoch minutes to milliseconds
"transformFunction": "fromEpochMinutes(minutesSinceEpoch)"
fromEpochHours
Converts from epoch hours to milliseconds
"transformFunction": "fromEpochHours(hoursSinceEpoch)"
fromEpochDays
Converts from epoch days to milliseconds
"transformFunction": "fromEpochDays(daysSinceEpoch)"
Simple date format
Converts simple date format strings to milliseconds and vice versa, as per the provided pattern string.
Function name
Description
toDateTime
Converts from milliseconds to a formatted date time string, as per the provided pattern
"transformFunction": "toDateTime(millis, 'yyyy-MM-dd')"
fromDateTime
Converts a formatted date time string to milliseconds, as per the provided pattern
"transformFunction": "fromDateTime(dateTimeStr, 'EEE MMM dd HH:mm:ss ZZZ yyyy')"
Function name
Description
toJsonMapStr
Converts a JSON/Avro map to a string. This JSON map can then be queried at query time using JSON extraction functions.
"transformFunction": "toJsonMapStr(jsonMapField)"
Create a schema for your data, or see the examples for reference. Make sure you've set up the cluster.
Note: schema can also be created as part of table creation, refer to Creating a table.
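One way to upload the schema is via the pinot-admin launcher script (the file path is a placeholder):

```bash
bin/pinot-admin.sh AddSchema -schemaFile /path/to/my-schema.json -exec
```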
Check out the schema in the Rest API to make sure it was successfully uploaded
Pinot Minion is a new component which leverages the Helix Task Framework. It can be attached to an existing Pinot cluster and then executes tasks as provided by the controller. It is a single, generic place for running background jobs. Minions help offload computationally intensive tasks, such as adding indexes to segments and merging segments, from other components.
A tenant is a logical component, defined as a group of server/broker nodes with the same Helix tag.
In order to support multi-tenancy, Pinot has first class support for tenants. Every table is associated with a server tenant and a broker tenant. This controls the nodes that will be used by this table as servers and brokers. This allows all tables belonging to a particular use case to be grouped under a single tenant name.
The concept of tenants is very important when multiple use cases are using Pinot and there is a need to provide quotas or some sort of isolation across tenants. For example, consider we have two tables, Table A and Table B, in the same Pinot cluster.
We can configure Table A with server tenant Tenant A and Table B with server tenant Tenant B. We can tag some of the server nodes for Tenant A and some for Tenant B. This will ensure that segments of Table A only reside on servers tagged with Tenant A, and segments of Table B only reside on servers tagged with Tenant B. The same isolation can be achieved at the broker level, by configuring broker tenants for the tables.
No need to create separate clusters for every table or use case!
This tenant is defined in the tenants section of the table config.
This section contains 2 main fields, broker and server, which decide the tenants used for the broker and server components of this table.
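A sketch of the tenants section that the description below refers to (the tenant names are placeholders):

```json
"tenants": {
  "broker": "brokerTenantName",
  "server": "serverTenantName"
}
```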
In the above example,
The table will be served by brokers that have been tagged as brokerTenantName_BROKER in Helix.
If this were an offline table, the offline segments for the table will be hosted on Pinot servers tagged in Helix as serverTenantName_OFFLINE.
If this were a realtime table, the realtime segments (both consuming as well as completed ones) will be hosted on Pinot servers tagged in Helix as serverTenantName_REALTIME.
Here's a sample broker tenant config. This will create a broker tenant sampleBrokerTenant by tagging 3 untagged broker nodes as sampleBrokerTenant_BROKER.
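A sketch of what this broker tenant config could look like (the exact field names may vary by version):

```json
{
  "tenantRole": "BROKER",
  "tenantName": "sampleBrokerTenant",
  "numberOfInstances": 3
}
```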
To create this tenant, use the following command. The creation will fail if the number of untagged broker nodes is less than numberOfInstances.
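For example, assuming the config above is saved as sample-broker-tenant.json and the controller runs at localhost:9000, it can be posted to the controller's tenants endpoint:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d @sample-broker-tenant.json \
  http://localhost:9000/tenants
```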
Follow instructions in Getting Pinot to get Pinot locally, and then
Check out the tenant in the Rest API to make sure it was successfully created.
Here's a sample server tenant config. This will create a server tenant sampleServerTenant by tagging 1 untagged server node as sampleServerTenant_OFFLINE and 1 untagged server node as sampleServerTenant_REALTIME.
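A sketch of what this server tenant config could look like (the exact field names may vary by version):

```json
{
  "tenantRole": "SERVER",
  "tenantName": "sampleServerTenant",
  "offlineInstances": 1,
  "realtimeInstances": 1
}
```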
To create this tenant, use the following command. The creation will fail if the number of untagged server nodes is less than offlineInstances + realtimeInstances.
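As with the broker tenant, assuming the config above is saved as sample-server-tenant.json and the controller runs at localhost:9000:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d @sample-server-tenant.json \
  http://localhost:9000/tenants
```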
Follow instructions in Getting Pinot to get Pinot locally, and then
Check out the tenant in the Rest API to make sure it was successfully created.
A cluster is a set of nodes comprising servers, brokers, controllers and minions.
Pinot leverages Apache Helix for cluster management. Helix is a cluster management framework to manage replicated, partitioned resources in a distributed system. Helix uses Zookeeper to store cluster state and metadata.
Briefly, Helix divides nodes into three logical components based on their responsibilities
Participant: The nodes that host distributed, partitioned resources.
Spectator: The nodes that observe the current state of each Participant and use that information to access the resources. Spectators are notified of state changes in the cluster (state of a participant, or that of a partition in a participant).
Controller: The node that observes and controls the Participant nodes. It is responsible for coordinating all transitions in the cluster and ensuring that state constraints are satisfied while maintaining cluster stability.
Pinot Servers are modeled as Participants, more details about server nodes can be found in Server. Pinot Brokers are modeled as Spectators, more details about broker nodes can be found in Broker. Pinot Controllers are modeled as Controllers, more details about controller nodes can be found in Controller.
Another way to visualize the cluster is a logical view, wherein a cluster contains tenants, tenants contain tables, and tables contain segments.
Typically, there is only one cluster per environment/data center. There is no need to create multiple Pinot clusters since Pinot supports the concept of tenants. At LinkedIn, the largest Pinot cluster consists of 1000+ nodes.
To set up a Pinot cluster, we need to first start Zookeeper.
Create an isolated bridge network in docker
Start Zookeeper in daemon.
Start ZKUI to browse Zookeeper data at http://localhost:9090.
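A sketch of these steps using Docker; the network name, image tags and ports are assumptions for a local setup:

```bash
# Create an isolated bridge network
docker network create -d bridge pinot-demo

# Start Zookeeper as a daemon on that network
docker run --name pinot-zookeeper --network pinot-demo -d -p 2181:2181 zookeeper:3.5.6

# (Optional) Start ZKUI to browse Zookeeper data at http://localhost:9090
docker run --name zkui --network pinot-demo -d -p 9090:9090 \
  -e ZK_SERVER=pinot-zookeeper:2181 qnib/plain-zkui:latest
```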
Download Pinot Distribution using instructions in Download
Install zooinspector to view the data in Zookeeper, and connect to localhost:2181
Once we've started Zookeeper, we can start other components to join this cluster. If you're using Docker, pull the latest apachepinot/pinot image.
You can try out pre-built Pinot all-in-one docker image.
(Optional) You can also follow the instructions here to build your own images.
To start other components to join the cluster
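For example, using the launcher scripts (Zookeeper assumed at localhost:2181, controller on the usual port 9000):

```bash
bin/pinot-admin.sh StartController -zkAddress localhost:2181 -controllerPort 9000
bin/pinot-admin.sh StartBroker -zkAddress localhost:2181
bin/pinot-admin.sh StartServer -zkAddress localhost:2181
```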
Explore your cluster via Pinot Data Explorer
Pinot has the concept of a table, which is a logical abstraction that refers to a collection of related data. Pinot has a distributed architecture and scales horizontally. Pinot expects the size of a table to grow infinitely over time. In order to achieve this, the entire data needs to be distributed across multiple nodes. Pinot achieves this by breaking the data into smaller chunks known as segments (similar to shards/partitions in relational databases). Segments can also be seen as time-based partitions.
Thus, a segment is a horizontal shard representing a chunk of table data with some number of rows. The segment stores data for all columns of the table. Each segment packs the data in a columnar fashion, along with the dictionaries and indices for the columns. The segment is laid out in a columnar format so that it can be directly mapped into memory for serving queries.
Columns may be single or multi-valued. Column types may be STRING, INT, LONG, FLOAT, DOUBLE or BYTES. Columns may be declared to be metric or dimension (or specifically as a time dimension) in the schema. Columns can have a default null value. For example, the default null value of an integer column can be 0. Note: The default value of a bytes column has to be hex-encoded before adding it to the schema.
Pinot uses dictionary encoding to store values as a dictionary ID. Columns may be configured to be “no-dictionary” column in which case raw values are stored. Dictionary IDs are encoded using minimum number of bits for efficient storage (e.g. a column with cardinality of 3 will use only 2 bits for each dictionary ID).
There is a forward index built for each column and compressed appropriately for efficient memory use. In addition, optional inverted indices can be configured for any set of columns. Inverted indices, while they take up more storage, offer better query performance. Specialized indexes like the Star-Tree index are also supported. Check out Indexing for more details.
Once the table is configured, we can load some data. Loading data involves generating pinot segments from raw data and pushing them to the pinot cluster. Data can be loaded in batch mode or streaming mode. See ingestion overview page for details.
Below are instructions to generate and push segments to Pinot via standalone scripts. For a production setup, you should use frameworks such as Hadoop or Spark. See this page for more details on setting up Data Ingestion Jobs.
To generate a segment, we need to first create a job spec yaml file. JobSpec yaml file has all the information regarding data format, input data location and pinot cluster coordinates. Note that this assumes that the controller is RUNNING to fetch the table config and schema. If not, you will have to configure the spec to point at their location.
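A sketch of such a job spec, assuming Avro input on the local filesystem and a table named myTable; the paths, names and runner class names are illustrative and may vary by version:

```yaml
executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: 'file:///path/to/input'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 'file:///path/to/output'
overwriteOutput: true
pinotFSSpecs:
  - scheme: file
    className: org.apache.pinot.spi.filesystem.LocalPinotFS
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'myTable'
  schemaURI: 'http://localhost:9000/tables/myTable/schema'
  tableConfigURI: 'http://localhost:9000/tables/myTable'
pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'
pushJobSpec:
  pushAttempts: 2
  pushRetryIntervalMillis: 1000
```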
where,
Top level field
Description
executionFrameworkSpec
jobType
Pinot ingestion job type. Supported job types are:
SegmentCreation - only create segment
SegmentTarPush - only upload segments
SegmentUriPush - only push segment download URIs to the controller
SegmentCreationAndTarPush - create and upload segment
SegmentCreationAndUriPush - create segments and push their download URIs
inputDirURI
Root directory of input data, expected to have scheme configured in PinotFS.
includeFileNamePattern
Include file name pattern, supported glob pattern. E.g. 'glob:*.avro' will include all avro files just under the inputDirURI (not sub directories); 'glob:**/*.avro' will include all the avro files under inputDirURI recursively.
excludeFileNamePattern
Exclude file name pattern, supported glob pattern. Similar usage as includeFileNamePattern.
outputDirURI
Root directory of output segments, expected to have scheme configured in PinotFS.
overwriteOutput
Overwrite output segments if they already exist.
pinotFSSpecs
recordReaderSpec
tableSpec
segmentNameGeneratorSpec
pinotClusterSpecs
pushJobSpec
field
Description
name
execution framework name
segmentGenerationJobRunnerClassName
Class name that implements the org.apache.pinot.spi.batch.ingestion.runner.SegmentGenerationJobRunner interface.
segmentTarPushJobRunnerClassName
Class name that implements the org.apache.pinot.spi.batch.ingestion.runner.SegmentTarPushJobRunner interface.
segmentUriPushJobRunnerClassName
Class name that implements the org.apache.pinot.spi.batch.ingestion.runner.SegmentUriPushJobRunner interface.
extraConfigs
Map of extra configs for execution framework
field
description
scheme
used to identify a PinotFS. E.g. local, hdfs, dbfs, etc
className
Class name used to create the PinotFS instance. E.g. org.apache.pinot.spi.filesystem.LocalPinotFS is used for the local filesystem, org.apache.pinot.plugin.filesystem.HadoopPinotFS is used for HDFS.
configs
configs used to init PinotFS instance
field
description
dataFormat
Record data format, e.g. 'avro', 'parquet', 'orc', 'csv', 'json', 'thrift' etc.
className
Corresponding RecordReader class name. E.g.
org.apache.pinot.plugin.inputformat.avro.AvroRecordReader
org.apache.pinot.plugin.inputformat.csv.CSVRecordReader
org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader
org.apache.pinot.plugin.inputformat.json.JsonRecordReader
org.apache.pinot.plugin.inputformat.orc.OrcRecordReader
org.apache.pinot.plugin.inputformat.thrift.ThriftRecordReader
configClassName
Corresponding RecordReaderConfig class name, it's mandatory for CSV and Thrift file format. E.g.
org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig
org.apache.pinot.plugin.inputformat.thrift.ThriftRecordReaderConfig
configs
Used to init the RecordReaderConfig class; this config is required for the CSV and Thrift data formats.
field
description
tableName
table name
schemaURI
defines where to read the table schema, supports PinotFS or HTTP. E.g.
hdfs://path/to/table_schema.json
http://localhost:9000/tables/myTable/schema
tableConfigURI
defines where to read the table config. Supports using PinotFS or HTTP. E.g.
hdfs://path/to/table_config.json
http://localhost:9000/tables/myTable
field
description
type
Supported types are simple and normalizedDate.
configs
configs to init SegmentNameGenerator
field
description
controllerURI
used to fetch table/schema information and data push.
E.g. http://localhost:9000
field
description
pushAttempts
number of attempts for push job, default is 1, which means no retry.
pushRetryIntervalMillis
Retry wait time in milliseconds; default is 1 second.
pushParallelism
push job parallelism, default is 1
To create and push the segment in one go, use
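For example (the job spec path is a placeholder):

```bash
bin/pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /path/to/job-spec.yaml
```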
Alternatively, you can separately create and then push, by changing the jobType to SegmentCreation or SegmentTarPush.
The ingestion job spec supports templating with Groovy syntax.
This is convenient for users who want to generate one ingestion job template file and schedule it on a daily basis, with extra parameters updated daily.
E.g. users can set inputDirURI with parameters to indicate the date, so that the ingestion job only processes the data for a particular date.
Below is an example to specify the date templating for input and output path.
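For instance, the input and output paths in the job spec could be templated like this (paths are placeholders):

```yaml
inputDirURI: 'file:///path/to/input/${year}/${month}/${day}'
outputDirURI: 'file:///path/to/output/${year}/${month}/${day}'
```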
Then specify the values of ${year}, ${month}, ${day} when kicking off the ingestion job with the -values argument, e.g. -values key1=value1 key2=value2 ...
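For example, to run the job for 2014-01-03 (assuming the templated job spec above):

```bash
bin/pinot-admin.sh LaunchDataIngestionJob \
  -jobSpecFile /path/to/job-spec.yaml \
  -values year=2014 month=01 day=03
```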
This ingestion job will only generate segments for the date 2014-01-03.
Prerequisites
Below is an example of how to publish sample data to your stream. As soon as data is available to the realtime stream, it starts getting consumed by the realtime servers
Run below command to stream JSON data into Kafka topic: flights-realtime
A table is a logical abstraction to refer to a collection of related data. It consists of columns and rows (documents).
Data in Pinot tables is sharded into segments. A Pinot table is modeled as a Helix resource. Each segment of a table is modeled as a Helix Partition.
A table is typically associated with a schema, which is used to define the names, data types and other information of the columns of the table.
There are 3 types of Pinot tables
Table type
Description
Offline
Offline tables ingest pre-built pinot-segments from external data stores.
Realtime
Realtime tables ingest data from streams (such as Kafka) and build segments.
Hybrid
A hybrid Pinot table has both realtime as well as offline tables under the hood.
Note that the query does not know of the existence of offline or realtime tables. It only specifies the table name in the query. For example, regardless of whether we have an offline table myTable_OFFLINE, a realtime table myTable_REALTIME, or a hybrid table containing both of these, the query will simply use myTable, as in select count(*) from myTable.
A table config file is used to define the table properties, such as name, type, indexing, routing, retention etc. It is written in JSON format, and stored in the property store in Zookeeper, along with the table schema.
Here's an example table config for an offline table
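A sketch of an offline table config; the column names, tenant names and values are illustrative, chosen to line up with the quota example discussed below:

```json
{
  "tableName": "pinotTable",
  "tableType": "OFFLINE",
  "quota": {
    "maxQueriesPerSecond": "300",
    "storage": "140G"
  },
  "segmentsConfig": {
    "schemaName": "pinotTable",
    "timeColumnName": "daysSinceEpoch",
    "timeType": "DAYS",
    "replication": "3",
    "retentionTimeUnit": "DAYS",
    "retentionTimeValue": "365",
    "segmentPushType": "APPEND"
  },
  "tableIndexConfig": {
    "invertedIndexColumns": ["foo", "bar"],
    "loadMode": "MMAP"
  },
  "tenants": {
    "broker": "myBrokerTenant",
    "server": "myServerTenant"
  },
  "metadata": {
    "customConfigs": {
      "key": "value"
    }
  }
}
```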
We will now discuss each section of the table config in detail.
Top level field
Description
tableName
Specifies the name of the table. Should only contain alpha-numeric characters, hyphens ('-'), or underscores ('_'). (Using a double underscore ('__') is not allowed; it is reserved for other features within Pinot.)
tableType
Defines the table type - OFFLINE for offline tables, REALTIME for realtime tables. A hybrid table is essentially 2 table configs, one of each type, with the same table name.
quota
routing
segmentsConfig
tableIndexConfig
tenants
metadata
This section is for keeping custom configs, which are expressed as key value pairs.
quota fields
Description
storage
The maximum storage space the table is allowed to use, before replication. For example, in the above table, the storage is 140G and replication is 3. Therefore, the maximum storage the table is allowed to use is 140*3=420G. The space used by the table is calculated by adding up the sizes of all segments from every server hosting this table. Once this limit is reached, offline segment push throws a 403 exception with the message: Quota check failed for segment: segment_0 of table: pinotTable.
maxQueriesPerSecond
The maximum queries per second allowed to execute on this table. If query volume exceeds this, a 429 exception is returned with a message such as: Request 123 exceeds query quota for table:pinotTable, query:select count(*) from pinotTable