Running Pinot in Production

Requirements

You will need the following in order to run Pinot in production:
    Hardware for controllers/brokers/servers, sized for your load
    A working ZooKeeper installation that Pinot can use. We recommend setting aside a path within ZooKeeper and including that path in pinot.controller.zkStr. Pinot will create its own cluster under this path (the cluster name is determined by pinot.controller.helixClusterName).
    Shared storage mounted on controllers (if you plan to have multiple controllers for the same cluster), or alternatively an implementation of PinotFS that the Pinot hosts have access to.
    HTTP load balancers (or another mechanism) for spraying queries across brokers
    HTTP load balancers (or another mechanism) for distributing controller requests, such as segment pushes and other controller API calls
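For illustration, the ZooKeeper settings described above might look like the following in a controller configuration file. The host names, the /pinot chroot, and the cluster name are placeholder values:

```properties
# Placeholder hosts and chroot; Pinot creates its Helix cluster under /pinot.
pinot.controller.zkStr=zk1.example.com:2181,zk2.example.com:2181/pinot
pinot.controller.helixClusterName=PinotCluster
```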

Deploying Pinot

In general, it is best to deploy Pinot components in a specific order. Deploying in this order ensures that, if a release introduces protocol changes or other significant differences, the rollout proceeds predictably and failures caused by those changes can be avoided.
The ordering is as follows:
    1. pinot-controller
    2. pinot-broker
    3. pinot-server
    4. pinot-minion
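As a sketch, a scripted rollout following this order might look like the loop below. It only prints the pinot-admin.sh start commands rather than running them, and the ZooKeeper address and cluster name are placeholder values:

```shell
# Print the start commands in the recommended deployment order.
# ZK_ADDR and CLUSTER are placeholder values for this sketch.
ZK_ADDR="zk1.example.com:2181/pinot"
CLUSTER="PinotCluster"
for component in StartController StartBroker StartServer StartMinion; do
  echo "pinot-admin.sh ${component} -zkAddress ${ZK_ADDR} -clusterName ${CLUSTER}"
  # In a real rollout, run each command and wait for the component to come up
  # healthy before deploying the next one.
done
```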

Managing Pinot

Pinot provides a web-based management console and a command-line utility (pinot-admin.sh) to help provision and manage Pinot clusters.

Pinot Management Console

The web-based management console allows operations on tables, tenants, segments, and schemas. You can access the console via http://controller-host:port/help. The console also allows you to enter queries for interactive debugging. Here are some screenshots from the console.
Listing all the schemas in the Pinot cluster:
Rebalancing segments of a table:

Command line utility (pinot-admin.sh)

The command line utility (pinot-admin.sh) can be generated by running mvn install package -DskipTests -Pbin-dist in the directory in which you checked out Pinot.
Here is an example of invoking the command to create a Pinot segment:
$ ./pinot-distribution/target/apache-pinot-0.8.0-SNAPSHOT-bin/apache-pinot-0.8.0-SNAPSHOT-bin/bin/pinot-admin.sh CreateSegment -dataDir /Users/host1/Desktop/test/ -format CSV -outDir /Users/host1/Desktop/test2/ -tableName baseballStats -segmentName baseballStats_data -overwrite -schemaFile ./pinot-distribution/target/apache-pinot-0.8.0-SNAPSHOT-bin/apache-pinot-0.8.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
Executing command: CreateSegment -generatorConfigFile null -dataDir /Users/host1/Desktop/test/ -format CSV -outDir /Users/host1/Desktop/test2/ -overwrite true -tableName baseballStats -segmentName baseballStats_data -timeColumnName null -schemaFile ./pinot-distribution/target/apache-pinot-0.8.0-SNAPSHOT-bin/apache-pinot-0.8.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json -readerConfigFile null -enableStarTreeIndex false -starTreeIndexSpecFile null -hllSize 9 -hllColumns null -hllSuffix _hll -numThreads 1
Accepted files: [/Users/host1/Desktop/test/baseballStats_data.csv]
Finished building StatsCollector!
Collected stats for 97889 documents
Created dictionary for INT column: homeRuns with cardinality: 67, range: 0 to 73
Created dictionary for INT column: playerStint with cardinality: 5, range: 1 to 5
Created dictionary for INT column: groundedIntoDoublePlays with cardinality: 35, range: 0 to 36
Created dictionary for INT column: numberOfGames with cardinality: 165, range: 1 to 165
Created dictionary for INT column: AtBatting with cardinality: 699, range: 0 to 716
Created dictionary for INT column: stolenBases with cardinality: 114, range: 0 to 138
Created dictionary for INT column: tripples with cardinality: 32, range: 0 to 36
Created dictionary for INT column: hitsByPitch with cardinality: 41, range: 0 to 51
Created dictionary for STRING column: teamID with cardinality: 149, max length in bytes: 3, range: ALT to WSU
Created dictionary for INT column: numberOfGamesAsBatter with cardinality: 166, range: 0 to 165
Created dictionary for INT column: strikeouts with cardinality: 199, range: 0 to 223
Created dictionary for INT column: sacrificeFlies with cardinality: 20, range: 0 to 19
Created dictionary for INT column: caughtStealing with cardinality: 36, range: 0 to 42
Created dictionary for INT column: baseOnBalls with cardinality: 154, range: 0 to 232
Created dictionary for STRING column: playerName with cardinality: 11976, max length in bytes: 43, range: to Zoilo Casanova
Created dictionary for INT column: doules with cardinality: 64, range: 0 to 67
Created dictionary for STRING column: league with cardinality: 7, max length in bytes: 2, range: AA to UA
Created dictionary for INT column: yearID with cardinality: 143, range: 1871 to 2013
Created dictionary for INT column: hits with cardinality: 250, range: 0 to 262
Created dictionary for INT column: runsBattedIn with cardinality: 175, range: 0 to 191
Created dictionary for INT column: G_old with cardinality: 166, range: 0 to 165
Created dictionary for INT column: sacrificeHits with cardinality: 54, range: 0 to 67
Created dictionary for INT column: intentionalWalks with cardinality: 45, range: 0 to 120
Created dictionary for INT column: runs with cardinality: 167, range: 0 to 192
Created dictionary for STRING column: playerID with cardinality: 18107, max length in bytes: 9, range: aardsda01 to zwilldu01
Start building IndexCreator!
Finished records indexing in IndexCreator!
Finished segment seal!
Converting segment: /Users/host1/Desktop/test2/baseballStats_data_0 to v3 format
v3 segment location for segment: baseballStats_data_0 is /Users/host1/Desktop/test2/baseballStats_data_0/v3
Deleting files in v1 segment directory: /Users/host1/Desktop/test2/baseballStats_data_0
Driver, record read time : 369
Driver, stats collector time : 0
Driver, indexing time : 373
Here is an example of executing a query on a Pinot table:
$ ./pinot-distribution/target/apache-pinot-0.8.0-SNAPSHOT-bin/apache-pinot-0.8.0-SNAPSHOT-bin/bin/pinot-admin.sh PostQuery -query "select count(*) from baseballStats"
Executing command: PostQuery -brokerHost [broker_host] -brokerPort [broker_port] -query select count(*) from baseballStats
Result: {"aggregationResults":[{"function":"count_star","value":"97889"}],"exceptions":[],"numServersQueried":1,"numServersResponded":1,"numSegmentsQueried":1,"numSegmentsProcessed":1,"numSegmentsMatched":1,"numDocsScanned":97889,"numEntriesScannedInFilter":0,"numEntriesScannedPostFilter":0,"numGroupsLimitReached":false,"totalDocs":97889,"timeUsedMs":107,"segmentStatistics":[],"traceInfo":{}}
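The same query can also be sent to a broker directly over HTTP. The sketch below assumes the broker's SQL endpoint (/query/sql); broker-host and port 8099 are placeholder values for your own broker address:

```shell
# Build the JSON payload and POST it to the broker's SQL query endpoint.
# broker-host:8099 is a placeholder; substitute your broker's address and port.
PAYLOAD='{"sql": "select count(*) from baseballStats"}'
curl -s -X POST -H 'Content-Type: application/json' \
  -d "$PAYLOAD" "http://broker-host:8099/query/sql" || true
# "|| true" only keeps this sketch from failing when no broker is reachable.
```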

Monitoring Pinot

Pinot exposes several metrics to monitor the service and ensure that Pinot users are not experiencing issues. In this section we discuss some of the key metrics that are useful to monitor. A full list of metrics is available in the Metrics section.

Pinot Server

    Missing Segments - NUM_MISSING_SEGMENTS
      Number of missing segments that the broker queried for (expected to be on the server) but the server didn’t have. This can be due to retention or a stale routing table.
    Query latency - TOTAL_QUERY_TIME
      Total time taken from receiving the query to finishing its execution.
    Query Execution Exceptions - QUERY_EXECUTION_EXCEPTIONS
      The number of exceptions that occurred during query execution.
    Realtime Consumption Status - LLC_PARTITION_CONSUMING
      This gives a binary value based on whether low-level consumption is healthy (1) or unhealthy (0). It’s important to ensure that at least one replica of each partition is consuming.
    Realtime Highest Offset Consumed - HIGHEST_STREAM_OFFSET_CONSUMED
      The highest offset that has been consumed so far.
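To illustrate the "at least one consuming replica per partition" rule, a monitoring check over LLC_PARTITION_CONSUMING gauges could look like the sketch below. The partition/server sample values are made up:

```shell
# For each partition, flag it when no replica reports a consuming (1) gauge.
printf '%s\n' \
  'partition0 server1 1' \
  'partition0 server2 0' \
  'partition1 server1 0' \
  'partition1 server2 0' |
awk '{ seen[$1] = 1; if ($3 == 1) ok[$1] = 1 }
     END { for (p in seen) if (!(p in ok)) print "ALERT: no replica consuming " p }'
```

For the sample data this prints an alert only for partition1: partition0 is healthy because one of its two replicas is still consuming.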

Pinot Broker

Pinot Controller

Many of the controller metrics include a table name and are therefore generated dynamically in the code. The metrics below point to the classes that generate them.
To get the real metric name, the easiest route is to spin up a controller instance, create a table with the desired name, and look through the generated metrics.
Todo
Give a more detailed explanation of how metrics are generated, how to identify real metrics names and where to find them in the code.
    Percent Segments Available - PERCENT_SEGMENTS_AVAILABLE
      Percentage of complete online replicas in external view as compared to replicas in ideal state.
    Segments in Error State - SEGMENTS_IN_ERROR_STATE
      Number of segments in an ERROR state for a given table.
    Last push delay - Generated in the ValidationMetrics class.
      The time in hours since an offline segment was last pushed to the controller.
    Percent of replicas up - PERCENT_OF_REPLICAS
      Percentage of complete online replicas in external view as compared to replicas in ideal state.
    Table storage quota usage percent - TABLE_STORAGE_QUOTA_UTILIZATION
      Shows how much of the table’s storage quota is currently being used; the metric is a percentage of the entire quota.
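As an arithmetic sketch of the replica-percentage metrics above (the counts are made-up samples, not real metric output): with 5 online replicas in external view against 6 replicas in ideal state, the reported percentage would be computed roughly as:

```shell
# Made-up sample counts; the real metric is emitted by the controller.
ONLINE=5
IDEAL=6
PERCENT=$(awk -v o="$ONLINE" -v i="$IDEAL" 'BEGIN { printf "%.1f", o / i * 100 }')
echo "PERCENT_OF_REPLICAS = ${PERCENT}%"
```

A value below 100 means some replicas listed in the ideal state are not online, which is often the first sign of a struggling server.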