Running on public clouds

This page contains quick start guides for deploying Pinot to a public cloud provider.

The following quick start guides show you how to run an Apache Pinot cluster using Kubernetes on different public cloud providers:

- Running on Azure
- Running on GCP
- Running on AWS

Getting Started

This section contains quick start guides to help you get up and running with Pinot.

Running Pinot

To simplify the getting started experience, Pinot ships with quick start guides that launch Pinot components in a single process and import pre-built datasets.

For a full list of these guides, see Quick Start Examples.

- Running Pinot locally
- Running Pinot in Docker
- Running in Kubernetes

Deploy to a public cloud

- Running on Azure
- Running on GCP
- Running on AWS

Data import examples

Getting data into Pinot is easy. Take a look at these two quick start guides, which will help you get up and running with sample data for offline and real-time ingestion:

- Batch import example
- Stream ingestion example

Running Pinot locally

This quick start guide will help you bootstrap a Pinot standalone instance on your local machine.

In this guide, you'll learn how to download and install Apache Pinot as a standalone instance.

Download Apache Pinot

First, let's download the Pinot distribution for this tutorial. You can either download a packaged release or build a distribution from the source code.

Frequently Asked Questions (FAQs)

This page has a collection of frequently asked questions with answers from the community.

This is a list of the questions most often asked in our troubleshooting channel on Slack. Please feel free to contribute your questions and answers here by making a pull request.

General

FAQ for general questions around Pinot.

How does Pinot use deep storage?

When data is pushed into Pinot, Pinot makes a backup copy of the data and stores it on the configured deep storage (S3/GCP/ADLS/NFS/etc.). This copy is stored as tar.gz Pinot segments. Note that Pinot servers keep an (untarred) copy of the segments on their local disk as well. This is done for performance reasons.

How does Pinot use Zookeeper?

Pinot uses Apache Helix for cluster management, which in turn is built on top of Zookeeper. Helix uses Zookeeper to store the cluster state, including Ideal State, External View, Participants, etc. Besides that, Pinot uses Zookeeper to store other information such as table configs, schemas, and segment metadata.

Why am I getting a "Could not find or load class" error when running Quickstart using the 0.8.0 release?

Please check the JDK version you are using. The 0.8.0 release binary is built with JDK 11. You may be getting this error if you are using JDK 8. In that case, please consider using JDK 11, or download the source code for the release and build it locally.


Prerequisites

Install JDK 11 or higher (JDK 16 is not yet supported). For JDK 8 support, use Pinot 0.7.1 or compile from the source code.

You can build from source or download the distribution:

Download the latest binary release from Apache Pinot, or use these commands:

PINOT_VERSION=0.10.0 # set to the Pinot version you decide to use

wget https://downloads.apache.org/pinot/apache-pinot-$PINOT_VERSION/apache-pinot-$PINOT_VERSION-bin.tar.gz

Once you have the tar file:

# untar it
tar -zxvf apache-pinot-$PINOT_VERSION-bin.tar.gz

# navigate to directory containing the launcher scripts
cd apache-pinot-$PINOT_VERSION-bin

Follow these steps to check out the code from GitHub and build Pinot locally.

Prerequisites

Install Apache Maven 3.6 or higher.

# checkout pinot
git clone https://github.com/apache/pinot.git
cd pinot

# build pinot
mvn install package -DskipTests -Pbin-dist

# navigate to directory containing the setup scripts
cd build

Add the Maven option -Djdk.version=8 when building with JDK 8.

Note that the Pinot scripts are located under pinot-distribution/target, not the target directory under the root.

M1 Mac Support

Currently, Apache Pinot doesn't provide official binaries for M1 Macs. You can, however, build from source using the steps provided above. In addition to those steps, you will need to add the following to your ~/.m2/settings.xml prior to the build:

<activeProfiles>
  <activeProfile>
    apple-silicon
  </activeProfile>
</activeProfiles>
<profiles>
  <profile>
    <id>apple-silicon</id>
    <properties>
      <os.detected.classifier>osx-x86_64</os.detected.classifier>
    </properties>
  </profile>
</profiles>

Now that we've downloaded Pinot, it's time to set up a cluster. There are two ways to do this:

Quick Start

Pinot comes with quick-start commands that launch instances of Pinot components in the same process and import pre-built datasets.

For example, the following quick-start launches Pinot with a baseball dataset pre-loaded:

./bin/pinot-admin.sh QuickStart -type batch

For a list of all the available quick starts, see the Quick Start Examples.

Manual Cluster

If you want to play with bigger datasets (more than a few MB), you can launch all the components individually.

The video below is a step-by-step walkthrough for launching the individual components of Pinot and scaling them to multiple instances.

You can find the commands that are shown in this video in the github.com/npawar/pinot-tutorial GitHub repository.

The examples below assume that you are using Java 8.

If you are using Java 11+, remove the GC settings inside JAVA_OPTS. So, for example, instead of:

export JAVA_OPTS="-Xms4G -Xmx8G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-controller.log"

You'd have:

export JAVA_OPTS="-Xms4G -Xmx8G"

Start Zookeeper

./bin/pinot-admin.sh StartZookeeper \
  -zkPort 2191

You can use Zooinspector to browse the Zookeeper instance.

Start Pinot Controller

export JAVA_OPTS="-Xms4G -Xmx8G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-controller.log"
./bin/pinot-admin.sh StartController \
    -zkAddress localhost:2191 \
    -controllerPort 9000

Start Pinot Broker

export JAVA_OPTS="-Xms4G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-broker.log"
./bin/pinot-admin.sh StartBroker \
    -zkAddress localhost:2191

Start Pinot Server

export JAVA_OPTS="-Xms4G -Xmx16G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-server.log"
./bin/pinot-admin.sh StartServer \
    -zkAddress localhost:2191

Start Kafka

./bin/pinot-admin.sh StartKafka \
  -zkAddress=localhost:2191/kafka \
  -port 19092

Once your cluster is up and running, you can head over to Exploring Pinot to learn how to run queries against the data.

Troubleshooting Pinot

Is there any debug information available in Pinot?

Pinot offers various ways to assist with troubleshooting and debugging problems. It is recommended to start with the debug API, which may quickly surface some of the most commonly occurring problems. The debug API provides information such as tableSize, ingestion status, and any error messages related to state transitions on the server, among other things.

The table debug API can be invoked via the Swagger UI (Swagger - Table Debug Api).

It can also be invoked directly by accessing the URL as follows. The API requires the tableName, and can optionally take tableType (offline|realtime) and a verbosity level:

curl -X GET "http://localhost:9000/debug/tables/airlineStats?verbosity=0" -H "accept: application/json"

Pinot also provides a wide variety of operational metrics that can be used for creating dashboards, alerting, and monitoring. Also, all Pinot components log debug information related to error conditions that can be used for troubleshooting.

How do I debug a slow query or a query which keeps timing out?

Please use these steps:

1. If the query executes, look at the query result. Specifically, look at numEntriesScannedInFilter and numDocsScanned.

   - If numEntriesScannedInFilter is very high, consider adding indexes for the corresponding columns used in the filter predicates. You should also think about partitioning the incoming data based on the dimension most heavily used in your filter queries.

   - If numDocsScanned is very high, the selectivity of the query is low and lots of documents need to be processed after filtering. Consider refining the filter to increase the selectivity of the query.

2. If the query is not executing, you can extend the query timeout by appending a timeoutMs parameter to the query (e.g. select * from mytable limit 10 option(timeoutMs=60000)). Then you can repeat step 1.

3. You can also look at the GC stats for the corresponding Pinot servers. If a particular server seems to be running full GC all the time, you can do a couple of things, such as:

   1. Increase the JVM heap (Xmx)

   2. Consider using off-heap memory for segments

   3. Decrease the total number of segments per server (by partitioning the data in a better way)
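The first triage step above can be expressed as a small helper that inspects the query stats. This is a sketch for illustration only; the thresholds are assumptions, not values Pinot itself uses:

```python
def triage_slow_query(stats, scanned_in_filter_threshold=100000, docs_scanned_threshold=100000):
    """Suggest next actions from query stats, following the steps above.

    The thresholds are illustrative assumptions, not values used by Pinot.
    """
    advice = []
    # High numEntriesScannedInFilter -> the filter itself is doing too much work
    if stats.get("numEntriesScannedInFilter", 0) > scanned_in_filter_threshold:
        advice.append("add indexes on the filter columns, or partition on the hot dimension")
    # High numDocsScanned -> low selectivity after filtering
    if stats.get("numDocsScanned", 0) > docs_scanned_threshold:
        advice.append("refine the filter to increase query selectivity")
    return advice
```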

Pinot On Kubernetes FAQ

How to increase server disk size on AWS

Below is an example on AWS EKS.

1. Update Storage Class

In the K8s cluster, check the storage class: in AWS, it should be gp2.

Then update the StorageClass to ensure:

allowVolumeExpansion: true

Once the StorageClass is updated, it should look like this:

2. Update PVC

Once the storage class is updated, we can update the PVC for the server disk size.

Now we want to double the disk size for pinot-server-3.

Below is an example of the current disks:

Below is the output of PVC data-pinot-server-3:

Now, let's change the PVC size to 2T by editing the server PVC:

kubectl edit pvc data-pinot-server-3 -n pinot

Once updated, the spec's PVC size is updated to 2T, but the status's PVC size is still 1T.

3. Restart the pod to pick up the change

Restart the pinot-server-3 pod:

Recheck the PVC size:
Running on GCP

This starter provides a quick start for running Pinot on Google Cloud Platform (GCP).

This document provides the basic instructions to set up a Kubernetes cluster on Google Kubernetes Engine (GKE).

1. Tooling Installation

1.1 Install Kubectl

Please follow this link (https://kubernetes.io/docs/tasks/tools/install-kubectl) to install kubectl.

For Mac users:

brew install kubernetes-cli

Please check the kubectl version after installation:

kubectl version

QuickStart scripts are tested under kubectl client version v1.16.3 and server version v1.13.12.

1.2 Install Helm

Please follow this link (https://helm.sh/docs/using_helm/#installing-helm) to install Helm.

For Mac users:

brew install kubernetes-helm

Please check the helm version after installation:

helm version

This QuickStart provides Helm support for Helm v3.0.0 and v2.12.1. Please pick the script based on your Helm version.

1.3 Install Google Cloud SDK

Please follow this link (https://cloud.google.com/sdk/install) to install the Google Cloud SDK.

1.3.1 For Mac users

Install the Google Cloud SDK:

curl https://sdk.cloud.google.com | bash

Restart your shell:

exec -l $SHELL

2. (Optional) Initialize Google Cloud Environment

gcloud init

3. (Optional) Create a Kubernetes cluster (GKE) in Google Cloud

The script below will create a 3-node cluster named pinot-quickstart in us-west1-b with n1-standard-2 machines for demo purposes.

Please modify the parameters in the example command below:

GCLOUD_PROJECT=[your gcloud project name]
GCLOUD_ZONE=us-west1-b
GCLOUD_CLUSTER=pinot-quickstart
GCLOUD_MACHINE_TYPE=n1-standard-2
GCLOUD_NUM_NODES=3
gcloud container clusters create ${GCLOUD_CLUSTER} \
  --num-nodes=${GCLOUD_NUM_NODES} \
  --machine-type=${GCLOUD_MACHINE_TYPE} \
  --zone=${GCLOUD_ZONE} \
  --project=${GCLOUD_PROJECT}

You can monitor the cluster status with this command:

gcloud compute instances list

Once the cluster is in RUNNING status, it's ready to be used.

4. Connect to an existing cluster

Simply run the below command to get the credentials for the cluster pinot-quickstart that you just created, or for your existing cluster:

GCLOUD_PROJECT=[your gcloud project name]
GCLOUD_ZONE=us-west1-b
GCLOUD_CLUSTER=pinot-quickstart
gcloud container clusters get-credentials ${GCLOUD_CLUSTER} --zone ${GCLOUD_ZONE} --project ${GCLOUD_PROJECT}

To verify the connection, you can run:

kubectl get nodes

5. Pinot Quickstart

Please follow this Kubernetes QuickStart to deploy your Pinot Demo.

6. Delete a Kubernetes Cluster

GCLOUD_ZONE=us-west1-b
gcloud container clusters delete pinot-quickstart --zone=${GCLOUD_ZONE}

Running on Azure

This starter guide provides a quick start for running Pinot on Microsoft Azure.

This document provides the basic instructions to set up a Kubernetes cluster on Azure Kubernetes Service (AKS).

1. Tooling Installation

1.1 Install Kubectl

Please follow this link (https://kubernetes.io/docs/tasks/tools/install-kubectl) to install kubectl.

For Mac users:

brew install kubernetes-cli

Please check the kubectl version after installation:

kubectl version

QuickStart scripts are tested under kubectl client version v1.16.3 and server version v1.13.12.

1.2 Install Helm

Please follow this link (https://helm.sh/docs/using_helm/#installing-helm) to install Helm.

For Mac users:

brew install kubernetes-helm

Please check the helm version after installation:

helm version

This QuickStart provides Helm support for Helm v3.0.0 and v2.12.1. Please pick the script based on your Helm version.

1.3 Install Azure CLI

Please follow this link (https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) to install the Azure CLI.

For Mac users:

brew update && brew install azure-cli

2. (Optional) Login to your Azure account

The below script will open the default browser to sign in to your Azure account:

az login

3. (Optional) Create a Resource Group

The below script will create a resource group in location eastus:

AKS_RESOURCE_GROUP=pinot-demo
AKS_RESOURCE_GROUP_LOCATION=eastus
az group create --name ${AKS_RESOURCE_GROUP} \
                --location ${AKS_RESOURCE_GROUP_LOCATION}

4. (Optional) Create a Kubernetes cluster (AKS) in Azure

The below script will create a 3-node cluster named pinot-quickstart for demo purposes.

Please modify the parameters in the example command below:

AKS_RESOURCE_GROUP=pinot-demo
AKS_CLUSTER_NAME=pinot-quickstart
az aks create --resource-group ${AKS_RESOURCE_GROUP} \
              --name ${AKS_CLUSTER_NAME} \
              --node-count 3

Once the command succeeds, the cluster is ready to be used.

5. Connect to an existing cluster

Simply run the below command to get the credentials for the cluster pinot-quickstart that you just created, or for your existing cluster:

AKS_RESOURCE_GROUP=pinot-demo
AKS_CLUSTER_NAME=pinot-quickstart
az aks get-credentials --resource-group ${AKS_RESOURCE_GROUP} \
                       --name ${AKS_CLUSTER_NAME}

To verify the connection, you can run:

kubectl get nodes

6. Pinot Quickstart

Please follow this Kubernetes QuickStart to deploy your Pinot Demo.

7. Delete a Kubernetes Cluster

AKS_RESOURCE_GROUP=pinot-demo
AKS_CLUSTER_NAME=pinot-quickstart
az aks delete --resource-group ${AKS_RESOURCE_GROUP} \
              --name ${AKS_CLUSTER_NAME}

Running Pinot in Docker

This guide will show you how to run a Pinot cluster using Docker.

In this guide, we will learn about running Pinot in Docker.

This guide assumes that you have installed Docker and have configured it with enough memory. A sample Docker resources config is shown below:

The latest Pinot Docker image is published at apachepinot/pinot:latest, and you can see a list of all published tags on Docker Hub.

You can pull the Docker image onto your machine by running the following command:

docker pull apachepinot/pinot:latest

Or if you want to use a specific version:

docker pull apachepinot/pinot:0.9.3

Running on AWS

This guide provides a quick start for running Pinot on Amazon Web Services (AWS).

This document provides the basic instructions to set up a Kubernetes cluster on Amazon Elastic Kubernetes Service (Amazon EKS).

1. Tooling Installation

HDFS as Deep Storage

This guide helps you set up HDFS as deep storage for Pinot segments.

To use HDFS as deep storage, you need to include the HDFS dependency jars and plugins. The server, controller, and broker setups are shown below.

Query FAQ

Querying

I get the following error when running a query, what does it mean?

{'errorCode': 410, 'message': 'BrokerResourceMissingError'}

1.1 Install Kubectl

Please follow this link (https://kubernetes.io/docs/tasks/tools/install-kubectl) to install kubectl.

For Mac users:

brew install kubernetes-cli

Please check the kubectl version after installation:

kubectl version

QuickStart scripts are tested under kubectl client version v1.16.3 and server version v1.13.12.

1.2 Install Helm

Please follow this link (https://helm.sh/docs/using_helm/#installing-helm) to install Helm.

For Mac users:

brew install kubernetes-helm

Please check the helm version after installation:

helm version

This QuickStart provides Helm support for Helm v3.0.0 and v2.12.1. Please pick the script based on your Helm version.

1.3 Install AWS CLI

Please follow this link (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html#install-tool-bundled) to install the AWS CLI.

For Mac users:

curl "https://d1vvhvl2y92vvt.cloudfront.net/awscli-exe-macos.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

1.4 Install Eksctl

Please follow this link (https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html#installing-eksctl) to install eksctl.

For Mac users:

brew tap weaveworks/tap
brew install weaveworks/tap/eksctl

2. (Optional) Login to your AWS account

For first-time AWS users, please register your account at https://aws.amazon.com/.

Once you have created the account, go to AWS Identity and Access Management (IAM) to create a user and create access keys under the Security Credentials tab.

aws configure

The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY will override the AWS configuration stored in the file ~/.aws/credentials.

    hashtag
    3. (Optional) Create a Kubernetes cluster(EKS) in AWS

    The script below will create a 1 node cluster named pinot-quickstart in us-west-2 with a t3.xlarge machine for demo purposes:

    You can monitor the cluster status via this command:

    Once the cluster is in ACTIVE status, it's ready to be used.

    hashtag
    4. Connect to an existing cluster

    Simply run below command to get the credential for the cluster pinot-quickstart that you just created or your existing cluster.

    To verify the connection, you can run:

    hashtag
    5. Pinot Quickstart

    Please follow this Kubernetes QuickStart to deploy your Pinot Demo.

    hashtag
    6. Delete a Kubernetes Cluster


This essentially implies that the Pinot broker assigned to the table specified in the query was not found. A common root cause for this is a typo in the table name in the query. Another, less common, reason could be that there wasn't actually a broker with the required broker tenant tag for the table.

What are all the fields in the Pinot query's JSON response?

Here's the page explaining the Pinot response format: https://docs.pinot.apache.org/users/api/querying-pinot-using-standard-sql/response-format
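As a convenience, the rows in that response can be zipped with the column names. The field names used below (resultTable.dataSchema.columnNames and resultTable.rows) follow the documented response format; the helper itself is just an illustrative sketch:

```python
def rows_as_dicts(response):
    """Zip each row of a Pinot SQL response with its column names.

    Assumes the documented response shape: resultTable.dataSchema.columnNames
    and resultTable.rows.
    """
    table = response["resultTable"]
    columns = table["dataSchema"]["columnNames"]
    return [dict(zip(columns, row)) for row in table["rows"]]
```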

SQL query fails with "Encountered 'timestamp' was expecting one of..."

"timestamp" is a reserved keyword in SQL. Escape timestamp with double quotes:

select "timestamp" from myTable

Other commonly encountered reserved keywords are date, time, and table.
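If you build queries programmatically, a small client-side helper can guard against these keywords. The list below only covers the words mentioned above, and the helper is an illustrative convenience, not part of any Pinot client:

```python
# Reserved words mentioned in the FAQ above (not an exhaustive list)
RESERVED_KEYWORDS = {"timestamp", "date", "time", "table"}

def quote_if_reserved(identifier):
    """Escape a column name with double quotes if it is a reserved keyword."""
    if identifier.lower() in RESERVED_KEYWORDS:
        return '"' + identifier + '"'
    return identifier
```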

Filtering on STRING column WHERE column = "foo" does not work?

For filtering on STRING columns, use single quotes:

SELECT COUNT(*) from myTable WHERE column = 'foo'

ORDER BY using an alias doesn't work?

The fields in the ORDER BY clause must be one of the GROUP BY clauses or aggregations, BEFORE applying the alias. Therefore, this will not work:

SELECT count(colA) as aliasA, colA from tableA GROUP BY colA ORDER BY aliasA

Instead, this will work:

SELECT count(colA) as sumA, colA from tableA GROUP BY colA ORDER BY count(colA)

Does pagination work in GROUP BY queries?

No. Pagination only works for SELECTION queries.

How do I increase the timeout for a query?

You can add this at the end of your query: option(timeoutMs=X). For example, the following query will use a timeout of 20 seconds:

SELECT COUNT(*) from myTable option(timeoutMs=20000)
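If you assemble query strings in code, the option can be appended with a tiny helper (an illustrative convenience, not part of any Pinot client library):

```python
def with_timeout(sql, timeout_ms):
    """Append Pinot's option(timeoutMs=...) clause to a query string."""
    return "{} option(timeoutMs={})".format(sql, timeout_ms)
```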

How do I optimize my Pinot table for doing aggregations and group-by on high cardinality columns?

In order to speed up aggregations, you can enable metrics aggregation on the required column by adding a metric field in the corresponding schema and setting aggregateMetrics to true in the table config. You can also use a star-tree index config for such columns (read more about star-tree indexes here).

How do I verify that an index is created on a particular column?

There are two ways to verify this:

1. Log in to a server that hosts segments of this table. Inside the data directory, locate the segment directory for this table. In this directory, there is a file named index_map which lists all the indexes and other data structures created for each segment. Verify that the requested index is present here.

2. During query: use the column in the filter predicate and check the value of numEntriesScannedInFilter. If this value is 0, then indexing is working as expected (this works for the inverted index).

Does Pinot use a default value for LIMIT in queries?

Yes, Pinot uses a default value of LIMIT 10 in queries. The reason behind this default is to avoid unintentionally submitting expensive queries that end up fetching or processing a lot of data from Pinot. Users can always override this by explicitly specifying a LIMIT value.
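If you want to make the implicit limit explicit before sending a query, the behavior can be mimicked client-side. The regex-based check here is a simplification for illustration, not how Pinot itself parses queries:

```python
import re

def apply_default_limit(sql, default_limit=10):
    """Make Pinot's implicit LIMIT 10 explicit when a query has no LIMIT.

    Illustrative only -- this regex check is a simplification, not how
    Pinot parses queries.
    """
    # Leave the query untouched if it already carries a LIMIT clause
    if re.search(r"\blimit\s+\d+", sql, flags=re.IGNORECASE):
        return sql
    return "{} LIMIT {}".format(sql, default_limit)
```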

Does Pinot cache query results?

Pinot does not cache query results; each query is computed in its entirety. Note, though, that running the same or a similar query multiple times will naturally pull segment pages into memory, making subsequent calls faster. Also, for real-time systems the data is changing in real time, so results cannot be cached. For offline-only systems, a caching layer can be built on top of Pinot, with an invalidation mechanism built in to invalidate the cache when data is pushed into Pinot.

How do I determine if the StarTree index is being used for my query?

The query execution engine will prefer to use the StarTree index for all queries where it can be used. The criteria to determine whether the StarTree index can be used are as follows:

- All aggregation function + column pairs in the query must exist in the StarTree index.

- All dimensions that appear in filter predicates and group-by should be StarTree dimensions.

For queries where the above is true, the StarTree index is used. For other queries, the execution engine will default to using the next best index available.
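The two criteria above can be sketched as a set-containment check. This is a simplification of the real planner logic, shown only to make the eligibility rule concrete:

```python
def can_use_startree(query_agg_pairs, query_filter_groupby_cols, tree_agg_pairs, tree_dimensions):
    """Both criteria above must hold: every aggregation function + column
    pair appears in the StarTree index, and every filter/group-by column
    is a StarTree dimension. A simplification of the planner's check."""
    return (set(query_agg_pairs) <= set(tree_agg_pairs)
            and set(query_filter_groupby_cols) <= set(tree_dimensions))
```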

Server Setup

Configuration:

pinot.server.instance.enable.split.commit=true
pinot.server.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.server.storage.factory.hdfs.hadoop.conf.path=/path/to/hadoop/conf/directory/
pinot.server.segment.fetcher.protocols=file,http,hdfs
pinot.server.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
pinot.server.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal>
pinot.server.segment.fetcher.hdfs.hadoop.kerberos.keytab=<your kerberos keytab>
pinot.set.instance.id.to.hostname=true
pinot.server.instance.dataDir=/path/in/local/filesystem/for/pinot/data/server/index
pinot.server.instance.segmentTarDir=/path/in/local/filesystem/for/pinot/data/server/segment
pinot.server.grpc.enable=true
pinot.server.grpc.port=8090

Executable:

export HADOOP_HOME=/path/to/hadoop/home
export HADOOP_VERSION=2.7.1
export HADOOP_GUAVA_VERSION=11.0.2
export HADOOP_GSON_VERSION=2.2.4
export GC_LOG_LOCATION=/path/to/gc/log/file
export PINOT_VERSION=0.10.0
export PINOT_DISTRIBUTION_DIR=/path/to/apache-pinot-${PINOT_VERSION}-bin/
export SERVER_CONF_DIR=/path/to/pinot/conf/dir/
export ZOOKEEPER_ADDRESS=localhost:2181

export CLASSPATH_PREFIX="${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-annotations-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/hadoop-common-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/guava-${HADOOP_GUAVA_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/gson-${HADOOP_GSON_VERSION}.jar"
export JAVA_OPTS="-Xms4G -Xmx16G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:${GC_LOG_LOCATION}/gc-pinot-server.log"
${PINOT_DISTRIBUTION_DIR}/bin/start-server.sh -zkAddress ${ZOOKEEPER_ADDRESS} -configFileName ${SERVER_CONF_DIR}/server.conf

Controller Setup

Configuration:

controller.data.dir=hdfs://path/in/hdfs/for/controller/segment
controller.local.temp.dir=/tmp/pinot/
controller.zk.str=<ZOOKEEPER_HOST:ZOOKEEPER_PORT>
controller.enable.split.commit=true
controller.access.protocols.http.port=9000
controller.helix.cluster.name=PinotCluster
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/path/to/hadoop/conf/directory/
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal>
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.keytab=<your kerberos keytab>
controller.vip.port=9000
controller.port=9000
pinot.set.instance.id.to.hostname=true
pinot.server.grpc.enable=true

Executable:

export HADOOP_HOME=/path/to/hadoop/home
export HADOOP_VERSION=2.7.1
export HADOOP_GUAVA_VERSION=11.0.2
export HADOOP_GSON_VERSION=2.2.4
export GC_LOG_LOCATION=/path/to/gc/log/file
export PINOT_VERSION=0.10.0
export PINOT_DISTRIBUTION_DIR=/path/to/apache-pinot-${PINOT_VERSION}-bin/
export SERVER_CONF_DIR=/path/to/pinot/conf/dir/
export ZOOKEEPER_ADDRESS=localhost:2181

export CLASSPATH_PREFIX="${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-annotations-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/hadoop-common-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/guava-${HADOOP_GUAVA_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/gson-${HADOOP_GSON_VERSION}.jar"
export JAVA_OPTS="-Xms8G -Xmx12G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:${GC_LOG_LOCATION}/gc-pinot-controller.log"
${PINOT_DISTRIBUTION_DIR}/bin/start-controller.sh -configFileName ${SERVER_CONF_DIR}/controller.conf

Broker Setup

Configuration:

pinot.set.instance.id.to.hostname=true
pinot.server.grpc.enable=true

Executable:

export HADOOP_HOME=/path/to/hadoop/home
export HADOOP_VERSION=2.7.1
export HADOOP_GUAVA_VERSION=11.0.2
export HADOOP_GSON_VERSION=2.2.4
export GC_LOG_LOCATION=/path/to/gc/log/file
export PINOT_VERSION=0.10.0
export PINOT_DISTRIBUTION_DIR=/path/to/apache-pinot-${PINOT_VERSION}-bin/
export SERVER_CONF_DIR=/path/to/pinot/conf/dir/
export ZOOKEEPER_ADDRESS=localhost:2181

export CLASSPATH_PREFIX="${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-annotations-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/hadoop-common-${HADOOP_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/guava-${HADOOP_GUAVA_VERSION}.jar:${HADOOP_HOME}/share/hadoop/common/lib/gson-${HADOOP_GSON_VERSION}.jar"
export JAVA_OPTS="-Xms4G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:${GC_LOG_LOCATION}/gc-pinot-broker.log"
${PINOT_DISTRIBUTION_DIR}/bin/start-broker.sh -zkAddress ${ZOOKEEPER_ADDRESS} -configFileName ${SERVER_CONF_DIR}/broker.conf

    Now that we've downloaded the Pinot Docker image, it's time to set up a cluster. There are two ways to do this:

    hashtag
    Quick Start

    Pinot comes with quick-start commands that launch instances of Pinot components in the same process and import pre-built datasets.

    For example, the following quick-start launches Pinot with a baseball dataset pre-loaded:

    For a list of all the available quick starts, see the Quick Start Examples.

    hashtag
    Manual Cluster

    The quick start scripts launch Pinot with minimal resources. If you want to play with bigger datasets (more than a few MB), you can launch each of the Pinot components individually.

    hashtag
    Docker

    hashtag
    Create a Network

    Create an isolated bridge network in docker

    hashtag
    Start Zookeeper

    Start Zookeeper in daemon mode. This is a single node zookeeper setup. Zookeeper is the central metadata store for Pinot and should be set up with replication for production use. For more information, see Running Replicated Zookeeperarrow-up-right.

    hashtag
    Start Pinot Controller

    Start the Pinot Controller in daemon mode and connect it to Zookeeper.

    circle-info

    The command below expects a 4GB memory container. Tune -Xms and -Xmx if your machine doesn't have enough resources.

    hashtag
    Start Pinot Broker

    Start the Pinot Broker in daemon mode and connect it to Zookeeper.

    circle-info

    The command below expects a 4GB memory container. Tune -Xms and -Xmx if your machine doesn't have enough resources.

    hashtag
    Start Pinot Server

    Start the Pinot Server in daemon mode and connect it to Zookeeper.

    circle-info

    The command below expects a 16GB memory container. Tune -Xms and -Xmx if your machine doesn't have enough resources.

    hashtag
    Start Kafka

    Optionally, you can also start Kafka to set up real-time streams. This brings up the Kafka broker on port 9092.

    Now all Pinot-related components are started, forming an empty cluster.

    Run the following command to check container status.

    Sample Console Output

    hashtag
    Docker Compose

    Create a file called docker-compose.yml that contains the following:

    Run the following command to launch all the components:

    Run the following command to check container status.

    Sample Console Output

    Once your cluster is up and running, you can head over to Exploring Pinot to learn how to run queries against the data.

    circle-info

    If you have minikubearrow-up-right or Docker Kubernetesarrow-up-right installed, you could also try running the Kubernetes quick start.

    Dockerarrow-up-right
    all published tags on Docker Hubarrow-up-right
    Sample Docker resources
    docker pull apachepinot/pinot:latest
    docker pull apachepinot/pinot:0.9.3
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type batch
    docker network create -d bridge pinot-demo_default
    docker run \
        --network=pinot-demo_default \
        --name manual-pinot-zookeeper \
        --restart always \
        -p 2181:2181 \
        -d zookeeper:3.5.6
    docker run --rm -ti \
        --network=pinot-demo_default \
        --name manual-pinot-controller \
        -p 9000:9000 \
        -e JAVA_OPTS="-Dplugins.dir=/opt/pinot/plugins -Xms1G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-controller.log" \
        -d ${PINOT_IMAGE} StartController \
        -zkAddress manual-pinot-zookeeper:2181
    docker run --rm -ti \
        --network=pinot-demo_default \
        --name manual-pinot-broker \
        -p 8099:8099 \
        -e JAVA_OPTS="-Dplugins.dir=/opt/pinot/plugins -Xms4G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-broker.log" \
        -d ${PINOT_IMAGE} StartBroker \
        -zkAddress manual-pinot-zookeeper:2181
    docker run --rm -ti \
        --network=pinot-demo_default \
        --name manual-pinot-server \
        -e JAVA_OPTS="-Dplugins.dir=/opt/pinot/plugins -Xms4G -Xmx16G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-server.log" \
        -d ${PINOT_IMAGE} StartServer \
        -zkAddress manual-pinot-zookeeper:2181
    docker run --rm -ti \
        --network pinot-demo_default --name=kafka \
        -e KAFKA_ZOOKEEPER_CONNECT=manual-pinot-zookeeper:2181/kafka \
        -e KAFKA_BROKER_ID=0 \
        -e KAFKA_ADVERTISED_HOST_NAME=kafka \
        -d wurstmeister/kafka:latest
    docker container ls -a
    CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                                                  NAMES
    9ec20e4463fa        wurstmeister/kafka:latest   "start-kafka.sh"         43 minutes ago      Up 43 minutes                                                              kafka
    0775f5d8d6bf        apachepinot/pinot:latest    "./bin/pinot-admin.s…"   44 minutes ago      Up 44 minutes       8096-8099/tcp, 9000/tcp                                pinot-server
    64c6392b2e04        apachepinot/pinot:latest    "./bin/pinot-admin.s…"   44 minutes ago      Up 44 minutes       8096-8099/tcp, 9000/tcp                                pinot-broker
    b6d0f2bd26a3        apachepinot/pinot:latest    "./bin/pinot-admin.s…"   45 minutes ago      Up 45 minutes       8096-8099/tcp, 0.0.0.0:9000->9000/tcp                  pinot-quickstart
    570416fc530e        zookeeper:3.5.6             "/docker-entrypoint.…"   45 minutes ago      Up 45 minutes       2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp   pinot-zookeeper
    docker-compose.yml
    version: '3.7'
    services:
      zookeeper:
        image: zookeeper:3.5.6
        hostname: zookeeper
        container_name: manual-zookeeper
        ports:
          - "2181:2181"
        environment:
          ZOOKEEPER_CLIENT_PORT: 2181
          ZOOKEEPER_TICK_TIME: 2000
      pinot-controller:
        image: apachepinot/pinot:0.9.3
        command: "StartController -zkAddress manual-zookeeper:2181"
        container_name: "manual-pinot-controller"
        restart: unless-stopped
        ports:
          - "9000:9000"
        environment:
          JAVA_OPTS: "-Dplugins.dir=/opt/pinot/plugins -Xms1G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-controller.log"
        depends_on:
          - zookeeper
      pinot-broker:
        image: apachepinot/pinot:0.9.3
        command: "StartBroker -zkAddress manual-zookeeper:2181"
        restart: unless-stopped
        container_name: "manual-pinot-broker"
        ports:
          - "8099:8099"
        environment:
          JAVA_OPTS: "-Dplugins.dir=/opt/pinot/plugins -Xms4G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-broker.log"
        depends_on:
          - pinot-controller
      pinot-server:
        image: apachepinot/pinot:0.9.3
        command: "StartServer -zkAddress manual-zookeeper:2181"
        restart: unless-stopped
        container_name: "manual-pinot-server" 
        environment:
          JAVA_OPTS: "-Dplugins.dir=/opt/pinot/plugins -Xms4G -Xmx16G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc-pinot-server.log"
        depends_on:
          - pinot-broker
      
    docker-compose --project-name pinot-demo up
    docker container ls 
    CONTAINER ID   IMAGE                     COMMAND                  CREATED              STATUS              PORTS                                                                     NAMES
    ba5cb0868350   apachepinot/pinot:0.9.3   "./bin/pinot-admin.s…"   About a minute ago   Up About a minute   8096-8099/tcp, 9000/tcp                                                   manual-pinot-server
    698f160852f9   apachepinot/pinot:0.9.3   "./bin/pinot-admin.s…"   About a minute ago   Up About a minute   8096-8098/tcp, 9000/tcp, 0.0.0.0:8099->8099/tcp, :::8099->8099/tcp        manual-pinot-broker
    b1ba8cf60d69   apachepinot/pinot:0.9.3   "./bin/pinot-admin.s…"   About a minute ago   Up About a minute   8096-8099/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp                  manual-pinot-controller
    54e7e114cd53   zookeeper:3.5.6           "/docker-entrypoint.…"   About a minute ago   Up About a minute   2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp   manual-zookeeper
    Neha Pawar from the Apache Pinot team shows you how to set up a Pinot cluster.

    Stream ingestion example

    The Docker instructions on this page are still WIP

    So far, we have set up our cluster, run some queries on the demo tables, and explored the admin endpoints. We also uploaded some sample batch data for the transcript table.

    Now, it's time to ingest from a sample stream into Pinot. The rest of the instructions assume you're using Pinot in Dockerarrow-up-right.

    hashtag
    Data Stream

    First, we need to set up a stream. Pinot has out-of-the-box real-time ingestion support for Kafka. Other streams can be plugged in; for more details, see Pluggable Streams.

    Let's set up a demo Kafka cluster locally and create a sample topic, transcript-topic.

    Start Kafka

    Create a Kafka Topic

    Start Kafka

    Start the Kafka cluster on port 9876 using the same Zookeeper from the quick-start examples.

    Create a Kafka topic

    hashtag
    Creating a Schema

    If you followed the Batch upload sample data guide, you have already pushed a schema for your sample table. If not, head over to Creating a schema on that page to learn how to create a schema for your sample data.

    hashtag
    Creating a table config

    If you followed Batch upload sample data, you learned how to push an offline table and schema. Similar to the offline table config, we will create a realtime table config for the sample. Here's the realtime table config for the transcript table. For a more detailed overview about tables, check out Table.

    hashtag
    Uploading your schema and table config

    Now that we have our table and schema, let's upload them to the cluster. As soon as the realtime table is created, it will begin ingesting from the Kafka topic.

    hashtag
    Loading sample data into stream

    Here's a JSON file for transcript table data:

    Push sample JSON into Kafka topic, using the Kafka script from the Kafka download

    hashtag
    Ingesting streaming data

    As soon as data flows into the stream, the Pinot table will consume it, and the data will be ready for querying. Head over to the Query Consolearrow-up-right to check out the realtime data.

    Ingestion FAQ

    hashtag
    Data processing

    hashtag
    What is a good segment size?

    While Pinot can work with segments of various sizes, for optimal use of Pinot you want your segments sized in the 100MB to 500MB (un-tarred/uncompressed) range. Note that having too many (thousands or more) tiny segments for a single table creates more overhead in terms of metadata storage in Zookeeper as well as in the Pinot servers' heap. At the same time, having too few really large (GBs) segments reduces the parallelism of query execution, as on the server side the thread parallelism of query execution is at the segment level.
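As a rough back-of-the-envelope sketch of that sizing guidance (the dataset size and target segment size below are assumed numbers, not from this guide):

```python
import math

# Assumed numbers for illustration only:
# ~100GB of uncompressed data, targeting ~300MB per segment
# (inside the recommended 100MB-500MB range).
total_uncompressed_mb = 100 * 1024
target_segment_mb = 300

num_segments = math.ceil(total_uncompressed_mb / target_segment_mb)
print(num_segments)  # 342
```

A few hundred segments keeps per-segment overhead low while leaving enough segments for servers to process in parallel.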

    hashtag
    Can multiple Pinot tables consume from the same Kafka topic?

    Yes. Each table can be independently configured to consume from any given Kafka topic, regardless of whether there are other tables that are also consuming from the same Kafka topic.

    hashtag
    How do I enable partitioning in Pinot, when using Kafka stream?

    Set up the partitioner in the Kafka producer: https://docs.confluent.io/current/clients/producer.htmlarrow-up-right

    The partitioning logic in the stream should match the partitioning config in Pinot. Kafka uses murmur2; the equivalent in Pinot is the Murmur function.

    Set the partitioning config as below, using the same column used in Kafka:

    and also set

    More details about how partitioner works in Pinot here.

    hashtag
    How do I store BYTES column in JSON data?

    For JSON, you can use a hex-encoded string to ingest BYTES columns.
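As a quick illustration of the encoding step (plain Python; the sample bytes are arbitrary):

```python
# Convert raw bytes to a hex string before writing them into a JSON record,
# and decode the hex string back to verify the round trip.
payload = b"\x01\x02\xab"

hex_string = payload.hex()  # this string goes into the JSON record
print(hex_string)           # 0102ab

# Decoding the hex string recovers the original bytes
assert bytes.fromhex(hex_string) == payload
```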

    hashtag
    How do I flatten my JSON Kafka stream?

    We have the json_format(field)arrow-up-right function, which can store a top-level JSON field as a STRING in Pinot.

    Then you can use these json functionsarrow-up-right during query time to extract fields from the JSON string.

    circle-exclamation

    NOTE This works well if some of your fields are nested JSON but most of your fields are top-level keys. If all of your fields are within a nested JSON key, you will have to store the entire payload as one column, which is not ideal.

    Support for flattening during ingestion is on the roadmap: https://github.com/apache/pinot/issues/5264arrow-up-right

    hashtag
    How do I escape Unicode in my Job Spec YAML file?

    To use explicit code points, you must double-quote (not single-quote) the string, and escape the code point via "\uHHHH", where HHHH is the four digit hex code for the character. See https://yaml.org/spec/spec.html#escaping/in%20double-quoted%20scalars/arrow-up-right for more details.
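For example, a job spec snippet with an explicit code point might look like this (the path value is illustrative):

```yaml
# "\u00e9" is the escaped code point for "é"; this only works in double quotes.
outputDirURI: "/tmp/caf\u00e9/segments"
# In single quotes the escape is NOT interpreted:
# outputDirURI: '/tmp/caf\u00e9/segments'   # keeps a literal backslash-u sequence
```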

    hashtag
    Is there a limit on the maximum length of a string column in Pinot?

    By default, Pinot limits the length of a String column to 512 bytes. If you want to overwrite this value, you can set the maxLength attribute in the schema as follows:

    hashtag
    When can new events become queryable when getting ingested into a real-time table?

    Events are available to be read by queries as soon as they are ingested. This is because events are instantly indexed in-memory upon ingestion.

    The ingestion of events into the real-time table is not transactional, so replicas of the open segment are not immediately consistent. Pinot trades consistency for availability upon network partitioning (CAP theorem) to provide ultra-low ingestion latencies at high throughput.

    However, when the open segment is closed and its in-memory indexes are flushed to persistent storage, all its replicas are guaranteed to be consistent, with the commit protocolarrow-up-right.

    hashtag
    Indexing

    hashtag
    How to set inverted indexes?

    Inverted indexes are set in the tableConfig's tableIndexConfig -> invertedIndexColumns list. For documentation on table config, see Table Config Reference. For an example showing how to configure an inverted index, see Inverted Index.

    Applying inverted indexes to a table config will generate an inverted index for all new segments. To apply the inverted indexes to all existing segments, see How to apply an inverted index to existing segments?

    hashtag
    How to apply an inverted index to existing segments?

    1. Add the columns you wish to index to the tableIndexConfig -> invertedIndexColumns list. To update the table config, use the Pinot Swagger API: http://localhost:9000/help#!/Table/updateTableConfigarrow-up-right

    2. Invoke the reload API: http://localhost:9000/help#!/Segment/reloadAllSegmentsarrow-up-right

    Once you've done that, you can check whether the index has been applied by querying the segment metadata API at http://localhost:9000/help#/Segment/getServerMetadataarrow-up-right. Don't forget to include the names of the columns on which you have applied the index.

    The output from this API should look something like the following:

    hashtag
    Can I retrospectively add an index to any segment?

    Not all indexes can be retrospectively applied to existing segments.

    If you want to add or change the sorted index column, or adjust the dictionary encoding of the default forward index, you will need to manually reload any existing segments.

    hashtag
    How to create star-tree indexes?

    Star-tree indexes are configured in the table config under the tableIndexConfig -> starTreeIndexConfigs (list) and enableDefaultStarTree (boolean). Read more about how to configure star-tree indexes: https://docs.pinot.apache.org/basics/indexing/star-tree-index#index-generationarrow-up-right

    The new segments will have star-tree indexes generated after applying the star-tree index configs to the table config. Currently, Pinot does not support adding star-tree indexes to the existing segments.
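A minimal config sketch following that structure might look like the following (the column names and function pairs are illustrative, not from this guide's datasets):

```json
"tableIndexConfig": {
  "enableDefaultStarTree": false,
  "starTreeIndexConfigs": [
    {
      "dimensionsSplitOrder": ["country", "browser"],
      "functionColumnPairs": ["SUM__impressions", "COUNT__*"],
      "maxLeafRecords": 10000
    }
  ]
}
```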

    hashtag
    Handling time in Pinot

    hashtag
    How does Pinot’s real-time ingestion handle out-of-order events?

    Pinot does not require ordering of event timestamps. Out-of-order events are still consumed and indexed into the "currently consuming" segment. In a pathological case, if a 2-day-old event comes in "now", it will still be stored in the segment that is open for consumption "now". There is no strict time-based partitioning for segments, but star-tree indexes and hybrid tables will handle this appropriately.

    See Components > Brokerarrow-up-right for more details about how hybrid tables handle this. Specifically, the time-boundary is computed as max(OfflineTime) - 1 unit of granularity. Pinot does store the min-max time for each segment and uses it for pruning segments, so segments with multiple time intervals may not be perfectly pruned.
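As a small worked example of that formula, assuming a granularity of 1 DAY and an arbitrary maximum offline time:

```python
from datetime import datetime, timedelta, timezone

# Assumed values: the latest time seen across offline segments,
# and a table time granularity of 1 DAY.
max_offline_time = datetime(2024, 1, 10, tzinfo=timezone.utc)
one_unit = timedelta(days=1)

# time-boundary = max(OfflineTime) - 1 unit of granularity
time_boundary = max_offline_time - one_unit
print(time_boundary.date())  # 2024-01-09

# Records at or before the boundary are served from the offline table;
# later records are served from the real-time table.
```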

    When generating star-tree indexes, the time column will be part of the star-tree, so the tree can still be efficiently queried for segments with multiple time intervals.

    hashtag
    Why does a hybrid table use an offset, instead of max(OfflineTime), to determine the time-boundary?

    This lets you have an old event come in without building complex offline pipelines that perfectly partition your events by event timestamps. With this offset, even if your offline data pipeline produces segments up to some maximum timestamp, Pinot will not use the offline dataset for that last chunk of segments. The expectation is that when you process the next time-range of data offline, your data pipeline will include any late events.

    hashtag
    Why are segments not strictly time-partitioned?

    It might seem odd that segments are not strictly time-partitioned, unlike similar systems such as Apache Druid. This allows real-time ingestion to consume out-of-order events. Even though segments are not strictly time-partitioned, Pinot will still index, prune, and query segments intelligently by time intervals for the performance of hybrid tables and time-filtered data.

    When generating offline segments, the segments are generated such that each segment only contains one time interval and is well partitioned by the time column.

    "tableIndexConfig": {
          ..
          "segmentPartitionConfig": {
            "columnPartitionMap": {
              "column_foo": {
                "functionName": "Murmur",
                "numPartitions": 12 // same as number of kafka partitions
              }
            }
          }
    "routing": {
          "segmentPrunerTypes": ["partition"]
        }
        {
          "dataType": "STRING",
          "maxLength": 1000,
          "name": "textDim1"
        },
    {
      "<segment-name>": {
        "segmentName": "<segment-name>",
        "indexes": {
          "<columnName>": {
            "bloom-filter": "NO",
            "dictionary": "YES",
            "forward-index": "YES",
            "inverted-index": "YES",
            "null-value-vector-reader": "NO",
            "range-index": "NO",
            "json-index": "NO"
          }
        }
      }
    }
    Download the latest Kafkaarrow-up-right. Create a topic
    docker run \
        --network pinot-demo_default --name=kafka \
        -e KAFKA_ZOOKEEPER_CONNECT=manual-zookeeper:2181/kafka \
        -e KAFKA_BROKER_ID=0 \
        -e KAFKA_ADVERTISED_HOST_NAME=kafka \
        -d wurstmeister/kafka:latest
    docker exec \
      -t kafka \
      /opt/kafka/bin/kafka-topics.sh \
      --zookeeper manual-zookeeper:2181/kafka \
      --partitions=1 --replication-factor=1 \
      --create --topic transcript-topic
    bin/pinot-admin.sh StartKafka -zkAddress=localhost:2123/kafka -port 9876
    bin/kafka-topics.sh --create --bootstrap-server localhost:9876 --replication-factor 1 --partitions 1 --topic transcript-topic

    Operations FAQ

    hashtag
    Operations

    hashtag
    How much heap should I allocate for my Pinot instances?

    Typically, Pinot components try to use off-heap memory (MMAP/DirectMemory) wherever possible. For example, Pinot servers load segments in memory-mapped files in MMAP mode (recommended), or in direct memory in HEAP mode. Heap memory is used mostly for query execution and storing some metadata. We have seen production deployments with high throughput and low latency work well with just 16 GB of heap for Pinot servers and brokers. The Pinot controller may also cache some metadata (table configs etc.) in heap, so if there are just a few tables in the Pinot cluster, a few GB of heap should suffice.

    hashtag
    Does Pinot provide any backup/restore mechanism?

    Pinot relies on deep storage for keeping a backup copy of segments (offline as well as realtime), and on Zookeeper for storing metadata (table configs, schema, cluster state, etc.). It does not explicitly provide tools to back up or restore this data, but relies on the deep storage (ADLS/S3/GCP/etc.) and ZK to persist this data/metadata.

    hashtag
    Can I change a column name in my table, without losing data?

    Changing a column name or data type is considered a backward-incompatible change. While Pinot supports schema evolution for backward-compatible changes, it does not support backward-incompatible changes like changing the name or data type of a column.

    hashtag
    How to change number of replicas of a table?

    You can change the number of replicas by updating the table config's section. Make sure you have at least as many servers as the replication factor.

    For OFFLINE table, update

    For REALTIME table, update

    After changing the replication, run a .

    hashtag
    How to run a rebalance on a table?

    Refer to .

    hashtag
    How to control number of segments generated?

    The number of segments generated depends on the number of input files. If you provide only 1 input file, you will get 1 segment. If you break up the input file into multiple files, you will get as many segments as the input files.

    hashtag
    What are the common reasons my segment is in a BAD state?

    This typically happens when the server is unable to load the segment. Possible causes: out of memory, no disk space, inability to download the segment from deep store, and similar errors. Check the server logs for more information.

    hashtag
    How to reset a segment when it runs into a BAD state?

    Use the segment reset controller REST API to reset the segment:

    hashtag
    What's the difference between Reset, Refresh, and Reload for a segment?

    RESET: this gets a segment in ERROR state back to ONLINE or CONSUMING state. Behind the scenes, the Pinot controller takes the segment to OFFLINE state, waits for the External View to stabilize, and then moves it back to ONLINE/CONSUMING state, thus effectively resetting segments or consumers in error states.

    REFRESH: this replaces the segment with a new one, with the same name but often different data. Under the hood, the Pinot controller sets new segment metadata in Zookeeper and notifies brokers and servers to check their local state for this segment and update accordingly. Servers also download the new segment to replace the old one when the checksums differ. There is no separate REST API for refreshing; it is done as part of the SegmentUpload API today.

    RELOAD: this reloads the segment, often to generate a new index as updated in the table config. Under the hood, the Pinot server gets the new table config from Zookeeper and uses it to guide the segment reloading. In fact, the last step of REFRESH, as explained above, is to load the segment into memory to serve queries. There is a dedicated REST API for reloading. By default, it doesn't download the segment, but an option is provided to force the server to download the segment to replace the local one cleanly.

    In addition, RESET brings the segment OFFLINE temporarily, while REFRESH and RELOAD swap the segment on the server atomically without bringing down the segment or affecting ongoing queries.

    hashtag
    How can I make brokers/servers join the cluster without the DefaultTenant tag?

    Set this property in your controller.conf file

    Now your brokers and servers should join the cluster as broker_untagged and server_untagged. You can then directly use the POST /tenants API to create the desired tenants.

    hashtag
    How do I manually run a Periodic Task?

    Refer to

    hashtag
    Tuning and Optimizations

    hashtag
    Do replica groups work for real-time?

    Yes, replica groups work for realtime. There are two parts to enabling replica groups:

    1. Replica groups segment assignment

    2. Replica group query routing

    Replica group segment assignment

    Replica group segment assignment is achieved in realtime if the number of servers is a multiple of the number of replicas. The partitions get uniformly sprayed across the servers, creating replica groups. For example, consider that we have 6 partitions, 2 replicas, and 4 servers.

    p1: r1 → S0, r2 → S1
    p2: r1 → S2, r2 → S3
    p3: r1 → S0, r2 → S1
    p4: r1 → S2, r2 → S3
    p5: r1 → S0, r2 → S1
    p6: r1 → S2, r2 → S3

    As you can see, the set (S0, S2) contains r1 of every partition, and (S1, S3) contains r2 of every partition. The query will only be routed to one of the sets, and will not span every server. If you are adding/removing servers from an existing table setup, you have to run a rebalance for segment assignment changes to take effect.
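The spraying described above can be sketched in a few lines (illustrative round-robin logic, not Pinot's actual assignment code):

```python
# Round-robin spraying of 6 partitions x 2 replicas over 4 servers.
NUM_PARTITIONS, NUM_REPLICAS, NUM_SERVERS = 6, 2, 4
servers = [f"S{i}" for i in range(NUM_SERVERS)]

assignment = {}  # (partition, replica) -> server
slot = 0
for p in range(NUM_PARTITIONS):
    for r in range(NUM_REPLICAS):
        assignment[(p, r)] = servers[slot % NUM_SERVERS]
        slot += 1

# Each replica index ends up confined to one set of servers: a replica group.
group_r1 = {assignment[(p, 0)] for p in range(NUM_PARTITIONS)}
group_r2 = {assignment[(p, 1)] for p in range(NUM_PARTITIONS)}
assert group_r1 == {"S0", "S2"} and group_r2 == {"S1", "S3"}
```

Because r1 never leaves (S0, S2) and r2 never leaves (S1, S3), a query routed to either set reaches every partition while touching only half the servers.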

    Replica group query routing

    Once replica group segment assignment is in effect, query routing can take advantage of it. For replica group based query routing, set the following in the table config's section, and then restart the brokers.

    hashtag

    Quick Start Examples

    This section describes quick start commands that launch all Pinot components in a single process.

    Pinot ships with QuickStart commands that launch Pinot components in a single process and import pre-built datasets. These QuickStarts are a good place to begin if you're just getting started with Pinot.

    circle-info

    Prerequisites

    You will need to have Pinot installed locally or have Docker installed.

    /tmp/pinot-quick-start/transcript-table-realtime.json
    {
      "tableName": "transcript",
      "tableType": "REALTIME",
      "segmentsConfig": {
        "timeColumnName": "timestampInEpoch",
        "timeType": "MILLISECONDS",
        "schemaName": "transcript",
        "replicasPerPartition": "1"
      },
      "tenants": {},
      "tableIndexConfig": {
        "loadMode": "MMAP",
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.topic.name": "transcript-topic",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
          "stream.kafka.broker.list": "kafka:9092",
          "realtime.segment.flush.threshold.rows": "0",
          "realtime.segment.flush.threshold.time": "24h",
          "realtime.segment.flush.threshold.segment.size": "50M",
          "stream.kafka.consumer.prop.auto.offset.reset": "smallest"
        }
      },
      "metadata": {
        "customConfigs": {}
      }
    }
    docker run \
        --network=pinot-demo_default \
        -v /tmp/pinot-quick-start:/tmp/pinot-quick-start \
        --name pinot-streaming-table-creation \
        apachepinot/pinot:latest AddTable \
        -schemaFile /tmp/pinot-quick-start/transcript-schema.json \
        -tableConfigFile /tmp/pinot-quick-start/transcript-table-realtime.json \
        -controllerHost manual-pinot-controller \
        -controllerPort 9000 \
        -exec
    bin/pinot-admin.sh AddTable \
        -schemaFile /tmp/pinot-quick-start/transcript-schema.json \
        -tableConfigFile /tmp/pinot-quick-start/transcript-table-realtime.json \
        -exec
    /tmp/pinot-quick-start/rawData/transcript.json
    {"studentID":205,"firstName":"Natalie","lastName":"Jones","gender":"Female","subject":"Maths","score":3.8,"timestampInEpoch":1571900400000}
    {"studentID":205,"firstName":"Natalie","lastName":"Jones","gender":"Female","subject":"History","score":3.5,"timestampInEpoch":1571900400000}
    {"studentID":207,"firstName":"Bob","lastName":"Lewis","gender":"Male","subject":"Maths","score":3.2,"timestampInEpoch":1571900400000}
    {"studentID":207,"firstName":"Bob","lastName":"Lewis","gender":"Male","subject":"Chemistry","score":3.6,"timestampInEpoch":1572418800000}
    {"studentID":209,"firstName":"Jane","lastName":"Doe","gender":"Female","subject":"Geography","score":3.8,"timestampInEpoch":1572505200000}
    {"studentID":209,"firstName":"Jane","lastName":"Doe","gender":"Female","subject":"English","score":3.5,"timestampInEpoch":1572505200000}
    {"studentID":209,"firstName":"Jane","lastName":"Doe","gender":"Female","subject":"Maths","score":3.2,"timestampInEpoch":1572678000000}
    {"studentID":209,"firstName":"Jane","lastName":"Doe","gender":"Female","subject":"Physics","score":3.6,"timestampInEpoch":1572678000000}
    {"studentID":211,"firstName":"John","lastName":"Doe","gender":"Male","subject":"Maths","score":3.8,"timestampInEpoch":1572678000000}
    {"studentID":211,"firstName":"John","lastName":"Doe","gender":"Male","subject":"English","score":3.5,"timestampInEpoch":1572678000000}
    {"studentID":211,"firstName":"John","lastName":"Doe","gender":"Male","subject":"History","score":3.2,"timestampInEpoch":1572854400000}
    {"studentID":212,"firstName":"Nick","lastName":"Young","gender":"Male","subject":"History","score":3.6,"timestampInEpoch":1572854400000}
    bin/kafka-console-producer.sh \
        --broker-list localhost:9876 \
        --topic transcript-topic < /tmp/pinot-quick-start/rawData/transcript.json
    circle-exclamation

    macOS Monterey Users

    By default the Airplay receiver server runs on port 7000, which is also the port used by the Pinot Server in the Quick Start. You may see the following error when running these examples:

    If you disable the Airplay receiver server and try again, you shouldn't see this error message anymore.

    hashtag
    Batch

    This example demonstrates how to do batch processing with Pinot. The command:

    • Starts Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates the baseballStats table

    • Launches a standalone data ingestion job that builds one segment for a given CSV data file for the baseballStats table and pushes the segment to the Pinot Controller.

    • Issues sample queries to Pinot

    hashtag
    Batch JSON

    This example demonstrates how to import and query JSON documents in Pinot. The command:

    • Starts Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates the githubEvents table

    • Launches a standalone data ingestion job that builds one segment for a given JSON data file for the githubEvents table and pushes the segment to the Pinot Controller.

    • Issues sample queries to Pinot

    hashtag
    Batch with complex data types

    This example demonstrates how to do batch processing in Pinot where the data items have complex fields that need to be unnested. The command:

    • Starts Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates the githubEvents table

    • Launches a standalone data ingestion job that builds one segment for a given JSON data file for the githubEvents table and pushes the segment to the Pinot Controller.

    • Issues sample queries to Pinot

    hashtag
    Streaming

    This example demonstrates how to do stream processing with Pinot. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates meetupRsvp table

    • Launches a meetup stream

    • Publishes data to a Kafka topic meetupRSVPEvents that is subscribed to by Pinot.

    • Issues sample queries to Pinot

    hashtag
    Streaming JSON

    This example demonstrates how to do stream processing with JSON documents in Pinot. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates meetupRsvp table

    • Launches a meetup stream

    • Publishes data to a Kafka topic meetupRSVPEvents that is subscribed to by Pinot

    • Issues sample queries to Pinot

    hashtag
    Streaming with minion cleanup

    This example demonstrates how to do stream processing in Pinot with RealtimeToOfflineSegmentsTask and MergeRollupTask minion tasks continuously optimizing segments as data gets ingested. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, Pinot Minion, and Pinot Server.

    • Creates githubEvents table

    • Launches a GitHub events stream

    • Publishes data to a Kafka topic githubEvents that is subscribed to by Pinot.

    • Issues sample queries to Pinot

    hashtag
    Streaming with complex data types

    This example demonstrates how to do stream processing in Pinot where the stream contains items that have complex fields that need to be unnested. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, Pinot Minion, and Pinot Server.

    • Creates the meetupRsvp table

    • Launches a meetup stream

    • Publishes data to a Kafka topic meetupRSVPEvents that is subscribed to by Pinot.

    • Issues sample queries to Pinot

    hashtag
    Upsert

    This example demonstrates how to do stream processing with upserts in Pinot. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates the meetupRsvp table

    • Launches a meetup stream

    • Publishes data to a Kafka topic meetupRSVPEvents that is subscribed to by Pinot

    • Issues sample queries to Pinot

    hashtag
    Upsert JSON

    This example demonstrates how to do stream processing with upserts on JSON documents in Pinot. The command:

    • Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    • Creates the meetupRsvp table

    • Launches a meetup stream

    • Publishes data to a Kafka topic meetupRSVPEvents that is subscribed to by Pinot

    • Issues sample queries to Pinot

    hashtag
    Hybrid

    This example demonstrates how to do hybrid stream and batch processing with Pinot. The command:

    1. Starts Apache Kafka, Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server.

    2. Creates the airlineStats table

    3. Launches a standalone data ingestion job that builds segments under a given directory of Avro files for the airlineStats table and pushes the segments to the Pinot Controller.

    4. Launches a stream of flights stats

    5. Publishes data to a Kafka topic airlineStatsEvents that is subscribed to by Pinot.

    6. Issues sample queries to Pinot

    hashtag
    Join

    This example demonstrates how to do joins in Pinot using the Lookup UDF. The command:

    • Starts Apache Zookeeper, Pinot Controller, Pinot Broker, and Pinot Server in the same container.

    • Creates the baseballStats table

    • Launches a data ingestion job that builds one segment for a given CSV data file for the baseballStats table and pushes the segment to the Pinot Controller.

    • Creates the dimBaseballTeams table

    • Launches a data ingestion job that builds one segment for a given CSV data file for the dimBaseballTeams table and pushes the segment to the Pinot Controller.

    • Issues sample queries to Pinot
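    The queries this example issues use Pinot's lookUp UDF, which takes the dimension table name, the column to return, and the join key pair. As an illustration, such a query can be assembled like this (the column names teamID, teamName, and playerName are assumptions for illustration; check the actual baseballStats and dimBaseballTeams schemas):

```python
# Illustrative only: build the kind of lookup-join SQL this example runs.
# lookUp('dimTable', 'colToReturn', 'dimJoinKey', factJoinValue)
lookup_sql = (
    "SELECT playerName, "
    "lookUp('dimBaseballTeams', 'teamName', 'teamID', teamID) AS teamName "
    "FROM baseballStats LIMIT 10"
)
print(lookup_sql)
```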


    [Figure: assignment of partitions p1-p6 across servers S0-S3 in replica groups]


    Failed to start a Pinot [SERVER]
    java.lang.RuntimeException: java.net.BindException: Address already in use
    	at org.apache.pinot.core.transport.QueryServer.start(QueryServer.java:103) ~[pinot-all-0.9.0-jar-with-dependencies.jar:0.9.0-cf8b84e8b0d6ab62374048de586ce7da21132906]
    	at org.apache.pinot.server.starter.ServerInstance.start(ServerInstance.java:158) ~[pinot-all-0.9.0-jar-with-dependencies.jar:0.9.0-cf8b84e8b0d6ab62374048de586ce7da21132906]
    	at org.apache.helix.manager.zk.ParticipantManager.handleNewSession(ParticipantManager.java:110) ~[pinot-all-0.9.0-jar-with-dependencies.jar:0.9.0-cf8b84e8b0d6ab62374048de586ce7da2113
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type batch
    ./bin/pinot-admin.sh QuickStart -type batch
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type batch_json_index
    ./bin/pinot-admin.sh QuickStart -type batch_json_index
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type batch_complex_type
    ./bin/pinot-admin.sh QuickStart -type batch_complex_type
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type stream
    ./bin/pinot-admin.sh QuickStart -type stream
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type stream_json_index
    ./bin/pinot-admin.sh QuickStart -type stream_json_index
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type realtime_minion
    ./bin/pinot-admin.sh QuickStart -type realtime_minion
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type stream_complex_type
    ./bin/pinot-admin.sh QuickStart -type stream_complex_type
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type upsert
    ./bin/pinot-admin.sh QuickStart -type upsert
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type upsert_json_index
    ./bin/pinot-admin.sh QuickStart -type upsert_json_index
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type hybrid
    ./bin/pinot-admin.sh QuickStart -type hybrid
    docker run \
        -p 9000:9000 \
        apachepinot/pinot:0.9.3 QuickStart \
        -type join
    ./bin/pinot-admin.sh QuickStart -type join
    { 
        "tableName": "pinotTable", 
        "tableType": "OFFLINE", 
        "segmentsConfig": {
          "replication": "3", 
          ... 
        }
        ..
    { 
        "tableName": "pinotTable", 
        "tableType": "REALTIME", 
        "segmentsConfig": {
          "replicasPerPartition": "3", 
          ... 
        }
        ..
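    The two table-config fragments above control replication: OFFLINE tables read segmentsConfig.replication, while REALTIME tables read segmentsConfig.replicasPerPartition for consuming segments. A minimal sketch of which key applies (the helper is hypothetical, not part of Pinot):

```python
# Hypothetical helper: pick the replication key Pinot consults for a table.
def effective_replication(table_config: dict) -> str:
    seg = table_config["segmentsConfig"]
    if table_config["tableType"] == "REALTIME":
        # Consuming segments of REALTIME tables use replicasPerPartition.
        return seg["replicasPerPartition"]
    return seg["replication"]

offline = {"tableName": "pinotTable", "tableType": "OFFLINE",
           "segmentsConfig": {"replication": "3"}}
realtime = {"tableName": "pinotTable", "tableType": "REALTIME",
            "segmentsConfig": {"replicasPerPartition": "3"}}
print(effective_replication(offline), effective_replication(realtime))
```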
    curl -X POST "{host}/segments/{tableNameWithType}/{segmentName}/reset"
    cluster.tenant.isolation.enable=false
    curl -X POST "http://localhost:9000/tenants" \
      -H "accept: application/json" \
      -H "Content-Type: application/json" \
      -d "{\"tenantRole\":\"BROKER\",\"tenantName\":\"foo\",\"numberOfInstances\":1}"
    {
        "tableName": "pinotTable", 
        "tableType": "REALTIME",
        "routing": {
            "instanceSelectorType": "replicaGroup"
        }
        ..
    }

    Batch import example

    Step-by-step guide on pushing your own data into the Pinot cluster

    So far, we have set up our cluster, run some queries, and explored the admin endpoints. Now it's time to get our own data into Pinot. The rest of the instructions assume you're using Pinot in Docker.

    hashtag
    Preparing your data

    Let's gather our data files and put them in pinot-quick-start/rawdata.

    Supported file formats are CSV, JSON, AVRO, PARQUET, THRIFT, and ORC. If you don't have sample data, you can use this sample CSV.

    hashtag
    Creating a schema

    A schema is used to define the columns and data types of the Pinot table. A detailed overview of the schema can be found in the Schema reference.

    Briefly, we categorize our columns into three types:

    For example, in our sample table, the playerID, yearID, teamID, league, and playerName columns are the dimensions; the playerStint, numberOfGames, numberOfGamesAsBatter, AtBatting, runs, hits, doubles, triples, homeRuns, runsBattedIn, stolenBases, caughtStealing, baseOnBalls, strikeouts, intentionalWalks, hitsByPitch, sacrificeHits, sacrificeFlies, groundedIntoDoublePlays, and G_old columns are the metrics; and there is no time column.

    Once you have identified the dimensions, metrics and time columns, create a schema for your data, using the reference below.
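    Assembling the schema JSON from categorized columns is mechanical; here is a sketch (the build_schema helper is hypothetical, not part of Pinot) that mirrors the transcript schema used later in this guide:

```python
import json

# Hypothetical helper: assemble a Pinot schema JSON from column categories.
def build_schema(name, dimensions, metrics, time_column=None):
    schema = {
        "schemaName": name,
        "dimensionFieldSpecs": [{"name": c, "dataType": t} for c, t in dimensions],
        "metricFieldSpecs": [{"name": c, "dataType": t} for c, t in metrics],
    }
    if time_column is not None:
        col, fmt, granularity = time_column
        schema["dateTimeFieldSpecs"] = [{
            "name": col, "dataType": "LONG",
            "format": fmt, "granularity": granularity,
        }]
    return json.dumps(schema, indent=2)

schema_json = build_schema(
    "transcript",
    dimensions=[("studentID", "INT"), ("firstName", "STRING")],
    metrics=[("score", "FLOAT")],
    time_column=("timestampInEpoch", "1:MILLISECONDS:EPOCH", "1:MILLISECONDS"),
)
print(schema_json)
```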

    hashtag
    Creating a table config

    A table config is used to define the configuration for the Pinot table. A detailed overview of table configs can be found in the Table reference.

    Here's the table config for the sample CSV file. You can use this as a reference to build your own table config. Simply edit the tableName and schemaName.

    hashtag
    Uploading your table config and schema

    Check the directory structure so far

    Upload the table config using the following command

    Check out the table config and schema in the Rest API to make sure it was successfully uploaded.

    hashtag
    Creating a segment

    A Pinot table's data is stored as Pinot segments. A detailed overview of segments can be found in the Segment reference.

    To generate a segment, we first need to create a job spec YAML file. The job spec contains all the information about the data format, the input data location, and the Pinot cluster coordinates. You can copy this job spec file as-is. If you're using your own data, be sure to 1) replace transcript with your table name, and 2) set the correct recordReaderSpec.
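    The two edits called out above can also be stamped into a spec skeleton programmatically; a sketch with Python's string.Template (the skeleton is abridged from the full job spec shown later, and the helper is illustrative only):

```python
from string import Template

# Abridged job-spec skeleton; substitute your table name and reader class.
SPEC = Template("""\
jobType: SegmentCreationAndTarPush
inputDirURI: '/tmp/pinot-quick-start/rawdata/'
includeFileNamePattern: 'glob:**/*.csv'
outputDirURI: '/tmp/pinot-quick-start/segments/'
recordReaderSpec:
  dataFormat: '$data_format'
  className: '$reader_class'
tableSpec:
  tableName: '$table'
""")

spec = SPEC.substitute(
    table="transcript",
    data_format="csv",
    reader_class="org.apache.pinot.plugin.inputformat.csv.CSVRecordReader",
)
print(spec)
```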

    Use the following command to generate a segment and upload it

    Sample output

    Check that your segment made it to the table using the Rest API.

    hashtag
    Querying your data

    You're all set! You should see your table in the Query Console and be able to run queries against it now.
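    Queries can also be issued over HTTP: Pinot brokers accept SQL via POST /query/sql with a JSON body (host and port depend on your deployment). A sketch of building that request body, without actually sending it:

```python
import json

def sql_payload(query: str) -> str:
    # Body shape expected by the broker's POST /query/sql endpoint.
    return json.dumps({"sql": query})

payload = sql_payload("SELECT COUNT(*) FROM transcript")
print(payload)
```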

    mkdir -p /tmp/pinot-quick-start/rawdata

    • Dimensions: typically used in filters and group by, for slicing and dicing into data
    
    • Metrics: typically used in aggregations; represents the quantitative data
    
    • Time: optional column; represents the timestamp associated with each row

    /tmp/pinot-quick-start/rawdata/transcript.csv
    studentID,firstName,lastName,gender,subject,score,timestampInEpoch
    200,Lucy,Smith,Female,Maths,3.8,1570863600000
    200,Lucy,Smith,Female,English,3.5,1571036400000
    201,Bob,King,Male,Maths,3.2,1571900400000
    202,Nick,Young,Male,Physics,3.6,1572418800000
    /tmp/pinot-quick-start/transcript-schema.json
    {
      "schemaName": "transcript",
      "dimensionFieldSpecs": [
        {
          "name": "studentID",
          "dataType": "INT"
        },
        {
          "name": "firstName",
          "dataType": "STRING"
        },
        {
          "name": "lastName",
          "dataType": "STRING"
        },
        {
          "name": "gender",
          "dataType": "STRING"
        },
        {
          "name": "subject",
          "dataType": "STRING"
        }
      ],
      "metricFieldSpecs": [
        {
          "name": "score",
          "dataType": "FLOAT"
        }
      ],
      "dateTimeFieldSpecs": [{
        "name": "timestampInEpoch",
        "dataType": "LONG",
        "format" : "1:MILLISECONDS:EPOCH",
        "granularity": "1:MILLISECONDS"
      }]
    }
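    The dateTimeFieldSpec above stores timestampInEpoch as epoch milliseconds; a quick sanity check that the sample rows decode to the expected October 2019 dates:

```python
from datetime import datetime, timezone

def to_utc(ms: int) -> datetime:
    # timestampInEpoch is epoch milliseconds, per the schema's format string.
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

first, last = 1570863600000, 1572418800000  # min/max from the sample CSV
print(to_utc(first).date(), to_utc(last).date())
```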
    /tmp/pinot-quick-start/transcript-table-offline.json
    {
      "tableName": "transcript",
      "segmentsConfig" : {
        "timeColumnName": "timestampInEpoch",
        "timeType": "MILLISECONDS",
        "replication" : "1",
        "schemaName" : "transcript"
      },
      "tableIndexConfig" : {
        "invertedIndexColumns" : [],
        "loadMode"  : "MMAP"
      },
      "tenants" : {
        "broker":"DefaultTenant",
        "server":"DefaultTenant"
      },
      "tableType":"OFFLINE",
      "metadata": {}
    }
    $ ls /tmp/pinot-quick-start
    rawdata			transcript-schema.json	transcript-table-offline.json
    
    $ ls /tmp/pinot-quick-start/rawdata 
    transcript.csv
    docker run --rm -ti \
        --network=pinot-demo_default \
        -v /tmp/pinot-quick-start:/tmp/pinot-quick-start \
        --name pinot-batch-table-creation \
        apachepinot/pinot:latest AddTable \
        -schemaFile /tmp/pinot-quick-start/transcript-schema.json \
        -tableConfigFile /tmp/pinot-quick-start/transcript-table-offline.json \
        -controllerHost manual-pinot-controller \
        -controllerPort 9000 -exec
    bin/pinot-admin.sh AddTable \
      -tableConfigFile /tmp/pinot-quick-start/transcript-table-offline.json \
      -schemaFile /tmp/pinot-quick-start/transcript-schema.json -exec
    /tmp/pinot-quick-start/docker-job-spec.yml
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
    jobType: SegmentCreationAndTarPush
    inputDirURI: '/tmp/pinot-quick-start/rawdata/'
    includeFileNamePattern: 'glob:**/*.csv'
    outputDirURI: '/tmp/pinot-quick-start/segments/'
    overwriteOutput: true
    pinotFSSpecs:
      - scheme: file
        className: org.apache.pinot.spi.filesystem.LocalPinotFS
    recordReaderSpec:
      dataFormat: 'csv'
      className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
      configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
    tableSpec:
      tableName: 'transcript'
      schemaURI: 'http://manual-pinot-controller:9000/tables/transcript/schema'
      tableConfigURI: 'http://manual-pinot-controller:9000/tables/transcript'
    pinotClusterSpecs:
      - controllerURI: 'http://manual-pinot-controller:9000'
    /tmp/pinot-quick-start/batch-job-spec.yml
    executionFrameworkSpec:
      name: 'standalone'
      segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
      segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
      segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
    jobType: SegmentCreationAndTarPush
    inputDirURI: '/tmp/pinot-quick-start/rawdata/'
    includeFileNamePattern: 'glob:**/*.csv'
    outputDirURI: '/tmp/pinot-quick-start/segments/'
    overwriteOutput: true
    pinotFSSpecs:
      - scheme: file
        className: org.apache.pinot.spi.filesystem.LocalPinotFS
    recordReaderSpec:
      dataFormat: 'csv'
      className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
      configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
    tableSpec:
      tableName: 'transcript'
      schemaURI: 'http://localhost:9000/tables/transcript/schema'
      tableConfigURI: 'http://localhost:9000/tables/transcript'
    pinotClusterSpecs:
      - controllerURI: 'http://localhost:9000'
    docker run --rm -ti \
        --network=pinot-demo_default \
        -v /tmp/pinot-quick-start:/tmp/pinot-quick-start \
        --name pinot-data-ingestion-job \
        apachepinot/pinot:latest LaunchDataIngestionJob \
        -jobSpecFile /tmp/pinot-quick-start/docker-job-spec.yml
    bin/pinot-admin.sh LaunchDataIngestionJob \
        -jobSpecFile /tmp/pinot-quick-start/batch-job-spec.yml
    SegmentGenerationJobSpec: 
    !!org.apache.pinot.spi.ingestion.batch.spec.SegmentGenerationJobSpec
    excludeFileNamePattern: null
    executionFrameworkSpec: {extraConfigs: null, name: standalone, segmentGenerationJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner,
      segmentTarPushJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner,
      segmentUriPushJobRunnerClassName: org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner}
    includeFileNamePattern: glob:**\/*.csv
    inputDirURI: /tmp/pinot-quick-start/rawdata/
    jobType: SegmentCreationAndTarPush
    outputDirURI: /tmp/pinot-quick-start/segments
    overwriteOutput: true
    pinotClusterSpecs:
    - {controllerURI: 'http://localhost:9000'}
    pinotFSSpecs:
    - {className: org.apache.pinot.spi.filesystem.LocalPinotFS, configs: null, scheme: file}
    pushJobSpec: null
    recordReaderSpec: {className: org.apache.pinot.plugin.inputformat.csv.CSVRecordReader,
      configClassName: org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig,
      configs: null, dataFormat: csv}
    segmentNameGeneratorSpec: null
    tableSpec: {schemaURI: 'http://localhost:9000/tables/transcript/schema', tableConfigURI: 'http://localhost:9000/tables/transcript',
      tableName: transcript}
    
    Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
    Finished building StatsCollector!
    Collected stats for 4 documents
    Using fixed bytes value dictionary for column: studentID, size: 9
    Created dictionary for STRING column: studentID with cardinality: 3, max length in bytes: 3, range: 200 to 202
    Using fixed bytes value dictionary for column: firstName, size: 12
    Created dictionary for STRING column: firstName with cardinality: 3, max length in bytes: 4, range: Bob to Nick
    Using fixed bytes value dictionary for column: lastName, size: 15
    Created dictionary for STRING column: lastName with cardinality: 3, max length in bytes: 5, range: King to Young
    Created dictionary for FLOAT column: score with cardinality: 4, range: 3.2 to 3.8
    Using fixed bytes value dictionary for column: gender, size: 12
    Created dictionary for STRING column: gender with cardinality: 2, max length in bytes: 6, range: Female to Male
    Using fixed bytes value dictionary for column: subject, size: 21
    Created dictionary for STRING column: subject with cardinality: 3, max length in bytes: 7, range: English to Physics
    Created dictionary for LONG column: timestampInEpoch with cardinality: 4, range: 1570863600000 to 1572418800000
    Start building IndexCreator!
    Finished records indexing in IndexCreator!
    Finished segment seal!
    Converting segment: /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0 to v3 format
    v3 segment location for segment: transcript_OFFLINE_1570863600000_1572418800000_0 is /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3
    Deleting files in v1 segment directory: /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0
    Starting building 1 star-trees with configs: [StarTreeV2BuilderConfig[splitOrder=[studentID, firstName],skipStarNodeCreation=[],functionColumnPairs=[org.apache.pinot.core.startree.v2.AggregationFunctionColumnPair@3a48efdc],maxLeafRecords=1]] using OFF_HEAP builder
    Starting building star-tree with config: StarTreeV2BuilderConfig[splitOrder=[studentID, firstName],skipStarNodeCreation=[],functionColumnPairs=[org.apache.pinot.core.startree.v2.AggregationFunctionColumnPair@3a48efdc],maxLeafRecords=1]
    Generated 3 star-tree records from 4 segment records
    Finished constructing star-tree, got 9 tree nodes and 4 records under star-node
    Finished creating aggregated documents, got 6 aggregated records
    Finished building star-tree in 10ms
    Finished building 1 star-trees in 27ms
    Computed crc = 3454627653, based on files [/var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3/columns.psf, /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3/index_map, /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3/metadata.properties, /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3/star_tree_index, /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0/v3/star_tree_index_map]
    Driver, record read time : 0
    Driver, stats collector time : 0
    Driver, indexing time : 0
    Tarring segment from: /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0 to: /var/folders/3z/qn6k60qs6ps1bb6s2c26gx040000gn/T/pinot-1583443148720/output/transcript_OFFLINE_1570863600000_1572418800000_0.tar.gz
    Size for segment: transcript_OFFLINE_1570863600000_1572418800000_0, uncompressed: 6.73KB, compressed: 1.89KB
    Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner
    Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
    Start pushing segments: [/tmp/pinot-quick-start/segments/transcript_OFFLINE_1570863600000_1572418800000_0.tar.gz]... to locations: [org.apache.pinot.spi.ingestion.batch.spec.PinotClusterSpec@243c4f91] for table transcript
    Pushing segment: transcript_OFFLINE_1570863600000_1572418800000_0 to location: http://localhost:9000 for table transcript
    Sending request: http://localhost:9000/v2/segments?tableName=transcript to controller: nehas-mbp.hsd1.ca.comcast.net, version: Unknown
    Response for pushing table transcript segment transcript_OFFLINE_1570863600000_1572418800000_0 to location http://localhost:9000 - 200: {"status":"Successfully uploaded segment: transcript_OFFLINE_1570863600000_1572418800000_0 of table: transcript"}

    Running in Kubernetes

    Pinot quick start in Kubernetes

    hashtag
    1. Prerequisites

    circle-info

    This quickstart assumes that you already have a running Kubernetes cluster. Please follow the links below to set up a Kubernetes cluster.

    • Enable Kubernetes on Docker-Desktop, install Minikube for a local setup (make sure to run with enough resources, e.g. minikube start --vm=true --cpus=4 --memory=8g --disk-size=50g), or set up a managed cluster on Amazon EKS, GKE, or AKS.

    hashtag
    2. Setting up a Pinot cluster in Kubernetes

    Before continuing, please make sure that you've downloaded Apache Pinot. The scripts for the setup in this guide can be found in our open source project on GitHub.

    The scripts can be found in the Pinot source at ./pinot/kubernetes/helm

    hashtag
    2.1 Start Pinot with Helm

    The Pinot repo includes pre-packaged Helm charts for Pinot and Presto; the Helm repo index file is published in the Pinot GitHub repository.

    NOTE: Specify a StorageClass based on your cloud vendor. For the Pinot Server, do not mount a blob store such as AzureFile, Google Cloud Storage, or S3 as the data-serving file system; use only Amazon EBS, GCP Persistent Disk, or Azure Disk style disks.

    • For AWS: "gp2"

    hashtag
    2.2 Check Pinot deployment status

    hashtag
    3. Load data into Pinot using Kafka

    hashtag
    3.1 Bring up a Kafka cluster for real-time data ingestion

    hashtag
    3.2 Check Kafka deployment status

    Ensure the Kafka deployment is ready before executing the scripts in the following steps.

    hashtag
    3.3 Create Kafka topics

    The scripts below will create two Kafka topics for data ingestion:

    hashtag
    3.4 Load data into Kafka and create Pinot schema/tables

    The script below will deploy 3 batch jobs.

    • Ingest 19492 JSON messages to Kafka topic flights-realtime at a speed of 1 msg/sec

    • Ingest 19492 Avro messages to Kafka topic flights-realtime-avro at a speed of 1 msg/sec

    • Upload Pinot schema airlineStats
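    At 1 msg/sec, each 19492-message topic takes roughly five and a half hours to drain, so the real-time tables keep receiving events long after setup completes:

```python
# Back-of-the-envelope: how long the loader jobs keep publishing.
messages, rate_per_sec = 19492, 1
hours = messages / rate_per_sec / 3600
print(f"about {hours:.1f} hours of streaming ingestion")
```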

    hashtag
    4. Query using Pinot Data Explorer

    hashtag
    4.1 Pinot Data Explorer

    Use the script below to perform local port-forwarding; it will also open the Pinot query console in your default web browser.

    This script can be found in the Pinot source at ./pinot/kubernetes/helm/pinot

    hashtag
    5. Using Superset to query Pinot

    hashtag
    5.1 Bring up Superset using helm

    Install SuperSet Helm Repo

    Get Helm values config file:

    Edit the /tmp/superset-values.yaml file and add the pinotdb pip dependency to the bootstrapScript field so that Superset installs the Pinot dependencies at bootstrap time.

    You can also build your own image with this dependency or just use image: apachepinot/pinot-superset:latest instead.

    Also remember to change the admin credentials inside the init section to a meaningful user profile and a stronger password.

    Install Superset using helm

    Ensure your cluster is up by running:

    hashtag
    5.2 Access Superset UI

    Run the command below to port-forward Superset to localhost:18088, then open Superset in your browser and log in with the admin credentials you set earlier.

    Create Pinot Database using URI:

    pinot+http://pinot-broker.pinot-quickstart:8099/query?controller=http://pinot-controller.pinot-quickstart:9000/

    Once the database is added, you can add more data sets and explore the dashboarding.

    hashtag
    6. Access Pinot using Trino

    hashtag
    6.1 Deploy Trino

    You can run the command below to deploy Trino with the Pinot plugin installed.

    The above command adds Trino HelmChart repo. You can then run the below command to see the charts.

    To connect Trino to Pinot, we need to add a Pinot catalog, which requires extra configuration. Run the command below to see all the configurable values.

    To add Pinot catalog, you can edit the additionalCatalogs section by adding:

    circle-info

    Pinot is deployed at namespace pinot-quickstart, so the controller serviceURL is pinot-controller.pinot-quickstart:9000

    After modifying the /tmp/trino-values.yaml file, you can deploy Trino with:

    Once Trino is deployed, you can check the deployment status with:

    hashtag
    6.2 Query Trino using Trino CLI

    Once Trino is deployed, you can run the below command to get a runnable Trino CLI.

    hashtag
    6.2.1 Download Trino CLI

    6.2.2 Port forward the Trino service locally if it's not already exposed

    6.2.3 Use Trino console client to connect to Trino service

    6.2.4 Query Pinot data using Trino CLI

    hashtag
    6.3 Sample queries to execute

    • List all catalogs

    • List All tables

    • Show schema

    • Count total documents

    hashtag
    7. Access Pinot using Presto

    hashtag
    7.1 Deploy Presto using Pinot plugin

    You can run the command below to deploy a customized Presto with the Pinot plugin installed.

    The above command deploys Presto with default configs. For customizing your deployment, you can run the below command to get all the configurable values.

    After modifying the /tmp/presto-values.yaml file, you can deploy Presto with:

    Once Presto is deployed, you can check the deployment status with:

    hashtag
    7.2 Query Presto using Presto CLI

    Once Presto is deployed, you can run the pinot-presto-cli.sh script below, or follow the same steps as 6.2.1 to 6.2.3 in the Trino section.

    hashtag
    7.2.1 Download Presto CLI

    7.2.2 Port forward presto-coordinator port 8080 to localhost port 18080

    hashtag
    7.2.3 Start the Presto CLI with the pinot catalog and query it

    7.2.4 Query Pinot data using the Presto CLI

    hashtag
    7.3 Sample queries to execute

    • List all catalogs

    • List All tables

    • Show schema

    • Count total documents

    hashtag
    8. Deleting the Pinot cluster in Kubernetes

    Setup a Kubernetes Cluster using Google Kubernetes Engine (GKE)

  • Setup a Kubernetes Cluster using Azure Kubernetes Service (AKS)

  • For GCP: "pd-ssd" or "standard"

  • For Azure: "AzureDisk"

  • For Docker-Desktop: "hostpath"

  • hashtag
    2.1.1 Update helm dependency

    helm dependency update

    hashtag
    2.1.2 Start Pinot with Helm

    • For Helm v2.12.1

    If your Kubernetes cluster is recently provisioned, ensure Helm is initialized by running:

    Then deploy a new HA Pinot cluster using the following command:

    • For Helm v3.0.0

    hashtag
    2.1.3 Troubleshooting (For helm v2.12.1)

    • Error: Please run the below command if encountering the following issue:

    • Resolution:

    • Error: Please run the command below if encountering a permission issue:

    Error: release pinot failed: namespaces "pinot-quickstart" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "pinot-quickstart"

    • Resolution:

  • Create Pinot table airlineStats to ingest data from JSON encoded Kafka topic flights-realtime

  • Create Pinot table airlineStatsAvro to ingest data from Avro encoded Kafka topic flights-realtime-avro

    helm repo add pinot https://raw.githubusercontent.com/apache/pinot/master/kubernetes/helm
    kubectl create ns pinot-quickstart
    helm install pinot pinot/pinot \
        -n pinot-quickstart \
        --set cluster.name=pinot \
        --set server.replicaCount=2
    Enable Kubernetes on Docker-Desktop
    Install Minikube for local setup
    Setup a Kubernetes Cluster using Amazon Elastic Kubernetes Service (Amazon EKS)
    Sample Output of K8s Deployment Status
    # checkout pinot
    git clone https://github.com/apache/pinot.git
    cd pinot/kubernetes/helm
    kubectl get all -n pinot-quickstart
    helm repo add incubator https://charts.helm.sh/incubator
    helm install -n pinot-quickstart kafka incubator/kafka --set replicas=1,zookeeper.image.tag=latest
    helm repo add incubator https://charts.helm.sh/incubator
    helm install --namespace "pinot-quickstart"  --name kafka incubator/kafka --set zookeeper.image.tag=latest 
    kubectl get all -n pinot-quickstart | grep kafka
    pod/kafka-0                                                 1/1     Running     0          2m
    pod/kafka-zookeeper-0                                       1/1     Running     0          10m
    pod/kafka-zookeeper-1                                       1/1     Running     0          9m
    pod/kafka-zookeeper-2                                       1/1     Running     0          8m
    kubectl -n pinot-quickstart exec kafka-0 -- kafka-topics --zookeeper kafka-zookeeper:2181 --topic flights-realtime --create --partitions 1 --replication-factor 1
    kubectl -n pinot-quickstart exec kafka-0 -- kafka-topics --zookeeper kafka-zookeeper:2181 --topic flights-realtime-avro --create --partitions 1 --replication-factor 1
    kubectl apply -f pinot/pinot-realtime-quickstart.yml
    ./query-pinot-data.sh
    helm repo add superset https://apache.github.io/superset
    helm inspect values superset/superset > /tmp/superset-values.yaml
    kubectl create ns superset
    helm upgrade --install --values /tmp/superset-values.yaml superset superset/superset -n superset
    kubectl get all -n superset
    kubectl port-forward service/superset 18088:8088 -n superset
    helm repo add trino https://trinodb.github.io/charts/
    helm search repo trino
    helm inspect values trino/trino > /tmp/trino-values.yaml
    additionalCatalogs:
      pinot: |
        connector.name=pinot
        pinot.controller-urls=pinot-controller.pinot-quickstart:9000
    kubectl create ns trino-quickstart
    helm install my-trino trino/trino --version 0.2.0 -n trino-quickstart --values /tmp/trino-values.yaml
    kubectl get pods -n trino-quickstart
    curl -L https://repo1.maven.org/maven2/io/trino/trino-cli/363/trino-cli-363-executable.jar -o /tmp/trino && chmod +x /tmp/trino
    echo "Visit http://127.0.0.1:18080 to use your application"
    kubectl port-forward service/my-trino 18080:8080 -n trino-quickstart
    /tmp/trino --server localhost:18080 --catalog pinot --schema default
    trino:default> show catalogs;
      Catalog
    ---------
     pinot
     system
     tpcds
     tpch
    (4 rows)
    
    Query 20211025_010256_00002_mxcvx, FINISHED, 2 nodes
    Splits: 36 total, 36 done (100.00%)
    0.70 [0 rows, 0B] [0 rows/s, 0B/s]
    trino:default> show tables;
        Table
    --------------
     airlinestats
    (1 row)
    
    Query 20211025_010326_00003_mxcvx, FINISHED, 3 nodes
    Splits: 36 total, 36 done (100.00%)
    0.28 [1 rows, 29B] [3 rows/s, 104B/s]
    trino:default> DESCRIBE airlinestats;
            Column        |      Type      | Extra | Comment
    ----------------------+----------------+-------+---------
     flightnum            | integer        |       |
     origin               | varchar        |       |
     quarter              | integer        |       |
     lateaircraftdelay    | integer        |       |
     divactualelapsedtime | integer        |       |
     divwheelsons         | array(integer) |       |
     divwheelsoffs        | array(integer) |       |
    ......
    
    Query 20211025_010414_00006_mxcvx, FINISHED, 3 nodes
    Splits: 36 total, 36 done (100.00%)
    0.37 [79 rows, 5.96KB] [212 rows/s, 16KB/s]
    trino:default> select count(*) as cnt from airlinestats limit 10;
     cnt
    ------
     9746
    (1 row)
    
    Query 20211025_015607_00009_mxcvx, FINISHED, 2 nodes
    Splits: 17 total, 17 done (100.00%)
    0.24 [1 rows, 9B] [4 rows/s, 38B/s]
    # Install Presto with default values, or customize the values file first
    helm install presto pinot/presto -n pinot-quickstart
    kubectl apply -f presto-coordinator.yaml
    helm inspect values pinot/presto > /tmp/presto-values.yaml
    helm install presto pinot/presto -n pinot-quickstart --values /tmp/presto-values.yaml
    kubectl get pods -n pinot-quickstart
    # Connect with the Presto CLI (the helper script downloads it, forwards the
    # coordinator port, and opens a session; the individual steps follow)
    ./pinot-presto-cli.sh
    curl -L https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.246/presto-cli-0.246-executable.jar -o /tmp/presto-cli && chmod +x /tmp/presto-cli
    kubectl port-forward service/presto-coordinator 18080:8080 -n pinot-quickstart > /dev/null &
    /tmp/presto-cli --server localhost:18080 --catalog pinot --schema default
    presto:default> show catalogs;
     Catalog
    ---------
     pinot
     system
    (2 rows)
    
    Query 20191112_050827_00003_xkm4g, FINISHED, 1 node
    Splits: 19 total, 19 done (100.00%)
    0:01 [0 rows, 0B] [0 rows/s, 0B/s]
    presto:default> show tables;
        Table
    --------------
     airlinestats
    (1 row)
    
    Query 20191112_050907_00004_xkm4g, FINISHED, 1 node
    Splits: 19 total, 19 done (100.00%)
    0:01 [1 rows, 29B] [1 rows/s, 41B/s]
    presto:default> DESCRIBE pinot.dontcare.airlinestats;
            Column        |  Type   | Extra | Comment
    ----------------------+---------+-------+---------
     flightnum            | integer |       |
     origin               | varchar |       |
     quarter              | integer |       |
     lateaircraftdelay    | integer |       |
     divactualelapsedtime | integer |       |
    ......
    
    Query 20191112_051021_00005_xkm4g, FINISHED, 1 node
    Splits: 19 total, 19 done (100.00%)
    0:02 [80 rows, 6.06KB] [35 rows/s, 2.66KB/s]
    presto:default> select count(*) as cnt from pinot.dontcare.airlinestats limit 10;
     cnt
    ------
     9745
    (1 row)
    
    Query 20191112_051114_00006_xkm4g, FINISHED, 1 node
    Splits: 17 total, 17 done (100.00%)
    0:00 [1 rows, 8B] [2 rows/s, 19B/s]
    # Clean up the quick start namespace
    kubectl delete ns pinot-quickstart
    # Helm v2 install (requires Tiller)
    helm init --service-account tiller
    helm install --namespace "pinot-quickstart" --name "pinot" pinot
    # Helm v3 install (no Tiller)
    kubectl create ns pinot-quickstart
    helm install -n pinot-quickstart pinot pinot
    # If Helm v2 reports the error below, remove the broken Tiller deployment,
    # re-initialize Tiller, and reapply the RBAC config
    Error: could not find tiller.
    kubectl -n kube-system delete deployment tiller-deploy
    kubectl -n kube-system delete service/tiller-deploy
    helm init --service-account tiller
    kubectl apply -f helm-rbac.yaml
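The Tiller-related commands above apply only to Helm v2; Helm v3 removed Tiller entirely, so the `helm install -n pinot-quickstart pinot pinot` form works directly. A minimal sketch of branching on the installed Helm version (the version string is hard-coded here for illustration; in practice substitute `$(helm version --short)`):

```shell
# Pick the right install path for your Helm major version.
# HELM_VERSION is hard-coded for illustration; in practice use:
#   HELM_VERSION="$(helm version --short)"
HELM_VERSION="v2.17.0"
case "$HELM_VERSION" in
  v2*) echo "Helm v2: run 'helm init --service-account tiller' before installing" ;;
  *)   echo "Helm v3: no Tiller needed; helm install -n pinot-quickstart pinot pinot" ;;
esac
```

This avoids the `could not find tiller` error entirely on Helm v3, where the RBAC and `helm init` steps are unnecessary.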