Running in Kubernetes
Pinot quick start in Kubernetes
Get started running Pinot in Kubernetes.
Prerequisites
Kubernetes
This guide assumes that you already have a running Kubernetes cluster.
If you haven't yet set up a Kubernetes cluster, see the links below for instructions:
Install Minikube for local setup
Make sure to run with enough resources:
minikube start --vm=true --cpus=4 --memory=8g --disk-size=50g
Pinot
Make sure that you've downloaded Apache Pinot. The scripts for the setup in this guide can be found in our open source project on GitHub.
# checkout pinot
git clone https://github.com/apache/pinot.git
cd pinot/helm/pinot
Set up a Pinot cluster in Kubernetes
Start Pinot with Helm
The Pinot repository has pre-packaged Helm charts for Pinot and Presto. The Helm repository index file is here.
Note: Specify StorageClass based on your cloud vendor. Don't mount a blob store (such as AzureFile, GoogleCloudStorage, or S3) as the data serving file system. Use only Amazon EBS/GCP Persistent Disk/Azure Disk-style disks.
For AWS: "gp2"
For GCP: "pd-ssd" or "standard"
For Azure: "AzureDisk"
For Docker-Desktop: "hostpath"
1.1.1 Update Helm dependency
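From the pinot/helm/pinot directory of the checkout, a minimal sketch of updating the chart's dependencies:
helm dependency update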
1.1.2 Start Pinot with Helm
For Helm v2.12.1:
If your Kubernetes cluster is recently provisioned, ensure Helm is initialized by running:
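A typical Helm v2 initialization looks like the following (the tiller service account is an assumption; adjust it to your cluster's RBAC setup):
# assumes a 'tiller' service account exists (see the troubleshooting section below)
helm init --service-account tiller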
Then deploy a new HA Pinot cluster using the following command:
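A sketch of the Helm v2 install, assuming you are still in the pinot/helm/pinot directory and want a release named pinot in the pinot-quickstart namespace:
helm install --namespace "pinot-quickstart" --name "pinot" .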
For Helm v3.0.0:
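With Helm v3 the release name is positional and the namespace must already exist; a minimal sketch using the same names as above:
kubectl create ns pinot-quickstart
helm install -n pinot-quickstart pinot .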
1.1.3 Troubleshooting (for Helm v2.12.1)
If you see the error below:
Run the following:
If you encounter a permission issue, like the following:
Error: release pinot failed: namespaces "pinot-quickstart" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "pinot-quickstart"
Run the command below:
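One common fix (an assumption based on the standard Helm v2 RBAC setup, not specific to this guide) is to give Tiller a dedicated service account with cluster-admin rights and point the existing deployment at it:
# grants cluster-admin to Tiller; acceptable for a quickstart, too broad for production
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'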
Check Pinot deployment status
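For example, assuming the pinot-quickstart namespace used above:
kubectl get all -n pinot-quickstart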
Load data into Pinot using Kafka
Bring up a Kafka cluster for real-time data ingestion
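One way to bring up a small Kafka cluster in the same namespace is the Bitnami chart (an assumption; the guide's bundled scripts may use a different chart or settings):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install -n pinot-quickstart kafka bitnami/kafka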
Check Kafka deployment status
Ensure the Kafka deployment is ready before executing the scripts in the following steps. Run the following command:
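For example, filtering the namespace resources for the Kafka release created above:
kubectl get all -n pinot-quickstart | grep kafka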
The deployment is ready once the Kafka pods in the output show a Running status.
Create Kafka topics
Run the scripts below to create two Kafka topics for data ingestion:
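A sketch using the Kafka CLI shipped inside the broker pod (the pod name kafka-0 and the script path are assumptions tied to the chart above; adjust them to your deployment):
kubectl -n pinot-quickstart exec kafka-0 -- kafka-topics.sh --bootstrap-server localhost:9092 --create --topic flights-realtime --partitions 1 --replication-factor 1
kubectl -n pinot-quickstart exec kafka-0 -- kafka-topics.sh --bootstrap-server localhost:9092 --create --topic flights-realtime-avro --partitions 1 --replication-factor 1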
Load data into Kafka and create Pinot schema/tables
The script below does the following:
Ingests 19492 JSON messages to Kafka topic flights-realtime at a speed of 1 msg/sec
Ingests 19492 Avro messages to Kafka topic flights-realtime-avro at a speed of 1 msg/sec
Uploads Pinot schema airlineStats
Creates Pinot table airlineStats to ingest data from the JSON-encoded Kafka topic flights-realtime
Creates Pinot table airlineStatsAvro to ingest data from the Avro-encoded Kafka topic flights-realtime-avro
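The loading job ships as a Kubernetes manifest in the Pinot Helm directory; a sketch of applying it (the file name pinot-realtime-quickstart.yml is an assumption, so verify it in your checkout):
# run from pinot/helm/pinot; the manifest name may differ in your checkout
kubectl apply -f pinot-realtime-quickstart.yml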
Query with the Pinot Data Explorer
Pinot Data Explorer
The script below, located in ./pinot/helm/pinot, performs local port forwarding and opens the Pinot query console in your default web browser.
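Assuming the checkout from the Prerequisites section, a sketch of running it (the script name query-pinot-data.sh is an assumption to verify in your checkout):
cd pinot/helm/pinot
./query-pinot-data.sh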
Query Pinot with Superset
Bring up Superset using Helm
Add the Superset Helm repository:
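A sketch of adding the repository (the URL is the one published by the Apache Superset project):
helm repo add superset https://apache.github.io/superset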
Get the Helm values configuration file:
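For example, dump the chart's default values into the file edited in the next step:
helm inspect values superset/superset > /tmp/superset-values.yaml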
For Superset to install Pinot dependencies, edit the /tmp/superset-values.yaml file to add a pinotdb pip dependency to the bootstrapScript field. You can also build your own image with this dependency or use the image apachepinot/pinot-superset:latest instead.

Replace the default admin credentials inside the init section with a meaningful user profile and a stronger password.
Install Superset using Helm:
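A minimal install sketch, assuming a dedicated superset namespace and the values file edited above:
kubectl create ns superset
helm upgrade --install --values /tmp/superset-values.yaml superset superset/superset -n superset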
Ensure your cluster is up by running:
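For example, assuming the superset namespace from the previous step:
kubectl get all -n superset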
Access the Superset UI
Run the below command to port forward Superset to your localhost:18088.
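A sketch of the port forward, assuming the chart's default service name and port (superset on 8088):
kubectl port-forward service/superset 18088:8088 -n superset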
Navigate to Superset in your browser with the admin credentials you set in the previous section.
Create a new database connection with the following URI:
pinot+http://pinot-broker.pinot-quickstart:8099/query?controller=http://pinot-controller.pinot-quickstart:9000/
Once the database is added, you can add more data sets and explore the dashboard options.
Access Pinot with Trino
Deploy Trino
Deploy Trino with the Pinot plugin installed:
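Start by adding the Trino community Helm repository (the URL below is the repository published by the Trino project):
helm repo add trino https://trinodb.github.io/charts/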
See the charts in the Trino Helm chart repository:
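For example:
helm search repo trino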
To connect Trino to Pinot, you'll need to add the Pinot catalog, which requires extra configuration. Run the below command to get all the configurable values.
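For example, write the chart defaults to the file referenced below:
helm inspect values trino/trino > /tmp/trino-values.yaml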
To add the Pinot catalog, edit the additionalCatalogs section by adding:
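A sketch of the catalog entry, assuming Pinot controllers are reachable at pinot-controller.pinot-quickstart:9000 from the deployment above (check the property names against the Trino Pinot connector docs):
additionalCatalogs:
  pinot: |
    connector.name=pinot
    pinot.controller-urls=pinot-controller.pinot-quickstart:9000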
After modifying the /tmp/trino-values.yaml file, deploy Trino with:
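A deploy sketch, assuming a release named my-trino in its own namespace:
kubectl create ns trino-quickstart
helm install -n trino-quickstart my-trino trino/trino --values /tmp/trino-values.yaml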
Once you've deployed Trino, check the deployment status:
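For example:
kubectl get pods -n trino-quickstart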

Query Pinot with the Trino CLI
Once Trino is deployed, run the below command to get a runnable Trino CLI.
Download the Trino CLI:
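A sketch using the executable JAR from Maven Central (363 is one known release; pick the version matching your Trino server):
curl -L https://repo1.maven.org/maven2/io/trino/trino-cli/363/trino-cli-363-executable.jar -o /tmp/trino
chmod +x /tmp/trino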
Port forward Trino service to your local if it's not already exposed:
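Assuming the my-trino release from above and local port 18080:
kubectl port-forward -n trino-quickstart service/my-trino 18080:8080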
Use the Trino console client to connect to the Trino service:
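For example, pointing the CLI at the forwarded port and the Pinot catalog:
/tmp/trino --server localhost:18080 --catalog pinot --schema default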
Query Pinot data using the Trino CLI, like in the sample queries below.
Sample queries to execute
List all catalogs
List all tables
Show schema
Count total documents
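A sketch of the statements matching the list above (the table name assumes the airlineStats table created earlier; Trino exposes Pinot table names in lower case):
SHOW CATALOGS;
SHOW TABLES;
-- table name assumes the airlineStats quickstart table
DESCRIBE airlinestats;
SELECT COUNT(*) FROM airlinestats;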
Access Pinot with Presto
Deploy Presto with the Pinot plugin
First, deploy Presto with default configurations:
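A deploy sketch using the Presto chart bundled in the Pinot repository (the chart path is an assumption; check the pinot/helm directory of your checkout):
# run from the pinot/helm directory of the Pinot checkout
helm install presto ./presto -n pinot-quickstart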
To customize your deployment, run the below command to get all the configurable values.
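For example, from the same directory, write the chart defaults to the file edited next:
helm inspect values ./presto > /tmp/presto-values.yaml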
After modifying the /tmp/presto-values.yaml file, deploy Presto:
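A sketch reusing the same chart path and namespace:
helm install presto ./presto -n pinot-quickstart --values /tmp/presto-values.yaml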
Once you've deployed the Presto instance, check the deployment status:
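For example:
kubectl get pods -n pinot-quickstart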
Query Presto using the Presto CLI
Once Presto is deployed, you can run the below command from here, or follow the steps below.
Download the Presto CLI:
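A sketch using the executable JAR from Maven Central (0.246 is one known release; pick the version matching your Presto server):
curl -L https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.246/presto-cli-0.246-executable.jar -o /tmp/presto-cli
chmod +x /tmp/presto-cli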
Port forward presto-coordinator port 8080 to localhost port 18080:
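Assuming the service name presto-coordinator created by the chart:
kubectl port-forward service/presto-coordinator 18080:8080 -n pinot-quickstart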
Start the Presto CLI with the Pinot catalog:
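For example:
/tmp/presto-cli --server localhost:18080 --catalog pinot --schema default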
Query Pinot data with the Presto CLI, like in the sample queries below.
Sample queries to execute
List all catalogs
List all tables
Show schema
Count total documents
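A sketch of the statements matching the list above (the table name assumes the airlineStats table created earlier; Presto exposes Pinot table names in lower case):
SHOW CATALOGS;
SHOW TABLES;
-- table name assumes the airlineStats quickstart table
DESCRIBE airlinestats;
SELECT COUNT(*) FROM airlinestats;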
Delete a Pinot cluster in Kubernetes
To delete your Pinot cluster in Kubernetes, run the following command:
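Deleting the namespace removes everything installed into it; assuming the pinot-quickstart namespace used throughout this guide:
kubectl delete ns pinot-quickstart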