Use S3 and Pinot in Docker

Setup Pinot Cluster

In order to set up Pinot in Docker with S3 as the deep store, we need to add extra configs for the Controller and Server.

Create a docker network

docker network create -d bridge pinot-demo

Start Zookeeper

docker run \
    --name zookeeper \
    --restart always \
    --network=pinot-demo \
    -d zookeeper:3.5.6

Prepare Pinot configuration files

The sections below prepare three config files under /tmp/pinot-s3-docker to mount into the containers.

/tmp/pinot-s3-docker/
                     controller.conf
                     server.conf
                     ingestionJobSpec.yaml

Start Controller

Below is a sample controller.conf file.
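The sketch below is one way such a file could look, not a definitive config: the bucket s3://my.bucket, the us-west-2 region, the pinot-s3-example-docker cluster name, and the S3PinotFS class name are placeholders and assumptions to verify against your Pinot version.

controller.data.dir=s3://my.bucket/pinot-data/pinot-s3-example/controller-data
controller.local.temp.dir=/tmp/pinot-tmp-data
controller.zk.str=zookeeper:2181
controller.host=pinot-controller
controller.port=9000
controller.helix.cluster.name=pinot-s3-example-docker

# S3 as Pinot storage (deep store); class name and region are assumptions to verify
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2

# Let the controller fetch segments through the s3 protocol
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher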

Please configure:

controller.data.dir to your S3 bucket. All the uploaded segments will be stored there.

And add s3 as a Pinot storage, using the pinot.controller.storage.factory.* configs shown in the sample above.

Regarding AWS credentials, we also follow the convention of DefaultAWSCredentialsProviderChain.

You can specify AccessKey and Secret using:

  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (RECOMMENDED since they are recognized by all the AWS SDKs and CLI except for .NET), or AWS_ACCESS_KEY and AWS_SECRET_KEY (only recognized by Java SDK)

  • Java System Properties - aws.accessKeyId and aws.secretKey

  • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI

  • Configure AWS credentials in the Pinot config files, e.g. set pinot.controller.storage.factory.s3.accessKey and pinot.controller.storage.factory.s3.secretKey in the config file (not recommended).

Add s3 to pinot.controller.segment.fetcher.protocols, and set pinot.controller.segment.fetcher.s3.class to org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher (both appear in the sample above).

Then start pinot controller with:
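A sketch of the launch command, assuming the config above lives under /tmp/pinot-s3-docker on the host and credentials are passed as environment variables; the -configFileName flag and the latest image tag are assumptions that may differ across Pinot versions.

docker run -d \
    --name pinot-controller \
    --network=pinot-demo \
    -p 9000:9000 \
    --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
    --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
    --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp/pinot-s3-docker \
    apachepinot/pinot:latest StartController \
    -configFileName /tmp/pinot-s3-docker/controller.conf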

Start Broker

The Broker is simple; you can just start it with the default config:
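For example, along these lines (pinot-s3-example-docker is the placeholder cluster name used in the controller sample above):

docker run -d \
    --name pinot-broker \
    --network=pinot-demo \
    apachepinot/pinot:latest StartBroker \
    -zkAddress zookeeper:2181 \
    -clusterName pinot-s3-example-docker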

Start Server

Below is a sample server.conf file.
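A minimal sketch, with the same placeholder bucket/region/class-name assumptions as the controller sample; the server properties mirror the controller ones with the pinot.server prefix.

# Server ports and local directories
pinot.server.netty.port=8098
pinot.server.adminapi.port=8097
pinot.server.instance.dataDir=/tmp/pinot-tmp/server/index
pinot.server.instance.segmentTarDir=/tmp/pinot-tmp/server/segmentTars

# S3 storage and segment fetcher configs (class name and region are assumptions to verify)
pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.server.storage.factory.s3.region=us-west-2
pinot.server.segment.fetcher.protocols=file,http,s3
pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher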

Similar to the controller config, please also set the s3 configs in the Pinot Server config.

Then start pinot server with:
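A sketch under the same assumptions as the controller command (mounted config directory, credentials via environment variables, -configFileName flag):

docker run -d \
    --name pinot-server \
    --network=pinot-demo \
    --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
    --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
    --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp/pinot-s3-docker \
    apachepinot/pinot:latest StartServer \
    -zkAddress zookeeper:2181 \
    -clusterName pinot-s3-example-docker \
    -configFileName /tmp/pinot-s3-docker/server.conf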

Setup Table

In this demo, we just use the airlineStats table as an example, which is already packaged inside the Docker image.

You can also mount your own table config and schema files into the container and use them instead.
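A sketch of adding the bundled table with pinot-admin from inside the controller container; the example file paths (relative to the image's working directory) are assumptions that may vary by release.

docker exec -it pinot-controller bin/pinot-admin.sh AddTable \
    -tableConfigFile examples/batch/airlineStats/airlineStats_offline_table_config.json \
    -schemaFile examples/batch/airlineStats/airlineStats_schema.json \
    -controllerHost pinot-controller \
    -controllerPort 9000 \
    -exec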

Set up Ingestion Jobs

Standalone Job

Below is a sample standalone ingestion job spec with certain notable changes:

  • jobType is SegmentCreationAndMetadataPush (this job type bypasses having the Controller download the whole segment; only the segment metadata is pushed to the Controller)

  • inputDirURI is set to an S3 location s3://my.bucket/batch/airlineStats/rawdata/

  • outputDirURI is set to an S3 location s3://my.bucket/output/airlineStats/segments

  • Add a new PinotFS under pinotFSSpecs

Sample ingestionJobSpec.yaml
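The following is a sketch of such a spec, assuming Avro input, the standalone runner classes shipped with Pinot, and the placeholder bucket/region used earlier; the class names and the includeFileNamePattern are assumptions to check against your Pinot version.

executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'

# Create segments locally, push only their metadata to the controller
jobType: SegmentCreationAndMetadataPush

inputDirURI: 's3://my.bucket/batch/airlineStats/rawdata/'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 's3://my.bucket/output/airlineStats/segments'
overwriteOutput: true

# Register both the local and the s3 filesystem implementations
pinotFSSpecs:
  - scheme: file
    className: org.apache.pinot.spi.filesystem.LocalPinotFS
  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
    configs:
      region: 'us-west-2'

recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'

tableSpec:
  tableName: 'airlineStats'
  schemaURI: 'http://pinot-controller:9000/tables/airlineStats/schema'
  tableConfigURI: 'http://pinot-controller:9000/tables/airlineStats'

pinotClusterSpecs:
  - controllerURI: 'http://pinot-controller:9000'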

Launch the data ingestion job:
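For example, as a one-off container on the same network, with the spec mounted from /tmp/pinot-s3-docker and credentials passed through the environment:

docker run --rm -ti \
    --network=pinot-demo \
    --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
    --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
    --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp/pinot-s3-docker \
    apachepinot/pinot:latest LaunchDataIngestionJob \
    -jobSpecFile /tmp/pinot-s3-docker/ingestionJobSpec.yaml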
