Use S3 as Deep Storage for Pinot

The commands below are based on the Pinot distribution binary.

Setup Pinot Cluster

To set up Pinot to use S3 as the deep store, we need to add extra configs for the Controller and Server.

Start Controller

Below is a sample controller.conf file.
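A minimal sketch of the S3-related portion of controller.conf (the bucket name, paths, and region are placeholders; standard controller settings such as host, port, and ZooKeeper address are omitted):

# Deep store location; all uploaded segments will be stored here (placeholder bucket/path)
controller.data.dir=s3://my.bucket/pinot-data/controller-data
controller.local.temp.dir=/tmp/pinot-tmp-data

# Register s3 as a Pinot storage scheme
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2

# Allow the controller to fetch segments over s3
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher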

Configure controller.data.dir to point to your S3 bucket. All uploaded segments will be stored there.

Then add s3 as a Pinot storage with the following configs:

pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2

For AWS credentials, we follow the convention of DefaultAWSCredentialsProviderChain.

You can specify AccessKey and Secret using:

  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (RECOMMENDED since they are recognized by all the AWS SDKs and CLI except for .NET), or AWS_ACCESS_KEY and AWS_SECRET_KEY (only recognized by Java SDK)

  • Java System Properties - aws.accessKeyId and aws.secretKey

  • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI

  • Configure AWS credentials in the Pinot config file, e.g. set pinot.controller.storage.factory.s3.accessKey and pinot.controller.storage.factory.s3.secretKey. (Not recommended)

pinot.controller.storage.factory.s3.accessKey=****************LFVX
pinot.controller.storage.factory.s3.secretKey=****************gfhz

Add s3 to pinot.controller.segment.fetcher.protocols, and set pinot.controller.segment.fetcher.s3.class to org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher.
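For example, in controller.conf:

pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher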

If you want to grant full control to the bucket owner, then add this to the config:
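For example (assuming your Pinot version exposes the S3PinotFS disableAcl option; verify the exact property name for your release):

pinot.controller.storage.factory.s3.disableAcl=false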

Then start the Pinot Controller with:
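For example, using the admin script from the distribution (the config file path is a placeholder):

bin/pinot-admin.sh StartController -configFileName conf/pinot-controller.conf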

Start Broker

The Broker does not need any extra S3 config; you can just start it with the defaults:
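For example (the ZooKeeper address is a placeholder):

bin/pinot-admin.sh StartBroker -zkAddress localhost:2181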

Start Server

Below is a sample server.conf file.
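A minimal sketch of the S3-related portion of server.conf (directories, bucket, and region are placeholders):

pinot.server.instance.dataDir=/tmp/pinot-tmp/server/index
pinot.server.instance.segmentTarDir=/tmp/pinot-tmp/server/segmentTars

# Register s3 as a Pinot storage scheme
pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.server.storage.factory.s3.region=us-west-2

# Allow the server to fetch segments over s3
pinot.server.segment.fetcher.protocols=file,http,s3
pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher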

Similar to the controller config, also set the S3 configs in the Pinot Server.

If you want to grant full control to the bucket owner, then add this to the config:
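For example (assuming the same S3PinotFS disableAcl option applies on the server side; verify against your Pinot version):

pinot.server.storage.factory.s3.disableAcl=false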

Then start the Pinot Server with:
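For example (the config file path is a placeholder):

bin/pinot-admin.sh StartServer -configFileName conf/pinot-server.conf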

Setup Table

In this demo, we use the airlineStats table as an example.

Create the table with the command below:
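A sketch using the airlineStats schema and table config that ship with the distribution (adjust the paths to your setup):

bin/pinot-admin.sh AddTable \
  -schemaFile examples/batch/airlineStats/airlineStats_schema.json \
  -tableConfigFile examples/batch/airlineStats/airlineStats_offline_table_config.json \
  -exec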

Set up Ingestion Jobs

Standalone Job

Below is a sample standalone ingestion job spec with certain notable changes:

  • jobType is SegmentCreationAndUriPush

  • inputDirURI is set to an S3 location s3://my.bucket/batch/airlineStats/rawdata/

  • outputDirURI is set to an S3 location s3://my.bucket/output/airlineStats/segments

  • Add a new PinotFS under pinotFSSpecs

  • For library versions < 0.6.0, set segmentUriPrefix to [scheme]://[bucket.name], e.g. s3://my.bucket; from version 0.6.0 onward, you can set it to an empty string or simply omit segmentUriPrefix.

Sample ingestionJobSpec.yaml
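A trimmed sketch of what the spec might look like for this example (bucket paths, region, input format, and controller URI are placeholders based on the notes above; class names follow the standard Pinot batch-ingestion plugins):

executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
jobType: SegmentCreationAndUriPush
inputDirURI: 's3://my.bucket/batch/airlineStats/rawdata/'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 's3://my.bucket/output/airlineStats/segments'
overwriteOutput: true
pinotFSSpecs:
  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
    configs:
      region: 'us-west-2'
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'airlineStats'
pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'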

Below is a sample job output:

Spark Job

Set up Spark Cluster (Skip if you already have one)

Follow this page to set up a local Spark cluster.

Submit Spark Job

Below is a sample Spark ingestion job spec.
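The spec is largely the same as the standalone one above; the main difference is the executionFrameworkSpec, which might look like this (class names follow the standard pinot-batch-ingestion-spark plugin; the staging directory is a placeholder):

executionFrameworkSpec:
  name: 'spark'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentUriPushJobRunner'
  extraConfigs:
    stagingDir: 's3://my.bucket/batch/airlineStats/staging'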

Submit the Spark job with the ingestion job spec:
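A hedged sketch of the submit command (jar paths, versions, master URL, and the job spec location are placeholders; the main class is Pinot's LaunchDataIngestionJobCommand):

${SPARK_HOME}/bin/spark-submit \
  --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
  --master "local[2]" \
  --deploy-mode client \
  --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins" \
  --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
  local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
  -jobSpecFile sparkIngestionJobSpec.yaml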

Sample Results/Snapshots

Below is a sample snapshot of the S3 location for the controller:

Sample S3 Controller Storage

Below is a sample download URI in PropertyStore. We expect the segment download URI to start with s3://
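For example, an entry in the segment's ZK metadata might look like this (bucket, path, and segment name are placeholders):

segment.download.url: s3://my.bucket/pinot-data/controller-data/airlineStats/airlineStats_OFFLINE_16071_16101_0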

Sample segment download URI in PropertyStore
