Use S3 and Pinot in Docker

Setup Pinot Cluster

To set up Pinot in Docker with S3 as the deep store, we need to add extra configs to the Controller and Server.

Create a docker network

docker network create -d bridge pinot-demo

Start Zookeeper

docker run \
  --name zookeeper \
  --restart always \
  --network=pinot-demo \
  -d zookeeper:3.5.6

Prepare Pinot configuration files

The sections below prepare three config files under /tmp/pinot-s3-docker to mount into the containers:

/tmp/pinot-s3-docker/
  controller.conf
  server.conf
  ingestionJobSpec.yaml
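
If the directory does not exist yet, create it on the host first (a minimal sketch; adjust the path if you keep your configs elsewhere):

mkdir -p /tmp/pinot-s3-docker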

Start Controller

Below is a sample controller.conf file.
Set controller.data.dir to your S3 bucket; all uploaded segments will be stored there.
Then add s3 as a Pinot storage with the following configs:
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2
For AWS credentials, Pinot follows the convention of DefaultAWSCredentialsProviderChain.
You can specify the access key and secret using:
  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (RECOMMENDED since they are recognized by all the AWS SDKs and the CLI except for .NET), or AWS_ACCESS_KEY and AWS_SECRET_KEY (only recognized by the Java SDK); see the export sketch below
  • Java System Properties - aws.accessKeyId and aws.secretKey
  • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
  • Configure AWS credentials in the Pinot config files (not recommended), e.g. set pinot.controller.storage.factory.s3.accessKey and pinot.controller.storage.factory.s3.secretKey in the config file:
pinot.controller.storage.factory.s3.accessKey=****************LFVX
pinot.controller.storage.factory.s3.secretKey=****************gfhz
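
For the recommended environment-variable approach, a minimal sketch using the same placeholders as the docker run commands below (substitute your own credentials):

export AWS_ACCESS_KEY_ID=<aws-access-key-id>
export AWS_SECRET_ACCESS_KEY=<aws-secret-access-key>
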
Add s3 to pinot.controller.segment.fetcher.protocols
and set pinot.controller.segment.fetcher.s3.class to org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher.
pinot.role=controller
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2
controller.data.dir=s3://<my-bucket>/pinot-data/pinot-s3-example-docker/controller-data/
controller.local.temp.dir=/tmp/pinot-tmp-data/
controller.helix.cluster.name=pinot-s3-example-docker
controller.zk.str=zookeeper:2181
controller.port=9000
controller.enable.split.commit=true
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
Then start the Pinot Controller with:
docker run --rm -ti \
  --name pinot-controller \
  --network=pinot-demo \
  -p 9000:9000 \
  --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
  --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
  --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 StartController \
  -configFileName /tmp/controller.conf
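
Once the container is up, you can sanity-check that the Controller is reachable on the mapped port by hitting its REST API (a sketch, assuming the standard /tables endpoint; it should return an empty table list at this point):

curl http://localhost:9000/tables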

Start Broker

The Broker is simpler; you can start it with the default config:
docker run --rm -ti \
  --name pinot-broker \
  --network=pinot-demo \
  --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
  --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 StartBroker \
  -zkAddress zookeeper:2181 -clusterName pinot-s3-example-docker

Start Server

Below is a sample server.conf file.
Similar to the Controller config, set the S3 configs for the Pinot Server as well.
pinot.server.netty.port=8098
pinot.server.adminapi.port=8097
pinot.server.instance.dataDir=/tmp/pinot-tmp/server/index
pinot.server.instance.segmentTarDir=/tmp/pinot-tmp/server/segmentTars

pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.server.storage.factory.s3.region=us-west-2
pinot.server.segment.fetcher.protocols=file,http,s3
pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
Then start the Pinot Server with:
docker run --rm -ti \
  --name pinot-server \
  --network=pinot-demo \
  --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
  --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
  --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 StartServer \
  -zkAddress zookeeper:2181 -clusterName pinot-s3-example-docker \
  -configFileName /tmp/server.conf
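
At this point all components should have registered with the cluster. As a quick sanity check (assuming the Controller's standard /instances endpoint), list the instances and verify the Broker and Server show up:

curl http://localhost:9000/instances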

Setup Table

This demo uses the airlineStats table as an example, which is already packaged inside the Docker image.
You can also mount your own table config and schema files into the container and run the same command (see the sketch after the command below).
docker run --rm -ti \
  --name pinot-ingestion-job \
  --network=pinot-demo \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 AddTable \
  -controllerHost pinot-controller \
  -controllerPort 9000 \
  -schemaFile examples/batch/airlineStats/airlineStats_schema.json \
  -tableConfigFile examples/batch/airlineStats/airlineStats_offline_table_config.json \
  -exec
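
To add your own table instead, here is a hedged variant of the same command, assuming you place the table config and schema under /tmp/pinot-s3-docker on the host (the container name and file names below are placeholders):

docker run --rm -ti \
  --name pinot-add-table \
  --network=pinot-demo \
  --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 AddTable \
  -controllerHost pinot-controller \
  -controllerPort 9000 \
  -schemaFile /tmp/<my-schema>.json \
  -tableConfigFile /tmp/<my-table-config>.json \
  -exec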

Set up Ingestion Jobs

Standalone Job

Below is a sample standalone ingestion job spec with a few notable changes:
  • jobType is SegmentCreationAndMetadataPush (with this job type, the Controller does not need to download the segment)
  • inputDirURI is set to an S3 location s3://my.bucket/batch/airlineStats/rawdata/
  • outputDirURI is set to an S3 location s3://my.bucket/output/airlineStats/segments
  • Add a new PinotFS under pinotFSSpecs:

    - scheme: s3
      className: org.apache.pinot.plugin.filesystem.S3PinotFS
      configs:
        region: 'us-west-2'
Sample ingestionJobSpec.yaml
# executionFrameworkSpec: Defines ingestion jobs to be run.
executionFrameworkSpec:

  # name: execution framework name
  name: 'standalone'

  # segmentGenerationJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentGenerationJobRunner interface.
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'

  # segmentTarPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentTarPushJobRunner interface.
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'

  # segmentUriPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentUriPushJobRunner interface.
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'

  # segmentMetadataPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.IngestionJobRunner interface.
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'

# jobType: Pinot ingestion job type.
# Supported job types are:
#   'SegmentCreation'
#   'SegmentTarPush'
#   'SegmentUriPush'
#   'SegmentCreationAndTarPush'
#   'SegmentCreationAndUriPush'
jobType: SegmentCreationAndMetadataPush

# inputDirURI: Root directory of input data, expected to have scheme configured in PinotFS.
inputDirURI: 's3://<my-bucket>/pinot-data/rawdata/airlineStats/rawdata/'

# includeFileNamePattern: include file name pattern, supported glob pattern.
# Sample usage:
#   'glob:*.avro' will include all avro files just under the inputDirURI, not sub directories;
#   'glob:**/*.avro' will include all the avro files under inputDirURI recursively.
includeFileNamePattern: 'glob:**/*.avro'

# excludeFileNamePattern: exclude file name pattern, supported glob pattern.
# Sample usage:
#   'glob:*.avro' will exclude all avro files just under the inputDirURI, not sub directories;
#   'glob:**/*.avro' will exclude all the avro files under inputDirURI recursively.
# _excludeFileNamePattern: ''

# outputDirURI: Root directory of output segments, expected to have scheme configured in PinotFS.
outputDirURI: 's3://<my-bucket>/pinot-data/pinot-s3-docker/segments/airlineStats'

# segmentCreationJobParallelism: The parallelism to create segments.
segmentCreationJobParallelism: 5

# overwriteOutput: Overwrite output segments if existed.
overwriteOutput: true

# pinotFSSpecs: defines all related Pinot file systems.
pinotFSSpecs:

  - # scheme: used to identify a PinotFS.
    # E.g. local, hdfs, dbfs, etc
    scheme: file

    # className: Class name used to create the PinotFS instance.
    # E.g.
    #   org.apache.pinot.spi.filesystem.LocalPinotFS is used for local filesystem
    #   org.apache.pinot.plugin.filesystem.AzurePinotFS is used for Azure Data Lake
    #   org.apache.pinot.plugin.filesystem.HadoopPinotFS is used for HDFS
    className: org.apache.pinot.spi.filesystem.LocalPinotFS

  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
    configs:
      region: 'us-west-2'

# recordReaderSpec: defines all record readers
recordReaderSpec:

  # dataFormat: Record data format, e.g. 'avro', 'parquet', 'orc', 'csv', 'json', 'thrift' etc.
  dataFormat: 'avro'

  # className: Corresponding RecordReader class name.
  # E.g.
  #   org.apache.pinot.plugin.inputformat.avro.AvroRecordReader
  #   org.apache.pinot.plugin.inputformat.csv.CSVRecordReader
  #   org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader
  #   org.apache.pinot.plugin.inputformat.json.JSONRecordReader
  #   org.apache.pinot.plugin.inputformat.orc.ORCRecordReader
  #   org.apache.pinot.plugin.inputformat.thrift.ThriftRecordReader
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'

# tableSpec: defines table name and where to fetch corresponding table config and table schema.
tableSpec:

  # tableName: Table name
  tableName: 'airlineStats'

  # schemaURI: defines where to read the table schema, supports PinotFS or HTTP.
  # E.g.
  #   hdfs://path/to/table_schema.json
  #   http://localhost:9000/tables/myTable/schema
  schemaURI: 'http://pinot-controller:9000/tables/airlineStats/schema'

  # tableConfigURI: defines where to read the table config.
  # Supports using PinotFS or HTTP.
  # E.g.
  #   hdfs://path/to/table_config.json
  #   http://localhost:9000/tables/myTable
  # Note that the API to read Pinot table config directly from pinot controller contains a JSON wrapper.
  # The real table config is the object under the field 'OFFLINE'.
  tableConfigURI: 'http://pinot-controller:9000/tables/airlineStats'

# pinotClusterSpecs: defines the Pinot Cluster Access Point.
pinotClusterSpecs:
  - # controllerURI: used to fetch table/schema information and data push.
    # E.g. http://localhost:9000
    controllerURI: 'http://pinot-controller:9000'

# pushJobSpec: defines segment push job related configuration.
pushJobSpec:

  # pushAttempts: number of attempts for push job, default is 1, which means no retry.
  pushAttempts: 2

  # pushRetryIntervalMillis: retry wait Ms, default to 1 second.
  pushRetryIntervalMillis: 1000
Launch the data ingestion job:
docker run --rm -ti \
  --name pinot-ingestion-job \
  --network=pinot-demo \
  --env AWS_ACCESS_KEY_ID=<aws-access-key-id> \
  --env AWS_SECRET_ACCESS_KEY=<aws-secret-access-key> \
  --mount type=bind,source=/tmp/pinot-s3-docker,target=/tmp \
  apachepinot/pinot:0.6.0-SNAPSHOT-ca8545b29-20201105-jdk11 LaunchDataIngestionJob \
  -jobSpecFile /tmp/ingestionJobSpec.yaml
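
After the job finishes, you can verify that the segments landed in the deep store, e.g. with the AWS CLI (a sketch, assuming the bucket and output prefix from the job spec above):

aws s3 ls --recursive s3://<my-bucket>/pinot-data/pinot-s3-docker/segments/airlineStats/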