Ingest Parquet Files from S3 Using Spark
One of the primary advantages of Pinot is its pluggable architecture. Plugins make it easy to add support for any third-party system, whether it is an execution framework, a filesystem, or an input format.
In this tutorial, we will use three such plugins to easily ingest data and push it to our Pinot cluster. The plugins we will be using are -
  • pinot-batch-ingestion-spark
  • pinot-s3
  • pinot-parquet
You can check out Batch Ingestion, File systems and Input formats for all the available plugins.

Setup

We are using the following tools and frameworks for this tutorial -
  • Apache Pinot 0.8.0
  • Apache Spark
  • Amazon S3 (for input and output storage)

Input Data

First, we need input data to ingest. For our demo, we'll create some small Parquet files and upload them to our S3 bucket. The easiest way is to create CSV files and then convert them to Parquet. CSV is human-readable, which makes it easier to modify the input if something fails during the demo. We will call this file students.csv
timestampInEpoch,id,name,age,score
1597044264380,1,david,15,98
1597044264381,2,henry,16,97
1597044264382,3,katie,14,99
1597044264383,4,catelyn,15,96
1597044264384,5,emma,13,93
1597044264390,6,john,15,100
1597044264396,7,isabella,13,89
1597044264399,8,linda,17,91
1597044264502,9,mark,16,67
1597044264670,10,tom,14,78
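If you'd rather generate the sample CSV programmatically than type it out, a minimal Python sketch (rows and column names taken from the data above; the helper name is our own) could look like this:

```python
import csv

# Sample rows matching the data shown above.
ROWS = [
    (1597044264380, 1, "david", 15, 98),
    (1597044264381, 2, "henry", 16, 97),
    (1597044264382, 3, "katie", 14, 99),
    (1597044264383, 4, "catelyn", 15, 96),
    (1597044264384, 5, "emma", 13, 93),
    (1597044264390, 6, "john", 15, 100),
    (1597044264396, 7, "isabella", 13, 89),
    (1597044264399, 8, "linda", 17, 91),
    (1597044264502, 9, "mark", 16, 67),
    (1597044264670, 10, "tom", 14, 78),
]

def write_students_csv(path="students.csv"):
    """Write the sample input file with a header row and return its path."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestampInEpoch", "id", "name", "age", "score"])
        writer.writerows(ROWS)
    return path
```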
Now, we'll create Parquet files from the above CSV file using Spark. Since this is a small program, we will use the Spark shell instead of writing a full-fledged Spark application.
scala> val df = spark.read.format("csv").option("header", true).load("/path/to/students.csv")
scala> df.write.option("compression","none").mode("overwrite").parquet("/path/to/batch_input/")
The .parquet files can now be found in the /path/to/batch_input directory. You can upload this directory to S3 either through the AWS console UI or by running the command
aws s3 cp /path/to/batch_input s3://my-bucket/batch-input/ --recursive

Create Schema and Table

We need to create a table to query the data that will be ingested. All tables in Pinot are associated with a schema. You can check out Table configuration and Schema configuration for more details on creating configurations.
For our demo, we will use the following schema and table configs
student_schema.json
{
  "schemaName": "students",
  "dimensionFieldSpecs": [
    {
      "name": "id",
      "dataType": "INT"
    },
    {
      "name": "name",
      "dataType": "STRING"
    },
    {
      "name": "age",
      "dataType": "INT"
    }
  ],
  "metricFieldSpecs": [
    {
      "name": "score",
      "dataType": "INT"
    }
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "timestampInEpoch",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}
student_table.json
{
  "tableName": "students",
  "segmentsConfig": {
    "timeColumnName": "timestampInEpoch",
    "timeType": "MILLISECONDS",
    "replication": "1",
    "schemaName": "students"
  },
  "tableIndexConfig": {
    "invertedIndexColumns": [],
    "loadMode": "MMAP"
  },
  "tenants": {
    "broker": "DefaultTenant",
    "server": "DefaultTenant"
  },
  "tableType": "OFFLINE",
  "metadata": {}
}
We can now upload these configurations to Pinot and create an empty table. We will be using the pinot-admin.sh CLI for this purpose.
pinot-admin.sh AddTable -tableConfigFile /path/to/student_table.json -schemaFile /path/to/student_schema.json -controllerHost localhost -controllerPort 9000 -exec
You can check out Command-Line Interface (CLI) for all the available commands.
Our table will now be available in the Pinot data explorer.
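To confirm the table exists without opening the UI, you can also query the controller's REST API. Below is a minimal sketch using only the Python standard library; the localhost:9000 address matches the controller used above, and the response shape (a JSON object with a "tables" list) is assumed from the Pinot controller API:

```python
import json
from urllib.request import urlopen

def parse_tables(body):
    """Extract the table-name list from the controller's JSON response."""
    return json.loads(body).get("tables", [])

def list_tables(controller="http://localhost:9000"):
    """Fetch the table names registered with the Pinot controller."""
    with urlopen(f"{controller}/tables") as resp:
        return parse_tables(resp.read().decode())
```

Once the AddTable command succeeds, `"students" in list_tables()` should be true.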

Ingest Data

Now that our data is available in S3 and our table exists in Pinot, we can start ingesting the data. Data ingestion in Pinot involves the following steps -
  • Read data and generate compressed segment files from input
  • Upload the compressed segment files to output location
  • Push the location of the segment files to the controller
Once the location is available to the controller, it can notify the servers to download the segment files and populate the tables.
The above steps can be performed using any distributed executor of your choice, such as Hadoop, Spark, or Flink. For this demo, we will use Apache Spark to execute the steps.
Pinot provides runners for Spark out of the box, so as a user you don't need to write a single line of code. You can also write runners for any other executor using the provided interfaces.
First, we will create a job spec configuration file for our data ingestion process.
spark_job_spec.yaml
executionFrameworkSpec:
  name: 'spark'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentUriPushJobRunner'
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentMetadataPushJobRunner'

# jobType: Pinot ingestion job type.
# Supported job types are:
#   'SegmentCreation'
#   'SegmentTarPush'
#   'SegmentUriPush'
#   'SegmentCreationAndTarPush'
#   'SegmentCreationAndUriPush'
#   'SegmentCreationAndMetadataPush'
jobType: SegmentCreationAndMetadataPush

extraConfigs:
  stagingDir: s3://my-bucket/spark/staging/

inputDirURI: 's3://my-bucket/batch-input/'
outputDirURI: 's3://my-bucket/batch-output/'
overwriteOutput: true

pinotFSSpecs:
  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
    configs:
      region: 'us-west-2'

recordReaderSpec:
  dataFormat: 'parquet'
  className: 'org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader'

tableSpec:
  tableName: 'students'

pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'

pushJobSpec:
  pushParallelism: 2
  pushAttempts: 2
  pushRetryIntervalMillis: 1000
In the job spec, we have set the execution framework to spark and configured the appropriate runners for each of our steps. We also need a temporary stagingDir for our Spark job; this directory is cleaned up after the job has executed.
We also specify the S3 filesystem and Parquet reader implementations to use in the config. You can refer to Ingestion Job Spec for the complete list of configuration options.
We can now run our Spark job to execute all the steps and populate the data in Pinot.
export PINOT_VERSION=0.8.0
export PINOT_DISTRIBUTION_DIR=/path/to/apache-pinot-${PINOT_VERSION}-bin

spark-submit \
  --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
  --master local --deploy-mode client \
  --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins -Dplugins.include=pinot-s3,pinot-parquet -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml" \
  --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/plugins/pinot-batch-ingestion/pinot-batch-ingestion-spark/pinot-batch-ingestion-spark-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-file-system/pinot-s3/pinot-s3-${PINOT_VERSION}-shaded.jar:${PINOT_DISTRIBUTION_DIR}/plugins/pinot-input-format/pinot-parquet/pinot-parquet-${PINOT_VERSION}-shaded.jar" \
  local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
  -jobSpecFile /path/to/spark_job_spec.yaml
In the command, we have included the JARs of all the required plugins in the Spark driver's classpath. In practice, you only need to do this if you get a ClassNotFoundException.
Voila! Our data is now successfully ingested. Let's try to query it from Pinot's broker
bin/pinot-admin.sh PostQuery -brokerHost localhost -brokerPort 8000 -queryType sql -query "SELECT * FROM students LIMIT 10"
If everything went right, you should see the following output
{
  "resultTable": {
    "dataSchema": {
      "columnNames": ["age", "id", "name", "score", "timestampInEpoch"],
      "columnDataTypes": ["INT", "INT", "STRING", "INT", "LONG"]
    },
    "rows": [
      [15, 1, "david", 98, 1597044264380],
      [16, 2, "henry", 97, 1597044264381],
      [14, 3, "katie", 99, 1597044264382],
      [15, 4, "catelyn", 96, 1597044264383],
      [13, 5, "emma", 93, 1597044264384],
      [15, 6, "john", 100, 1597044264390],
      [13, 7, "isabella", 89, 1597044264396],
      [17, 8, "linda", 91, 1597044264399],
      [16, 9, "mark", 67, 1597044264502],
      [14, 10, "tom", 78, 1597044264670]
    ]
  },
  "exceptions": [],
  "numServersQueried": 1,
  "numServersResponded": 1,
  "numSegmentsQueried": 1,
  "numSegmentsProcessed": 1,
  "numSegmentsMatched": 1,
  "numConsumingSegmentsQueried": 0,
  "numDocsScanned": 10,
  "numEntriesScannedInFilter": 0,
  "numEntriesScannedPostFilter": 50,
  "numGroupsLimitReached": false,
  "totalDocs": 10,
  "timeUsedMs": 11,
  "segmentStatistics": [],
  "traceInfo": {},
  "minConsumingFreshnessTimeMs": 0
}
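Besides pinot-admin.sh, you can issue the same query over HTTP against the broker's SQL endpoint and turn the columnar resultTable into row dictionaries. Below is a minimal standard-library sketch; the /query/sql endpoint and port 8000 correspond to the broker used above, and the helper names are our own:

```python
import json
from urllib.request import Request, urlopen

def query_pinot(sql, broker="http://localhost:8000"):
    """POST a SQL query to the Pinot broker and return the parsed JSON response."""
    req = Request(
        f"{broker}/query/sql",
        data=json.dumps({"sql": sql}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read().decode())

def rows_as_dicts(response):
    """Zip resultTable column names with each row, as in the output above."""
    table = response["resultTable"]
    names = table["dataSchema"]["columnNames"]
    return [dict(zip(names, row)) for row in table["rows"]]
```

For example, `rows_as_dicts(query_pinot("SELECT * FROM students LIMIT 10"))` would yield one dict per student row.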