Hadoop

Segment Creation and Push

Pinot supports Apache Hadoop as a processor to create and push segment files to the database. The Pinot distribution is bundled with the Hadoop code to process your files, convert them into segments, and upload them to Pinot.
You can follow the [wiki] to build the Pinot distribution from source. The resulting JAR file can be found in pinot/target/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar
Next, change the execution framework config in the job spec to the following:
```yaml
# executionFrameworkSpec: Defines ingestion jobs to be running.
executionFrameworkSpec:

  # name: execution framework name
  name: 'hadoop'

  # segmentGenerationJobRunnerClassName: class name implements org.apache.pinot.spi.ingestion.batch.runner.IngestionJobRunner interface.
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentGenerationJobRunner'

  # segmentTarPushJobRunnerClassName: class name implements org.apache.pinot.spi.ingestion.batch.runner.IngestionJobRunner interface.
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentTarPushJobRunner'

  # segmentUriPushJobRunnerClassName: class name implements org.apache.pinot.spi.ingestion.batch.runner.IngestionJobRunner interface.
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentUriPushJobRunner'

  # segmentMetadataPushJobRunnerClassName: class name implements org.apache.pinot.spi.ingestion.batch.runner.IngestionJobRunner interface.
  segmentMetadataPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentMetadataPushJobRunner'

  # extraConfigs: extra configs for execution framework.
  extraConfigs:

    # stagingDir is used in distributed filesystem to host all the segments then move this directory entirely to output directory.
    stagingDir: your/local/dir/staging
```
You can check out the sample job spec here.
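For orientation, a pared-down job spec for Hadoop might look like the sketch below. This is illustrative only: the HDFS paths, file pattern, table name, and controller URI are placeholders, not values from the bundled example, and the sample spec shipped with the distribution remains the authoritative reference.

```yaml
# Illustrative skeleton of a Hadoop ingestion job spec; all URIs and the
# table name are placeholders.
executionFrameworkSpec:
  name: 'hadoop'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.hadoop.HadoopSegmentTarPushJobRunner'
  extraConfigs:
    stagingDir: 'hdfs://namenode:8020/tmp/pinot-staging'
jobType: SegmentCreationAndTarPush
inputDirURI: 'hdfs://namenode:8020/data/rawdata'
includeFileNamePattern: 'glob:**/*.avro'
outputDirURI: 'hdfs://namenode:8020/data/segments'
pinotFSSpecs:
  - scheme: hdfs
    className: org.apache.pinot.plugin.filesystem.HadoopPinotFS
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'myTable'
pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'
```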
Finally, execute the Hadoop job using the command:

```shell
export PINOT_VERSION=0.8.0
export PINOT_DISTRIBUTION_DIR=${PINOT_ROOT_DIR}/pinot-distribution/target/apache-pinot-${PINOT_VERSION}-bin/apache-pinot-${PINOT_VERSION}-bin
export HADOOP_CLIENT_OPTS="-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml"

hadoop jar \
  ${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
  org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
  -jobSpecFile ${PINOT_DISTRIBUTION_DIR}/examples/batch/airlineStats/hadoopIngestionJobSpec.yaml
```
Please ensure environment variables PINOT_ROOT_DIR and PINOT_VERSION are set properly.

Data Preprocessing before Segment Creation

We've seen requests for data to be massaged (partitioned, sorted, or resized) before segments are created and pushed to Pinot.
The MapReduce job SegmentPreprocessingJob is the best fit for this use case, whether the input data is in AVRO or ORC format.
The example below shows how to use SegmentPreprocessingJob.
In Hadoop properties, set the following to enable this job:
```
enable.preprocessing = true
preprocess.path.to.output = <output_path>
```
In the table config, specify the operations you'd like the MR job to perform in preprocessing.operations, and then set the exact configs for those operations:
```json
{
  "OFFLINE": {
    "metadata": {
      "customConfigs": {
        "preprocessing.operations": "resize, partition, sort", // To enable the following preprocessing operations
        "preprocessing.max.num.records.per.file": "100", // To enable resizing
        "preprocessing.num.reducers": "3" // To enable resizing
      }
    },
    ...
    "tableIndexConfig": {
      "aggregateMetrics": false,
      "autoGeneratedInvertedIndex": false,
      "bloomFilterColumns": [],
      "createInvertedIndexDuringSegmentGeneration": false,
      "invertedIndexColumns": [],
      "loadMode": "MMAP",
      "nullHandlingEnabled": false,
      "segmentPartitionConfig": { // To enable partitioning
        "columnPartitionMap": {
          "item": {
            "functionName": "murmur",
            "numPartitions": 4
          }
        }
      },
      "sortedColumn": [ // To enable sorting
        "actorId"
      ],
      "streamConfigs": {}
    },
    "tableName": "tableName_OFFLINE",
    "tableType": "OFFLINE",
    "tenants": {
      ...
    }
  }
}
```
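To make the interplay between preprocessing.operations and the per-operation configs concrete, here is a small Python sketch. It is not Pinot code: the helper name and the exact validation rules are illustrative, inferred from the comments in the example table config (resize needs its record/reducer configs, partition needs segmentPartitionConfig, sort needs sortedColumn).

```python
# Illustrative helper, not Pinot code: shows which preprocessing operations
# a table config's customConfigs and tableIndexConfig would enable,
# following the comments in the example table config.
def enabled_operations(custom_configs, index_config):
    requested = {
        op.strip()
        for op in custom_configs.get("preprocessing.operations", "").split(",")
        if op.strip()
    }
    enabled = set()
    # Resizing needs its record-count config.
    if "resize" in requested and "preprocessing.max.num.records.per.file" in custom_configs:
        enabled.add("resize")
    # Partitioning needs segmentPartitionConfig in tableIndexConfig.
    if "partition" in requested and index_config.get("segmentPartitionConfig"):
        enabled.add("partition")
    # Sorting needs a sortedColumn in tableIndexConfig.
    if "sort" in requested and index_config.get("sortedColumn"):
        enabled.add("sort")
    return enabled

custom = {
    "preprocessing.operations": "resize, partition, sort",
    "preprocessing.max.num.records.per.file": "100",
    "preprocessing.num.reducers": "3",
}
index = {
    "segmentPartitionConfig": {
        "columnPartitionMap": {"item": {"functionName": "murmur", "numPartitions": 4}}
    },
    "sortedColumn": ["actorId"],
}
print(sorted(enabled_operations(custom, index)))  # ['partition', 'resize', 'sort']
```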

preprocessing.num.reducers

Minimum number of reducers. Optional. Only read when partitioning is disabled and resizing is enabled. This parameter avoids producing too many small input files for Pinot, which would leave Pinot servers holding many small segments and running too many threads.

preprocessing.max.num.records.per.file

Maximum number of records per reducer output file. Optional. Unlike preprocessing.num.reducers, this parameter avoids producing a few very large input files for Pinot, which would forfeit the advantage of multi-threading when querying. When not set, each reducer generates one output file. When set to some value M, each reducer's output is split into multiple files, each containing at most M records. This applies regardless of whether partitioning is enabled.
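The splitting arithmetic can be sketched in a few lines of Python (an illustrative helper, not part of Pinot):

```python
import math

# Illustrative only: with preprocessing.max.num.records.per.file set to M,
# a reducer that would otherwise write one file of N records instead
# writes ceil(N / M) files of at most M records each.
def num_output_files(total_records: int, max_records_per_file: int) -> int:
    return math.ceil(total_records / max_records_per_file)

print(num_output_files(250, 100))  # 250 records with M=100 -> 3 files
```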
For more details on this MR job, please refer to this document.