Controller

Most of the properties in Pinot are set in ZooKeeper via Helix. However, you can also set properties in the controller.conf file. You can specify the path to the configuration file at startup as follows:

```
bin/pinot-admin.sh StartController -configFileName /path/to/controller.conf
```
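
A minimal controller.conf is a plain key=value properties file. The sketch below uses only properties documented in the table that follows; the values are illustrative, not recommendations:

```
# Illustrative controller.conf -- adjust values for your deployment
controller.helix.cluster.name=PinotCluster
controller.zk.str=localhost:2181
controller.host=localhost
controller.port=9000
controller.data.dir=/path/to/controller/data
```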

The controller.conf file can contain the following properties.

Primary configuration

| Property | Default | Description |
| --- | --- | --- |
| controller.vip.host | same as controller.host | The VIP hostname used to set the download URL for segments |
| controller.vip.port | same as controller.port | |
| controller.vip.protocol | | |
| controller.host | localhost | The IP of the host on which the controller is running |
| controller.port | 9000 | The port on which the controller runs |
| controller.access.protocol | | |
| controller.data.dir | ${java.io.tmpdir}/PinotController | Directory to host segment data |
| controller.local.temp.dir | | |
| controller.zk.str | localhost:2181 | ZooKeeper host:port string to connect to |
| controller.update_segment_state_model | false | |
| controller.helix.cluster.name | | Pinot cluster name (required) |
| controller.tenant.isolation.enable | true | Enable tenant isolation; by default the cluster is single-tenant |
| controller.enable.split.commit | false | |
| controller.query.console.useHttps | false | Use HTTPS instead of HTTP for the cluster |
| controller.upload.onlineToOfflineTimeout | 2 minutes | |
| controller.mode | dual | Should be one of helix_only, pinot_only, or dual |
| controller.resource.rebalance.strategy | org.apache.helix.controller.rebalancer.strategy.AutoRebalanceStrategy | |
| controller.resource.rebalance.delay_ms | 5 minutes | How long Helix waits to rebalance partitions after a controller is lost. Longer values allow more time for a controller to recover; shorter values prevent cluster operations such as segment sealing from being blocked. |
| controller.realtime.segment.commit.timeoutSeconds | 120 seconds | Request timeout for segment commit |
| controller.deleted.segments.retentionInDays | 7 days | Duration for which to retain deleted segments |
| controller.admin.access.control.factory.class | org.apache.pinot.controller.api.access.AllowAllAccessFactory | |
| controller.segment.upload.timeoutInMillis | 10 minutes | Timeout for segment upload |
| controller.realtime.segment.metadata.commit.numLocks | 64 | |
| controller.enable.storage.quota.check | true | |
| controller.enable.batch.message.mode | false | |
| controller.allow.hlc.tables | true | |
| controller.storage.factory.class.file | org.apache.pinot.spi.filesystem.LocalPinotFS | |
| table.minReplicas | 1 | |
| controller.access.protocols | http | Ingress protocols to access the controller (http, https, or http,https) |
| controller.access.protocols.http.port | | Port to access the controller via HTTP |
| controller.access.protocols.https.port | | Port to access the controller via HTTPS |
| controller.broker.protocol | http | Protocol for forwarding query requests (http or https) |
| controller.broker.port.override | | Override for the broker port when forwarding query requests (use in multi-ingress scenarios) |
| controller.tls.keystore.path | | Path to the controller TLS keystore |
| controller.tls.keystore.password | | Keystore password |
| controller.tls.truststore.path | | Path to the controller TLS truststore |
| controller.tls.truststore.password | | Truststore password |
| controller.tls.client.auth | false | Toggle for requiring TLS client authentication |
| pinot.controller.http.server.thread.pool.corePoolSize | 2 * cores | Core pool size of the thread pool used by the pinot-controller HTTP server |
| pinot.controller.http.server.thread.pool.maxPoolSize | 2 * cores | Maximum pool size of the thread pool used by the pinot-controller HTTP server |
| pinot.controller.segment.fetcher.http.client.maxConnTotal | | Config for the HTTP client used by HttpSegmentFetcher for downloading segments (maximum total connections) |
| pinot.controller.segment.fetcher.http.client.maxConnPerRoute | | Config for the HTTP client used by HttpSegmentFetcher for downloading segments (maximum connections per route) |
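
As a sketch, the access-protocol and TLS properties above can be combined to expose the controller over HTTPS; all paths, ports, and passwords below are placeholders:

```
# Hypothetical HTTPS setup -- all values are placeholders
controller.access.protocols=http,https
controller.access.protocols.http.port=9000
controller.access.protocols.https.port=9443
controller.broker.protocol=https
controller.tls.keystore.path=/path/to/keystore.jks
controller.tls.keystore.password=changeit
controller.tls.truststore.path=/path/to/truststore.jks
controller.tls.truststore.password=changeit
```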

Periodic task configuration

The controller runs the following periodic tasks.
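
The frequencyPeriod settings listed in the tables below can be set in controller.conf to change how often a task runs; the values here are only illustrative:

```
# Illustrative overrides of periodic task run intervals
controller.retention.frequencyPeriod=12h
controller.statuschecker.frequencyPeriod=10m
```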

BrokerResourceValidationManager

This task rebuilds the BrokerResource if the instance set has changed.

| Config | Default Value |
| --- | --- |
| controller.broker.resource.validation.frequencyPeriod | 1h |
| controller.broker.resource.validation.initialDelayInSeconds | between 2m-5m |

StaleInstancesCleanupTask

This task periodically cleans up stale Pinot broker/server/minion instances.

| Config | Default Value |
| --- | --- |
| controller.stale.instances.cleanup.task.frequencyPeriod | 1h |
| controller.stale.instances.cleanup.task.initialDelaySeconds | between 2m-5m |
| controller.stale.instances.cleanup.task.minOfflineTimeBeforeDeletionPeriod | 1h |

OfflineSegmentIntervalChecker

This task manages the segment ValidationMetrics (missingSegmentCount, offlineSegmentDelayHours, lastPushTimeDelayHours, TotalDocumentCount, NonConsumingPartitionCount, SegmentCount), to ensure that all offline segments are contiguous (no missing segments) and that the offline push delay isn't too high.

| Config | Default Value |
| --- | --- |
| controller.offline.segment.interval.checker.frequencyPeriod | 24h |
| controller.statuschecker.waitForPushTimePeriod | 10m |
| controller.offlineSegmentIntervalChecker.initialDelayInSeconds | between 2m-5m |

PinotTaskManager

TBD

RealtimeSegmentValidationManager

This task validates the ideal state and segment zk metadata of real-time tables by doing the following:

  • fixing any partitions which have stopped consuming

  • starting consumption from new partitions

  • uploading segments to deep store if segment download url is missing

This task ensures that consumption of real-time tables is repaired and keeps going when erroneous conditions are encountered.

This task does not fix consumption stalled due to:

  • CONSUMING segment being deleted

  • Kafka OOR exceptions

| Config | Default Value |
| --- | --- |
| controller.realtime.segment.validation.frequencyPeriod | 1h |
| controller.realtime.segment.validation.initialDelayInSeconds | between 2m-5m |
| controller.realtime.segment.deepStoreUploadRetryEnabled | false |
| controller.realtime.segment.deepStoreUploadRetry.timeoutMs | -1 |
| controller.realtime.segment.deepStoreUploadRetry.parallelism | 1 |
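
For example, enabling the deep-store upload retry described above might look like this in controller.conf (the timeout and parallelism values are illustrative):

```
# Illustrative: retry uploading segments whose deep store download URL is missing
controller.realtime.segment.deepStoreUploadRetryEnabled=true
controller.realtime.segment.deepStoreUploadRetry.timeoutMs=60000
controller.realtime.segment.deepStoreUploadRetry.parallelism=2
```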

RetentionManager

This task manages retention of segments for all tables. During each run, it looks at the retentionTimeUnit and retentionTimeValue inside the segmentsConfig of every table and deletes segments that are older than the retention period. Deleted segments are moved to a DeletedSegments folder colocated with the dataDir on the segment store, and are permanently removed from that folder after a configurable number of days.
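
As a sketch, these retention fields live in the table config's segmentsConfig block; the unit and value below are illustrative:

```
"segmentsConfig": {
  "retentionTimeUnit": "DAYS",
  "retentionTimeValue": "30"
}
```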

| Config | Default Value |
| --- | --- |
| controller.retention.frequencyPeriod | 6h |
| controller.retentionManager.initialDelayInSeconds | between 2m-5m |
| controller.deleted.segments.retentionInDays | 7d |

SegmentRelocator

This task is applicable only if you have tierConfig or tagOverrideConfig. It runs rebalance in the background to

  1. relocate COMPLETED segments to tag overrides

  2. relocate ONLINE segments to tiers if tier configs are set

At most one replica is allowed to be unavailable during rebalance.
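
For illustration, a tagOverrideConfig in the table config that relocates COMPLETED segments to servers with an offline tenant tag could look like the fragment below; the tenant tag name is a placeholder, and the exact placement of the fragment within the table config depends on your Pinot version:

```
"tagOverrideConfig": {
  "realtimeCompleted": "myServerTenant_OFFLINE"
}
```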

| Config | Default Value |
| --- | --- |
| controller.segment.relocator.frequencyPeriod | 1h |
| controller.segmentRelocator.initialDelayInSeconds | between 2m-5m |

SegmentStatusChecker

This task manages segment status metrics such as realtimeTableCount, offlineTableCount, disableTableCount, numberOfReplicas, percentOfReplicas, percentOfSegments, idealStateZnodeSize, idealStateZnodeByteSize, segmentCount, segmentsInErrorState, tableCompressedSize.

| Config | Default Value |
| --- | --- |
| controller.statuschecker.frequencyPeriod | 5m |
| controller.statusChecker.initialDelayInSeconds | between 2m-5m |

RebalanceChecker

Currently, a table rebalance triggered by a user runs on a best-effort basis. It can fail if the controller running it is restarted, or if some servers are unstable and the rebalance times out while waiting for the external view to converge with the ideal state. This task checks for failed rebalances and retries them automatically, up to a configured number of times.

| Config | Default Value |
| --- | --- |
| controller.rebalance.checker.frequencyPeriod | 5m |
| controller.rebalanceChecker.initialDelayInSeconds | between 2m-5m |

TaskMetricsEmitter

TBD
