Troubleshooting

Troubleshoot common issues in Apache Pinot.

Troubleshoot issues with the multi-stage query engine (v2)
Troubleshoot issues with ZooKeeper znodes

Realtime Ingestion Stopped

Symptoms

Ingestion for certain Kafka partitions stops because segment commits keep failing.

Sample errors:

2025/04/13 14:06:28.978 INFO [RealtimeSegmentDataManager_test_realtime_2__2__0__20250413T1148Z] [test_realtime_2__2__0__20250413T1148Z] Controller response {"buildTimeSec":-1,"streamPartitionMsgOffset":null,"status":"FAILED","isSplitCommitType":true} for http://pinot-controller-2.pinot-controller-headless.pinot.svc.cluster.local:9000/segmentUpload?segmentSizeBytes=242027065&reason=rowLimit&buildTimeMillis=47403&streamPartitionMsgOffset=1393724252&instance=Server_pinot-server-0.pinot-server-headless.pinot.svc.cluster.local_8098&name=test_realtime_2__2__0__20250413T1148Z&rowCount=10000000&memoryUsedBytes=519779170
2025/04/13 14:06:28.978 WARN [RealtimeSegmentDataManager_test_realtime_2__2__0__20250413T1148Z] [test_realtime_2__2__0__20250413T1148Z] Controller response was FAILED and not COMMIT_SUCCESS
2025/04/13 14:06:28.978 INFO [RealtimeSegmentDataManager_test_realtime_2__2__0__20250413T1148Z] [test_realtime_2__2__0__20250413T1148Z] Could not commit segment. Retrying after hold

A partition typically stops ingesting through the following sequence:

  1. Servers tell the controller they are ready to commit.

  2. The controller acks and asks the lead server to commit.

  3. The lead server fails to commit for one of many reasons (segment build time longer than the controller lease, server OOM, etc.).

  4. The other replicas get permission to build, try to commit, and also fail.

  5. The partition stops ingesting completely.

Mitigations

To mitigate, we suggest the steps below to keep your setup scalable and stable.

  1. Ensure Pinot servers save segments directly to the deep store, keeping the controller out of the critical data path (see Decoupling Controller from the Data Path, and the config sketch after this list). This is the most critical fix, as it removes the controller as the bottleneck for data commits. Without it, for every commit the controller must:

    1. receive the segment tarball from the Pinot server,

    2. uncompress it,

    3. extract the segment metadata,

    4. upload the segment tarball to the deep store,

    5. update the segment metadata in ZooKeeper,

    6. complete the commit protocol.

  2. For large realtime segments, we suggest using DOWNLOAD as the completionMode so the other server replicas don't waste CPU cycles building segments (see Realtime Table Config -> SegmentsConfig, and the sketch after this list).

  3. Limit concurrent realtime segment builds by configuring the Pinot servers:

    Server Config
    realtime.max.parallel.segment.builds=2

    This reduces each segment's build time, relieving segment commit timeouts as well as concurrent pressure on the controller.
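As a sketch, step 1 corresponds to the split-commit settings from the Decoupling Controller from the Data Path page, and step 2 to the completionConfig block of the realtime table config. The property names below follow the Pinot docs, but verify them against your Pinot version; the S3 URI is a placeholder.

Controller Config
# Let servers commit segment metadata and data separately (split commit).
controller.enable.split.commit=true

Server Config
pinot.server.instance.enable.split.commit=true
# Upload committed segments straight to the deep store, bypassing the controller.
pinot.server.instance.segment.store.uri=s3://my-bucket/pinot/segments

Realtime Table Config (inside segmentsConfig)
"completionConfig": {
  "completionMode": "DOWNLOAD"
}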

Troubleshoot issues with the multi-stage query engine (v2)

Troubleshoot issues with the multi-stage query engine (v2).

Learn how to troubleshoot errors when using the multi-stage query engine (v2), and see multi-stage query engine limitations.

Find instructions on how to enable the multi-stage query engine, or see a high-level overview of how the multi-stage query engine works.

Limitations of the multi-stage query engine

We are continuously improving the multi-stage query engine. A few limitations to call out:

Support for multi-value columns is limited

Support for multi-value columns is limited to projections, and predicates must use the arrayToMv function. For example, the following queries are not supported:

-- example 1: used in GROUP-BY
SELECT count(*), RandomAirports FROM airlineStats
GROUP BY RandomAirports

-- example 2: used in PREDICATE
SELECT * FROM airlineStats WHERE RandomAirports IN ('SFO', 'JFK')

-- example 3: used in ORDER-BY
SELECT count(*), RandomAirports FROM airlineStats
GROUP BY RandomAirports
ORDER BY RandomAirports DESC

To run them successfully, you must include arrayToMv in the query as follows:

-- example 1: used in GROUP-BY
SELECT count(*), arrayToMv(RandomAirports) FROM airlineStats
GROUP BY arrayToMv(RandomAirports)

-- example 2: used in PREDICATE
SELECT * FROM airlineStats WHERE arrayToMv(RandomAirports) IN ('SFO', 'JFK')

-- example 3: used in ORDER-BY
SELECT count(*), arrayToMv(RandomAirports) FROM airlineStats
GROUP BY arrayToMv(RandomAirports)
ORDER BY arrayToMv(RandomAirports) DESC

Schema and other prefixes are not supported

Schema and other prefixes are not supported in queries. For example, the following queries are not supported:

SELECT * from default.myTable;
SELECT * from schemaName.myTable;

Queries without prefixes are supported:

SELECT * from myTable;

Modifying query behavior based on the cluster config is not supported

Modifying query behavior based on the cluster configuration is not supported. distinctcounthll, distinctcounthllmv, distinctcountrawhll, and distinctcountrawhllmv always use the default value for log2m in the multi-stage engine unless the value is explicitly defined in the query itself. Therefore, the following query may produce different results in the single-stage and multi-stage engines depending on your cluster configuration (default.hyperloglog.log2m):

select distinctcounthll(col) from myTable

To ensure the same results across both query engines, specify the log2m parameter explicitly in your query:

select distinctcounthll(col, 8) from myTable

Ambiguous reference to a projected column in statement clauses

If a column is repeated more than once in the SELECT statement, that column requires disambiguating aliases. For example, in the following query, it is ambiguous whether the reference to colA points to the first or the second projected colA:

SELECT colA, colA, COUNT(*)
FROM myTable GROUP BY colA ORDER BY colA

The solution is to rewrite the query, either using aliases:

SELECT colA AS tmpA, colA as tmpB, COUNT(*)
FROM myTable GROUP BY tmpA, tmpB ORDER BY tmpA

Or using index-based references:

SELECT colA, colA, COUNT(*)
FROM myTable GROUP BY 1, 2 ORDER BY 1

Tightened restrictions on function signatures and type matching

The Pinot single-stage query engine automatically applies implicit type casts in many situations. For example, when evaluating the following predicate:

timestampCol >= longCol

it automatically converts both values to LONG before comparison. Because this behavior can cause subtle issues, it is applied far less liberally in the multi-stage engine, which enforces stricter datatype conformance. The example above must be written explicitly as:

CAST(timestampCol AS BIGINT) >= longCol

Default names for projections with function calls

Default names for projections with function calls differ between the single-stage and multi-stage engines.

• For example, in multi-stage, the following query:

  SELECT count(*) from mytable

  Returns the following result:

  "columnNames": [
    "EXPR$0"
  ],

• In single-stage, the same query:

  SELECT count(*) from mytable

  Returns the following result:

  "columnNames": [
    "count(*)"
  ],

Table names and column names are case sensitive

In multi-stage, table and column names are case sensitive; in single-stage they were not. For example, the following two queries are not equivalent in the multi-stage engine:

select * from myTable

select * from mytable

Note: Function names are not case sensitive in either single-stage or multi-stage.

Arbitrary number of arguments isn't supported

An arbitrary number of function arguments is no longer supported in multi-stage. For example, in single-stage, the following query worked:

select add(1,2,3,4,5) from table

In multi-stage, this query must be rewritten as follows:

select add(1, add(2,add(3, add(4,5)))) from table

Note: Remember that select 1 + 2 + 3 + 4 + 5 from table is still valid in multi-stage.

Return type for binary arithmetic operators (+, -, *, /)

In the single-stage engine, these operators always return a DOUBLE value, no matter the operand types. In the multi-stage engine, the result type depends on the input operand types: for instance, adding two LONG values results in a LONG.
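A minimal illustration (column names are hypothetical):

-- In v1 both expressions return DOUBLE; in v2 the first returns a
-- LONG (BIGINT in multi-stage type names) and the second a DOUBLE.
SELECT longColA + longColB, doubleColA + doubleColB FROM myTable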

Return type for aggregations like SUM, MIN, MAX

In the single-stage engine, these aggregations always return a DOUBLE value, no matter the operand types. In the multi-stage engine, the result type depends on the data type of the column being aggregated.
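For example (hypothetical column of type INT):

-- In v1 this returns DOUBLE; in v2 the result type follows the
-- aggregated column's type instead.
SELECT MAX(intCol) FROM myTable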

NULL function support

Null handling is not supported when tables use table-based null storing; you must use column-based null storing instead. See null handling support.
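As a sketch, column-based null storing is enabled in the table schema; the flag name below follows the Pinot docs, but verify it against your Pinot version:

{
  "schemaName": "myTable",
  "enableColumnBasedNullHandling": true,
  "dimensionFieldSpecs": [
    { "name": "colA", "dataType": "STRING" }
  ]
}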

Custom transform function support

In multi-stage:

• The histogram function is not supported.

• The timeConvert function is not supported; see dateTimeConvert for more details.

• The dateTimeConvertWindowHop function is not supported.

• Array & Map-related functions are not supported.

Custom aggregate function support

• Aggregate functions that require literal inputs (such as percentile and firstWithTime) might result in a non-compilable query plan.

Different type names

The multi-stage engine uses different type names than the single-stage engine. Although the classical names must still be used in schemas and some SQL expressions, the new names must be used in CAST expressions.

The following table shows the differences in type names:

Single-stage engine    Multi-stage engine
NULL                   NULL
BOOLEAN                BOOLEAN
INT                    INT
LONG                   BIGINT
BIG_DECIMAL            DECIMAL
FLOAT                  FLOAT/REAL
DOUBLE                 DOUBLE
INTERVAL               INTERVAL
TIMESTAMP              TIMESTAMP
STRING                 VARCHAR
BYTES                  VARBINARY
-                      ARRAY
JSON                   -
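For example (hypothetical columns; note the multi-stage names inside CAST):

-- LONG and STRING are the schema type names; BIGINT and VARCHAR are
-- the names the multi-stage engine expects in CAST expressions.
SELECT CAST(longCol AS BIGINT), CAST(strCol AS VARCHAR) FROM myTable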

Varbinary literals

VARBINARY literals in the multi-stage engine must be prefixed with X or x. For example, the following query:

SELECT col1, col2 FROM myTable where bytesCol = X'4a220e6096b25eadb88358cb44068a3248254675'

In the single-stage engine, the same query would be:

-- not supported in multi-stage
SELECT col1, col2 FROM myTable where bytesCol = '4a220e6096b25eadb88358cb44068a3248254675'

Troubleshoot errors

Troubleshoot semantic/runtime errors and timeout errors.

Semantic/runtime errors

• Try downloading the latest docker image or building from the latest master commit.

  • We continuously push bug fixes for the multi-stage engine, so bugs you have encountered might already be fixed in the latest master build.

• Try rewriting your query.

  • Some functions previously supported in the single-stage query engine (v1) may have a new way to express in the multi-stage engine (v2). Check and see if you are using any non-standard SQL functions or semantics.

Timeout errors

• Try reducing the size of the table(s) used.

  • Add higher-selectivity filters to the tables.

• Try executing part of the subquery or a simplified version of the query first (see the sketch after this list).

  • This helps determine the selectivity and scale of the query being executed.

• Try adding more servers.

  • The multi-stage engine runs distributed across the entire cluster, so adding more servers helps speed up partitioned queries such as GROUP BY aggregates and equality JOINs.
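For example (table and column names are hypothetical):

-- Step 1: run the subquery alone to gauge its selectivity and output size.
SELECT key, COUNT(*) FROM tblB GROUP BY key

-- Step 2: once that looks reasonable, run the full join, keeping a
-- high-selectivity filter on the larger side.
SELECT a.col1, COUNT(*)
FROM tblA a
JOIN tblB b ON a.key = b.key
WHERE a.daysSinceEpoch >= 20000
GROUP BY a.col1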


Troubleshoot issues with ZooKeeper znodes

Troubleshoot issues with ZooKeeper znodes.

Pinot stores cluster metadata, including schema definitions, table configuration, and segment assignment, in ZooKeeper. Internally, ZooKeeper uses a hierarchical namespace, similar to a file system. The nodes in this hierarchy are called znodes, and they can store data and have child nodes.

The default maximum size of a znode is 1MB, which is sufficient for most deployments. However, if you have hundreds of thousands of segments, it's possible to exceed this size limit. If this happens, you will see an error message similar to the following:

java.io.IOException: Packet len 1250829 is out of range!

To address this error, do one of the following:

• Reduce the number of segments

• Adjust ZooKeeper znode size

Reduce the number of segments

Reduce the number of segments to reduce the metadata stored in the IDEALSTATE and EXTERNALVIEW znodes, which are the two znodes most likely to exceed 1MB.

To do this for new segments, configure the segment threshold to a higher value. For existing segments, run the Minion merge rollup task.
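As a sketch, the segment threshold lives in the realtime table's streamConfigs; the values below are placeholders:

"streamConfigs": {
  "realtime.segment.flush.threshold.rows": "0",
  "realtime.segment.flush.threshold.segment.size": "500M"
}

Setting the row threshold to 0 switches to size-based flushing, so each partition produces fewer, larger segments.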

Adjust ZooKeeper znode size

Adjust the maximum size of znodes in ZooKeeper. To do this, configure the jute.maxbuffer Java system property, which defines the maximum znode size in bytes. To read more about this property, see the ZooKeeper documentation.

We recommend setting this value to 4MB.

Set this property on all ZooKeeper nodes first, then restart all the ZooKeeper nodes.

After that, set the JVM option -Djute.maxbuffer=4000000 on all Pinot components, then restart them so that Pinot interacts with ZooKeeper using the larger buffer size.
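A minimal sketch (the environment variable names assume the stock ZooKeeper scripts and the Pinot admin script; adjust for your deployment):

# ZooKeeper: add to conf/zookeeper-env.sh, which zkServer.sh picks up.
export SERVER_JVMFLAGS="-Djute.maxbuffer=4000000 ${SERVER_JVMFLAGS}"

# Pinot: pass the same flag via JAVA_OPTS when starting each component.
export JAVA_OPTS="-Djute.maxbuffer=4000000"
bin/pinot-admin.sh StartServer -configFileName conf/pinot-server.conf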
