Pinot stores cluster metadata, including schema definitions, table configuration, and segment assignment, in ZooKeeper. Internally, ZooKeeper uses a hierarchical namespace, similar to a file system. The nodes in this hierarchy are called znodes, and they can both store data and have child nodes.
The default maximum size of a znode is 1MB, which is sufficient for most deployments. However, if you have hundreds of thousands of segments, this size limit can be exceeded. If this happens, you will see an error message similar to the following:
java.io.IOException: Packet len 1250829 is out of range!
To address this error, do one of the following:
Reduce the number of segments
Adjust ZooKeeper znode size
Reduce the number of segments
Reduce the number of segments to reduce the metadata stored in the IDEALSTATE and EXTERNALVIEW znodes, which are the two znodes most likely to exceed 1MB.
To do this for new segments, configure the to a higher value. For existing segments, run the .
Adjust ZooKeeper znode size
Adjust the maximum size of znodes in ZooKeeper. To do this, configure the jute.maxbuffer Java system property, which defines the maximum znode size in bytes. To read more about this property, see the .
We recommend setting this value to 4MB. Set the property to the same value on the ZooKeeper nodes and on all clients that interact with ZooKeeper (that is, all Pinot components).
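As a sketch (the variable names below are deployment-specific assumptions: ZooKeeper's own start scripts read JVMFLAGS, and Pinot start scripts typically honor JAVA_OPTS), a 4MB limit could be wired in like this:

```shell
# 4 MB = 4 * 1024 * 1024 = 4194304 bytes; jute.maxbuffer is specified in bytes.
# On the ZooKeeper server (zkServer.sh reads JVMFLAGS):
export JVMFLAGS="-Djute.maxbuffer=4194304"

# On every Pinot component (controller, broker, server, minion), pass the
# same value through whatever mechanism your deployment uses for JVM
# system properties, for example:
export JAVA_OPTS="${JAVA_OPTS} -Djute.maxbuffer=4194304"
```

If the values differ between ZooKeeper and its clients, the side with the smaller limit can still reject large packets with the same error.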
We are continuously improving the multi-stage query engine. A few limitations to call out:
Support for multi-value columns is limited
Support for multi-value columns is limited to projections, and predicates must use the arrayToMv function. For example, to successfully run the following query:
You must include arrayToMv in the query as follows:
Schema and other prefixes are not supported
Schema and other prefixes are not supported in queries. For example, the following queries are not supported:
Queries without prefixes are supported:
Modifying query behavior based on the cluster config is not supported
Modifying query behavior based on the cluster configuration is not supported. distinctcounthll, distinctcounthllmv, distinctcountrawhll, and distinctcountrawhllmv use a different default value of log2mParam in the multi-stage engine, and this value can no longer be configured there. Therefore, the following query may produce different results in the single-stage and multi-stage engines:
To ensure multi-stage returns the same result, specify the log2mParam value in your query:
Ambiguous reference to a projected column in statement clauses
If a column is repeated more than once in the SELECT statement, the column requires disambiguating aliasing. For example, in the following query, it is ambiguous whether the reference to colA refers to the first or the second projected colA:
To fix this, rewrite the query either to use aliasing:
Or use index-based referencing:
Tightened restriction on function naming
The Pinot single-stage query engine automatically removes the underscore (_) character from function names, so co_u_n_t() is equivalent to count().
In multi-stage, function naming restrictions were tightened: the underscore (_) character is only allowed to separate word boundaries in a function name. Camel case is also supported in function names. For example, the following function names are allowed:
Tightened restriction on function signature and type matching
The Pinot single-stage query engine automatically applies implicit type casts in many situations, for example when running the following:
it automatically converts both values to LONG before comparison. However, this behavior can cause issues, so it is not applied as widely in the multi-stage engine, which enforces stricter datatype conformance. The example above should be written explicitly as:
Default names for projections with function calls
Default names for projections with function calls are different between single and multi-stage.
For example, in multi-stage, the following query:
Returns the following result:
In single-stage, the following query:
Returns the following result:
Table names and column names are case sensitive
In multi-stage, table and column names are case sensitive; in single-stage they were not. For example, the following two queries are not equivalent in the multi-stage engine:
select * from myTable
select * from mytable
Note: Function names are case insensitive in both single- and multi-stage.
Arbitrary number of arguments isn't supported
An arbitrary number of arguments is no longer supported in multi-stage. For example, in single-stage, the following query worked:
In multi-stage, this query must be rewritten as follows:
Note: Remember that select 1 + 2 + 3 + 4 + 5 from table is still valid in multi-stage.
NULL function support
Null handling is not supported when tables use table-based null storing. You have to use column-based null storing instead. See
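For illustration, a minimal schema sketch with column-based null storing enabled (assuming the enableColumnBasedNullHandling schema flag available in recent Pinot versions; verify the flag name against your Pinot version):

```json
{
  "schemaName": "myTable",
  "enableColumnBasedNullHandling": true,
  "dimensionFieldSpecs": [
    { "name": "colA", "dataType": "STRING" }
  ]
}
```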
Custom transform function support
In multi-stage:
The histogram function is not supported.
The timeConvert function is not supported; see dateTimeConvert for an alternative.
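For example, an epoch-unit conversion written with timeConvert in single-stage can usually be rewritten with dateTimeConvert (the column name here is illustrative):

```sql
-- single-stage:
SELECT timeConvert(timeCol, 'MILLISECONDS', 'SECONDS') FROM myTable

-- multi-stage:
SELECT dateTimeConvert(timeCol, '1:MILLISECONDS:EPOCH', '1:SECONDS:EPOCH', '1:SECONDS') FROM myTable
```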
Custom aggregate function support
Aggregate functions that require a literal input (such as percentile and firstWithTime) might result in a non-compilable query plan.
Different type names
The multi-stage engine uses different type names than the single-stage engine. Although the classical names must still be used in schemas and some SQL expressions, the new names must be used in CAST expressions.
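For example, a cast that single-stage accepts with the classical type name needs the new name in multi-stage (the column name is illustrative):

```sql
-- single-stage accepts the classical type name:
SELECT CAST(intCol AS STRING) FROM myTable

-- multi-stage requires the new name in CAST:
SELECT CAST(intCol AS VARCHAR) FROM myTable
```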
The following table shows the differences in type names:
Single-stage engine
Multi-stage engine
Varbinary literals
VARBINARY literals in the multi-stage engine must be prefixed with X or x. For example, the following query:
In the single-stage engine, the same query would be:
Troubleshoot errors
Troubleshoot semantic/runtime errors and timeout errors.
Semantic/runtime errors
Try downloading the latest docker image or building from the latest master commit.
We continuously push bug fixes for the multi-stage engine, so the bug you encountered might already be fixed in the latest master build.
Try rewriting your query.
Timeout errors
Try reducing the size of the table(s) used.
Add higher selectivity filters to the tables.
Try executing part of the subquery or a simplified version of the query first.
Additional multi-stage transform function limitations: the dateTimeConvertWindowHop function is not supported, and Array- and Map-related functions are not supported.
Type name mapping (single-stage engine → multi-stage engine):
DOUBLE → DOUBLE
INTERVAL → INTERVAL
TIMESTAMP → TIMESTAMP
STRING → VARCHAR
BYTES → VARBINARY
(none) → ARRAY
JSON → (none)
Some functions previously supported in the single-stage query engine (v1) may need to be expressed differently in the multi-stage engine (v2). Check whether you are using any non-standard SQL functions or semantics.
Executing part of the subquery, or a simplified version of the query, first helps to determine the selectivity and scale of the query being executed.
Try adding more servers.
The new multi-stage engine runs distributed across the entire cluster, so adding more servers can speed up partitioned queries such as GROUP BY aggregations and equality JOINs.
-- Multi-value column queries that fail in the multi-stage engine:

-- example 1: used in GROUP-BY
SELECT count(*), RandomAirports FROM airlineStats
GROUP BY RandomAirports

-- example 2: used in PREDICATE
SELECT * FROM airlineStats WHERE RandomAirports IN ('SFO', 'JFK')

-- example 3: used in ORDER-BY
SELECT count(*), RandomAirports FROM airlineStats
GROUP BY RandomAirports
ORDER BY RandomAirports DESC

-- The same queries rewritten with arrayToMv, which the multi-stage engine supports:

-- example 1: used in GROUP-BY
SELECT count(*), arrayToMv(RandomAirports) FROM airlineStats
GROUP BY arrayToMv(RandomAirports)

-- example 2: used in PREDICATE
SELECT * FROM airlineStats WHERE arrayToMv(RandomAirports) IN ('SFO', 'JFK')

-- example 3: used in ORDER-BY
SELECT count(*), arrayToMv(RandomAirports) FROM airlineStats
GROUP BY arrayToMv(RandomAirports)
ORDER BY arrayToMv(RandomAirports) DESC
-- Not supported in multi-stage (schema-prefixed):
SELECT * FROM default.myTable;
SELECT * FROM schemaName.myTable;

-- Supported (no prefix):
SELECT * FROM myTable;
-- May return different results in the single-stage and multi-stage engines:
select distinctcounthll(col) from myTable

-- Returns the same result in both engines (log2m specified explicitly):
select distinctcounthll(col, 8) from myTable
-- Ambiguous: colA is projected twice
SELECT colA, colA, COUNT(*)
FROM myTable GROUP BY colA ORDER BY colA

-- Fix 1: use aliases
SELECT colA AS tmpA, colA AS tmpB, COUNT(*)
FROM myTable GROUP BY tmpA, tmpB ORDER BY tmpA

-- Fix 2: use index-based references
SELECT colA, colA, COUNT(*)
FROM myTable GROUP BY 1, 2 ORDER BY 1
-- Function names allowed in the multi-stage engine:
is_distinct_from(...)
isDistinctFrom(...)
-- Implicitly cast by the single-stage engine:
timestampCol >= longCol

-- Explicit cast required by the multi-stage engine:
CAST(timestampCol AS BIGINT) >= longCol
-- Multi-stage:
SELECT count(*) from mytable

"columnNames": [
  "EXPR$0"
],

-- Single-stage:
SELECT count(*) from mytable

"columnNames": [
  "count(*)"
],
-- Single-stage only:
select add(1,2,3,4,5) from table

-- Multi-stage rewrite:
select add(1, add(2, add(3, add(4,5)))) from table
-- Multi-stage: VARBINARY literal prefixed with X
SELECT col1, col2 FROM myTable WHERE bytesCol = X'4a220e6096b25eadb88358cb44068a3248254675'

-- Single-stage equivalent (not supported in multi-stage):
SELECT col1, col2 FROM myTable WHERE bytesCol = '4a220e6096b25eadb88358cb44068a3248254675'