Range indexing allows users to get better performance for queries that involve filtering over a range, e.g.
SELECT COUNT(*) from baseballStats where hits > 11
Range index is just a variant of inverted index. Instead of creating a mapping from each value to the matching documents, we create a mapping from each range of values to the matching documents. You can enable the range index by setting the following config in the table config JSON.
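A minimal sketch of that config (the rangeIndexColumns key under tableIndexConfig is an assumption; the column name follows the query above):
"tableIndexConfig": {
  "rangeIndexColumns": ["hits"]
}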
Currently, range indexing is only supported for dictionary-encoded columns. Range indexing support for raw value columns is WIP.
A good rule of thumb is to use a range index when you want to apply range predicates on metric columns that have a very large number of unique values. Using an inverted index for such columns would create a very large index that is inefficient in terms of storage and performance.
A bloom filter helps prune segments that do not contain any record matching an EQUALITY predicate, e.g.
SELECT COUNT(*) from baseballStats where playerID = 12345
There are 3 parameters to configure the bloom filter:
fpp: False positive probability of the bloom filter (from 0 to 1, 0.05 by default). The lower the fpp, the higher the accuracy of the bloom filter, but also the larger its size.
maxSizeInBytes: Maximum size of the bloom filter (unlimited by default). If a certain fpp generates a bloom filter larger than this size, we will increase the fpp to keep the bloom filter size within this limit.
loadOnHeap: Whether to load the bloom filter using heap memory or off-heap memory (false by default).
There are 2 ways of configuring bloom filter for a table in the table config:
Configure bloom filter columns with default settings
Configure bloom filter columns with customized parameters
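Hedged sketches of both styles under tableIndexConfig (the keys bloomFilterColumns and bloomFilterConfigs are assumptions; the column name follows the query above). With default settings:
"tableIndexConfig": {
  "bloomFilterColumns": ["playerID"]
}
With customized parameters:
"tableIndexConfig": {
  "bloomFilterConfigs": {
    "playerID": {
      "fpp": 0.01,
      "maxSizeInBytes": 1000000,
      "loadOnHeap": false
    }
  }
}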
Currently, bloom filters can only be applied to dictionary-encoded columns. Bloom filter support for raw value columns is WIP.
This page describes the different indexing techniques available in Pinot
Pinot supports the following indexing techniques:
Dictionary-encoded forward index with bit compression
Raw value forward index
Sorted forward index with run-length encoding
Bitmap inverted index
Sorted inverted index
Each of these techniques has advantages in different query scenarios. By default, Pinot creates a dictionary-encoded forward index for each column.
There are 2 ways to create indexes for a Pinot table.
Indexing is enabled by specifying the desired column names in the table config. More details about how to configure each type of index can be found in the respective index's section above or in the Table Config section.
Indexes can also be dynamically added to or removed from segments at any point. Update your table config with the latest set of indexes you wish to have. For example, if you had an inverted index on foo and now want to include bar, you would add bar to the invertedIndexColumns list in your table config, as sketched below.
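A hedged sketch of the relevant tableIndexConfig fragment before the change:
"tableIndexConfig": {
  "invertedIndexColumns": ["foo"]
}
And after adding bar:
"tableIndexConfig": {
  "invertedIndexColumns": ["foo", "bar"]
}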
Next, invoke the reload API. This API sends reload messages via Helix to all servers, as part of which indexes are added to or removed from the local segments. This happens without any downtime and is completely transparent to queries. In the case of adding an index, only the new index is created and appended to the existing segment. In the case of removing an index, its related states are cleaned up from the Pinot servers. You can find this API under the Segments tab in Swagger:
You can also find this action in the Pinot UI, on the specific table's page.
The inverted index will provide good performance for most use cases, especially if your use case doesn't have a strict low latency requirement. You should start by using this, and if your queries aren't fast enough, switch to advanced indices like the sorted or Star-Tree index.
For each unique value in a column, we assign an id to it and build a dictionary from the id to the value. In the forward index, we then store only the bit-compressed ids instead of the values. With a small number of unique values, dictionary encoding can significantly improve the space efficiency of the storage.
The below diagram shows the dictionary encoding for two columns with integer and string types. As seen in colA, dictionary encoding saves a significant amount of space for duplicated values. On the other hand, colB has no duplicated data, so dictionary encoding will not compress much in this case where there are a lot of unique values in the column. For the string type, we pick the length of the longest value and use it as the length of the dictionary's fixed-length value array. In this case, the padding overhead can be high if there are a large number of unique values in the column.
In contrast to the dictionary-encoded forward index, the raw value forward index directly stores values instead of ids.
Without the dictionary, the dictionary lookup step can be skipped for each value fetch. The index can also take advantage of the good locality of the values, thus improving the performance of scanning a large number of values.
A typical use case for the raw value forward index is when the column has a large number of unique values and the dictionary does not provide much compression. As seen in the above diagram for dictionary encoding, scanning values with a dictionary involves a lot of random access because we need to perform dictionary lookups. On the other hand, we can scan values sequentially with a raw value forward index, and this can improve performance a lot when applied appropriately.
Raw value forward index can be configured for a table by setting it in the table config as
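A minimal sketch, assuming the raw (no-dictionary) encoding is declared via noDictionaryColumns under tableIndexConfig (the column name is a placeholder):
"tableIndexConfig": {
  "noDictionaryColumns": ["column_name"]
}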
When a column is physically sorted, Pinot uses a sorted forward index with run-length encoding on top of the dictionary-encoding. Instead of saving dictionary ids for each document id, we store a pair of start and end document id for each value. (The below diagram does not include dictionary encoding layer for simplicity.)
The sorted forward index has the advantages of both good compression and data locality. The sorted forward index can also be used as an inverted index.
Sorted index can be configured for a table by setting it in the table config as
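A minimal sketch, assuming the sortedColumn list under tableIndexConfig (the column name is a placeholder; the list can hold only one entry since a table can have only one sorted column):
"tableIndexConfig": {
  "sortedColumn": ["column_name"]
}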
Note: A given Pinot table can only have 1 sorted column
The real-time server will sort data on the sortedColumn when generating segments internally. For offline pushes, the input data needs to be sorted before running the Pinot segment conversion and push job.
When applied correctly, the column will show as sorted in the segment metadata.
JSON index can be applied to JSON string columns to accelerate the value lookup and filtering for the column.
A JSON string can be used to represent arrays, maps, and nested fields without forcing a fixed schema. This is very flexible, but the flexibility comes with a cost: filtering on a JSON string column is very expensive.
Suppose we have some JSON records similar to the following sample record stored in the person column:
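A hypothetical sample record, shaped to match the lookups used in the rest of this section (name, addresses with number/street/country/floor, phone):
{
  "name": "adam",
  "age": 30,
  "addresses": [
    {"number": 112, "street": "main st", "country": "us"},
    {"number": 2, "street": "second st", "country": "us", "floor": 3},
    {"number": 3, "street": "third st", "country": "ca"}
  ],
  "phone": "123-456-7890"
}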
Without an index, in order to look up a key and filter records based on the value, we need to scan and reconstruct the JSON object from the JSON string for every record, look up the key and then compare the value.
For example, in order to find all persons whose name is "adam", the query will look like:
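Without the index, such a lookup might be written with a JSON extraction function, e.g. (a sketch; mytable is a placeholder table name used throughout this section):
SELECT *
FROM mytable
WHERE JSON_EXTRACT_SCALAR(person, '$.name', 'STRING') = 'adam'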
JSON index is designed to accelerate the filtering on JSON string columns without scanning and reconstructing all the JSON objects.
To enable the JSON index, set the following config in the table config:
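A minimal sketch, assuming the jsonIndexColumns key under tableIndexConfig:
"tableIndexConfig": {
  "jsonIndexColumns": ["person"]
}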
Note that the JSON index can only be applied to STRING columns whose values are JSON strings.
The JSON index can be used via the JSON_MATCH predicate: JSON_MATCH(<column>, '<filterExpression>'). For example, to find all persons whose name is "adam", the query will look like:
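A sketch of such a query (the exact JSON-path quoting may differ by release):
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.name"=''adam''')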
Note that the quotes within the filter expression need to be escaped.
In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'''
Find all persons whose name is "adam":
In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'''
Find all persons who have an address (one of the addresses) with number 112:
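A sketch of this query (same placeholder table):
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[*].number"=112')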
In release 0.7.1, we use the old syntax for filterExpression: 'addresses.number=112'
Find all persons whose name is "adam" and also have an address (one of the addresses) with number 112:
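A sketch of this query:
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.name"=''adam'' AND "$.addresses[*].number"=112')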
In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'' AND addresses.number=112'
Find all persons whose first address has number 112:
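A sketch of this query:
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[0].number"=112')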
In release 0.7.1, we use the old syntax for filterExpression: '"addresses[0].number"=112'
Find all persons who have phone field within the JSON:
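A sketch of this query:
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.phone" IS NOT NULL')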
In release 0.7.1, we use the old syntax for filterExpression: 'phone IS NOT NULL'
Find all persons whose first address does not contain floor field within the JSON:
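A sketch of this query:
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[0].floor" IS NULL')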
In release 0.7.1, we use the old syntax for filterExpression: '"addresses[0].floor" IS NULL'
The JSON context is maintained for object elements within an array, i.e. the filter won't cross match different objects in the array.
To find all persons who live on "main st" in "ca":
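A sketch of this query (both conditions inside one JSON_MATCH, so they must match within the same address object):
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[*].street"=''main st'' AND "$.addresses[*].country"=''ca''')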
This query won't match "adam" because none of his addresses matches both the street and the country.
If the JSON context is not desired, use multiple separate JSON_MATCH predicates. E.g. to find all persons who have an address on "main st" and an address in "ca" (doesn't have to be the same address):
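A sketch of this query (two separate JSON_MATCH predicates, so the conditions may match different addresses):
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[*].street"=''main st''')
  AND JSON_MATCH(person, '"$.addresses[*].country"=''ca''')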
This query will match "adam" because one of his addresses matches the street and another one matches the country.
Note that the array index is maintained as a separate entry within the element, so in order to query different elements within an array, multiple JSON_MATCH predicates are required. E.g. to find all persons whose first address is on "main st" and whose second address is on "second st":
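A sketch of this query:
SELECT *
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[0].street"=''main st''')
  AND JSON_MATCH(person, '"$.addresses[1].street"=''second st''')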
The above examples all filter on JSON objects. Arrays, plain values, and null are also supported:
To find the records with array element "item1" in "arrayCol":
To find the records with second array element "item2" in "arrayCol":
To find the records with value 123 in "valueCol":
To find the records with null in "nullableCol":
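Hedged sketches of these four lookups (the "$[index]" and bare "$" paths are assumptions following the JSON path syntax used above):
SELECT * FROM mytable WHERE JSON_MATCH(arrayCol, '"$[*]"=''item1''')
SELECT * FROM mytable WHERE JSON_MATCH(arrayCol, '"$[1]"=''item2''')
SELECT * FROM mytable WHERE JSON_MATCH(valueCol, '"$"=123')
SELECT * FROM mytable WHERE JSON_MATCH(nullableCol, '"$" IS NULL')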
In release 0.7.1, the JSON string must be an object (it cannot be null, a plain value, or an array); multi-dimensional arrays are not supported.
The key (left-hand side) of the filter expression must be the leaf level of the JSON object, e.g. "$.addresses[*]"='main st' won't work.
This page talks about geospatial support in Pinot.
Pinot supports SQL/MM geospatial data and is compliant with the Open Geospatial Consortium’s (OGC) OpenGIS Specifications. This includes:
Geospatial data types, such as point, line and polygon;
Geospatial functions, for querying of spatial properties and relationships;
Geospatial indexing, used for efficient processing of spatial operations.
Geospatial data types abstract and encapsulate spatial structures such as boundary and dimension. In many respects, spatial data types can be understood simply as shapes. Pinot supports the Well-Known Text (WKT) and Well-Known Binary (WKB) form of geospatial objects, for example:
POINT (0 0)
LINESTRING (0 0, 1 1, 2 1, 2 2)
POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0), (1 1, 1 2, 2 2, 2 1, 1 1))
MULTIPOINT (0 0, 1 2)
MULTILINESTRING ((0 0, 1 1, 1 2), (2 3, 3 2, 5 4))
MULTIPOLYGON (((0 0, 4 0, 4 4, 0 4, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1)), ((-1 -1, -1 -2, -2 -2, -2 -1, -1 -1)))
GEOMETRYCOLLECTION(POINT(2 0),POLYGON((0 0, 1 0, 1 1, 0 1, 0 0)))
It is common to have data in which the coordinates are geographic, i.e. latitude/longitude.
Unlike coordinates in Mercator or UTM, geographic coordinates are not Cartesian coordinates.
Geographic coordinates do not represent a linear distance from an origin as plotted on a plane. Rather, these spherical coordinates describe angular coordinates on a globe.
Spherical coordinates specify a point by the angle of rotation from a reference meridian (longitude), and the angle from the equator (latitude).
You can treat geographic coordinates as approximate Cartesian coordinates and continue to do spatial calculations. However, measurements of distance, length and area will be nonsensical. Since spherical coordinates measure angular distance, the units are in degrees.
Pinot supports both geometry and geography types, which can be constructed by the corresponding functions shown in the following section. For the geography types, the measurement functions such as ST_Distance and ST_Area calculate the spherical distance and area on earth respectively.
For manipulating geospatial data, Pinot provides a set of functions for analyzing geometric components, determining spatial relationships, and manipulating geometries. In particular, geospatial functions that begin with the ST_ prefix support the SQL/MM specification.
The following geospatial functions are available out of the box in Pinot:
ST_Union(geometry[] g1_array) → Geometry This aggregate function returns a MULTI geometry or NON-MULTI geometry from a set of geometries. It ignores NULL geometries.
ST_GeomFromText(String wkt) → Geometry Returns a geometry type object from WKT representation, with the optional spatial system reference.
ST_GeomFromWKB(bytes wkb) → Geometry Returns a geometry type object from WKB representation.
ST_Point(double x, double y) → Point Returns a geometry type point object with the given coordinate values.
ST_Polygon(String wkt) → Polygon Returns a geometry type polygon object from WKT representation.
ST_GeogFromWKB(bytes wkb) → Geography Creates a geography instance from a Well-Known Binary geometry representation (WKB)
ST_GeogFromText(String wkt) → Geography Returns a specified geography value from Well-Known Text (WKT) or extended WKT representation.
ST_Area(Geometry/Geography g) → double For geometry type, it returns the 2D Euclidean area of a geometry. For geography, returns the area of a polygon or multi-polygon in square meters using a spherical model for Earth.
ST_Distance(Geometry/Geography g1, Geometry/Geography g2) → double For geometry type, returns the 2-dimensional cartesian minimum distance (based on spatial ref) between two geometries in projected units. For geography, returns the great-circle distance in meters between two SphericalGeography points. Note that g1, g2 shall have the same type.
ST_GeometryType(Geometry g) → String Returns the type of the geometry as a string, e.g. ST_Linestring, ST_Polygon, ST_MultiPolygon etc.
ST_AsBinary(Geometry/Geography g) → bytes Returns the WKB representation of the geometry.
ST_AsText(Geometry/Geography g) → string Returns the WKT representation of the geometry/geography.
ST_Contains(Geometry, Geometry) → boolean Returns true if and only if no points of the second geometry/geography lie in the exterior of the first geometry/geography, and at least one point of the interior of the first geometry lies in the interior of the second geometry.
ST_Equals(Geometry, Geometry) → boolean Returns true if the given geometries represent the same geometry/geography.
ST_Within(Geometry, Geometry) → boolean Returns true if first geometry is completely inside second geometry.
Geospatial functions are typically expensive to evaluate, and using a geoindex can greatly accelerate query evaluation. Geoindexing in Pinot is based on Uber's H3, a hexagon-based hierarchical gridding. A given geospatial location (longitude, latitude) maps to one hexagon (represented as an H3Index), and its neighbors in H3 can be approximated by a ring of hexagons. To quickly identify the distance between any two geospatial locations, we can convert the two locations to their H3 indexes and then check the H3 distance between them. The H3 distance is measured as the number of hexagons. For example, in the diagram below, the red hexagons are within distance 1 of the central hexagon. The size of the hexagons is determined by the resolution of the indexing; please check the H3 resolution table for the levels of resolution and the corresponding precision (measured in km).
To use the geoindex, first declare the geolocation field as bytes in the schema, as in the QuickStart example.
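A sketch of such a schema field (the field name, the source columns lon/lat, and the transform functions are taken from the Starbucks stores QuickStart and are assumptions here):
{
  "dataType": "BYTES",
  "name": "location_st_point",
  "transformFunction": "toSphericalGeography(stPoint(lon,lat))"
}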
Note the use of the transformFunction that converts the created point into SphericalGeography format, which is needed by the ST_Distance function.
Next, declare the geospatial index in the table config:
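A hedged sketch, assuming the geospatial (H3) index is declared in fieldConfigList with an H3 indexType and a resolutions property:
"fieldConfigList": [
  {
    "name": "location_st_point",
    "encodingType": "RAW",
    "indexType": "H3",
    "properties": {
      "resolutions": "5"
    }
  }
]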
The query below will then use the geoindex to filter the Starbucks stores within 5km of the given point in the Bay Area.
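A sketch of such a query (the table name starbucksStores and the point coordinates are placeholders; the third argument of ST_Point marks the point as geography, so the distance is in meters):
SELECT address, ST_Distance(location_st_point, ST_Point(-122.0, 37.4, 1)) AS distance_meters
FROM starbucksStores
WHERE ST_Distance(location_st_point, ST_Point(-122.0, 37.4, 1)) < 5000
LIMIT 10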
Geoindex in Pinot accelerates query evaluation without compromising the correctness of the query result. Currently, the geoindex supports the ST_Distance function used in range predicates in the WHERE clause, as shown in the query example in the previous section.
At a high level, the geoindex is used to retrieve the records within the nearby hexagons of the given location, and then ST_Distance is used to accurately filter the matched results.
As in the example diagram above, if we want to find all relevant points within a given distance of San Francisco (represented by the area within the red circle), the algorithm with geoindex works as follows:
Find the H3 distance x that contains the range (i.e. the red circle).
For the points in hexagons fully contained within the range (i.e. covered by kRing(x)), we can directly take those points without filtering.
For the points in the remaining hexagons of kRing(x) (i.e. those only partially covered by the range), we filter them by evaluating the condition ST_Distance(loc1, loc2) < x.
When an inverted index is enabled for a column, Pinot maintains a map from each value to a bitmap of matching documents, which makes value lookups constant time. If you have a column that is frequently used for filtering, adding an inverted index will improve performance greatly.
Inverted index can be configured for a table by setting it in the table config as
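A minimal sketch, assuming the invertedIndexColumns key under tableIndexConfig (the column name is a placeholder):
"tableIndexConfig": {
  "invertedIndexColumns": ["column_name"]
}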
A sorted forward index can directly be used as an inverted index, with log(n) time lookup, and it can benefit from data locality.
For the below example, if the query has a filter on memberId, Pinot will perform a binary search on memberId values to find the pair of start/end docIds for the filtering value. If the query needs to scan values of other columns after filtering, the values within that docId range are located together, so we can benefit a lot from data locality.
A sorted index performs much better than an inverted index; however, it can only be applied to one column per table. When the query performance with an inverted index is not good enough and most queries have a filter on a specific column (e.g. memberId), a sorted index can improve the query performance.
This page talks about support for text search functionality in Pinot.
Pinot supports super fast query processing through its indexes on non-BLOB like columns. Queries with exact match filters are run efficiently through a combination of dictionary encoding, inverted index and sorted index. An example:
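A hedged sketch of such a query (table and column names are placeholders):
SELECT COUNT(*)
FROM MyTable
WHERE string_col = 'ABC' AND int_col = 2000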
In the above query, we are doing exact match on two columns of type STRING and INT respectively.
For arbitrary text data that falls into the BLOB/CLOB territory, we need more than exact matches. Users are interested in doing regex, phrase, and fuzzy queries on BLOB-like data. Before 0.3.0, one had to use regexp_like to achieve this. However, this was scan-based and not performant, and features like fuzzy search (edit distance search) were not possible.
In version 0.3.0, we added support for text indexes to efficiently do arbitrary search on STRING columns where each column value is a large BLOB of text. This can be achieved by using the new built-in function TEXT_MATCH.
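A sketch of the general shape (the table name is a placeholder):
SELECT COUNT(*)
FROM MyTable
WHERE TEXT_MATCH(<column_name>, '<search_expression>')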
where <column_name> is the column the text index is created on and <search_expression> can be one of the following:
Search Expression Type | Example
Phrase query           | TEXT_MATCH(<column_name>, '\"distributed system\"')
Term query             | TEXT_MATCH(<column_name>, 'Java')
Boolean query          | TEXT_MATCH(<column_name>, 'Java AND c++')
Prefix query           | TEXT_MATCH(<column_name>, 'stream*')
Regex query            | TEXT_MATCH(<column_name>, '/Exception.*/')
Text search should ideally be used on STRING columns where doing standard filter operations (EQUALITY, RANGE, BETWEEN) doesn't fit the bill because each column value is a reasonably large blob of text.
Consider the following snippet from Apache access log. Each line in the log consists of arbitrary data (IP addresses, URLs, timestamps, symbols etc) and represents a column value. Data like this is a good candidate for doing text search.
Let's say the following snippet of data is stored in the ACCESS_LOG_COL column in a Pinot table.
A few examples of search queries on this data (sketched below):
Count the number of GET requests.
Count the number of POST requests that have administrator in the URL (administrator/index)
Count the number of POST requests that have a particular URL and handled by Firefox browser
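Hedged sketches of these three searches (assuming a table MyTable with a text index on ACCESS_LOG_COL; the exact search terms depend on how the log lines are tokenized):
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(ACCESS_LOG_COL, 'GET')
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(ACCESS_LOG_COL, 'post AND administrator AND index')
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(ACCESS_LOG_COL, 'post AND administrator AND firefox')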
Consider another example of simple resume text. Each line in the file represents skill data from resumes of different candidates.
Let's say the following snippet of data is stored in the SKILLS_COL column in a Pinot table. Each line in the input text represents a column value.
A few examples of search queries on this data (sketched below):
Count the number of candidates that have "machine learning" and "gpu processing" - a phrase search (more on this further in the document) where we are looking for exact match of phrases "machine learning" and "gpu processing" not necessarily in the same order in original data.
Count the number of candidates that have "distributed systems" and either 'Java' or 'C++' - a combination of searching for exact phrase "distributed systems" along with other terms.
Consider a snippet from a log file containing SQL queries handled by a database. Each line (query) in the file represents a column value in the QUERY_LOG_COL column in a Pinot table.
A few examples of search queries on this data (sketched below):
Count the number of queries that have GROUP BY
Count the number of queries that have the SELECT count... pattern
Count the number of queries that use BETWEEN filter on timestamp column along with GROUP BY
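Hedged sketches of these three searches (text index on QUERY_LOG_COL assumed; the phrases depend on how the queries are tokenized):
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(QUERY_LOG_COL, '"group by"')
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(QUERY_LOG_COL, '"select count"')
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(QUERY_LOG_COL, '"timestamp between" AND "group by"')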
Further sections in the document cover several concrete examples on each kind of query and step-by-step guide on how to write text search queries in Pinot.
Currently we support text search in a restricted manner. More specifically, we have the following constraints:
The column type should be STRING.
The column should be single-valued.
Co-existence of text index with other Pinot indexes is currently not supported.
The last two restrictions are going to be relaxed very soon in the upcoming releases.
Currently, a column in Pinot can be dictionary encoded or stored RAW. Furthermore, we can create inverted index on the dictionary encoded column. We can also create a sorted index on the dictionary encoded column.
Text index is an addition to the types of per-column indexes users can create in Pinot. However, the current implementation supports text index only on RAW columns. In other words, the column should not be dictionary encoded. As we relax this constraint in upcoming releases, a text index will be able to be created on a dictionary-encoded column that also has other indexes (inverted, sorted etc).
Similar to other indexes, users can enable the text index on a column through the table config. As part of the text search feature, we have also introduced a new, generic way of specifying per-column encoding and index information. In the table config, there is a new section named fieldConfigList.
fieldConfigList is currently ONLY used for text indexes. Our plan is to migrate all other indexes to this model. We are going to do that in upcoming releases and modify the user documentation accordingly. So please continue to specify other index info in the table config as you have done until now, and use fieldConfigList only for text indexes.
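A sketch of that section for the two columns discussed below (the field names follow the description that follows):
"fieldConfigList": [
  {
    "name": "text_col_1",
    "encodingType": "RAW",
    "indexType": "TEXT"
  },
  {
    "name": "text_col_2",
    "encodingType": "RAW",
    "indexType": "TEXT"
  }
]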
"fieldConfigList" will be a new section in table config. It is essentially a list of per-column encoding and index information. In the above example, the list contains text index information for two columns text_col_1 and text_col_2. Each object in fieldConfigList contains the following information
name - Name of the column the text index is enabled on
encodingType - As mentioned earlier, we can store a column either as RAW or dictionary encoded. Since for now we have a restriction on the text index, this should always be RAW.
indexType - This should be TEXT.
Since we haven't yet removed the old way of specifying the index info, each column that has a text index should also be specified in noDictionaryColumns in tableIndexConfig:
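Continuing the example above:
"tableIndexConfig": {
  "noDictionaryColumns": ["text_col_1", "text_col_2"]
}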
The above mechanism can be used to configure text indexes in the following scenarios:
Adding a new table with text index enabled on one or more columns.
Adding a new column with text index enabled to an existing table.
Enabling text index on an existing column.
Once the text index is enabled on one or more columns through table config, our segment generation code will pick up the config and automatically create text index (per column). This is exactly how other indexes in Pinot are created.
Text index is supported for both offline and realtime segments.
The original text document (a value in the column with text index enabled) is parsed, tokenized and individual "indexable" terms are extracted. These terms are inserted into the index.
Pinot's text index is built on top of Lucene. Lucene's standard English text tokenizer generally works well for most classes of text. We might want to build a custom text parser and tokenizer to suit particular user requirements. Accordingly, we can make this configurable for the user to specify on a per-column text index basis.
A new built-in function TEXT_MATCH has been introduced for using text search in SQL/PQL.
TEXT_MATCH(text_column_name, search_expression)
text_column_name - name of the column to do text search on.
search_expression - search query
We can use the TEXT_MATCH function in the WHERE clause of our queries, combine the TEXT_MATCH filter clause with other filter operators, and combine multiple TEXT_MATCH filter clauses, as sketched below:
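Hedged sketches of these three usages (table and column names are placeholders):
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(text_col_1, '"machine learning"')
SELECT COUNT(*) FROM MyTable WHERE int_col > 20000 AND TEXT_MATCH(text_col_1, '"machine learning"')
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(text_col_1, '"machine learning"') AND TEXT_MATCH(text_col_2, 'Java')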
TEXT_MATCH can be used in the WHERE clause of all kinds of queries supported by Pinot:
Selection query which projects one or more columns
User can also include the text column name in select list
Aggregation query
Aggregation GROUP BY query
The search expression (the second argument to the TEXT_MATCH function) is the query string that Pinot will use to perform text search on the column's text index. The following expression types are supported:
This query is used to do an exact match of a given phrase. Exact match implies that the terms in the user-specified phrase should appear in the exact same order in the original text document. Note that "document" here refers to a column value.
Let's take the example of resume text data containing 14 documents to walk through queries. The data is stored in a column named SKILLS_COL and we have created a text index on this column.
Example 1 - Search in the SKILLS_COL column to look for documents where each matching document MUST contain the phrase "distributed systems" as is
The search expression is '\"Distributed systems\"'
The search expression is always specified within single quotes '<your expression>'
Since we are doing a phrase search, the phrase should be specified within double quotes inside the single quotes and the double quotes should be escaped
'\"<your phrase>\"'
The above query will match documents that contain the phrase "distributed systems" as is, but it won't match documents where the terms "distributed" and "systems" appear in a different order or are separated by other terms. This is because the phrase query looks for the phrase occurring in the original document "as is". The terms as specified by the user in the phrase should be in the exact same order in the original document for the document to be considered a match.
NOTE: Matching is always done in a case-insensitive manner.
Example 2 - Search in the SKILLS_COL column to look for documents where each matching document MUST contain the phrase "query processing" as is
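A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, '"query processing"')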
The above query will match documents containing the phrase "query processing" as is.
Term queries are used to search for individual terms
Example 3 - Search in the SKILLS_COL column to look for documents where each matching document MUST contain the term 'java'
As mentioned earlier, the search expression is always within single quotes. However, since this is a term query, we don't have to use double quotes within single quotes.
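A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, 'java')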
Boolean operators AND, OR are supported and we can use them to build a composite query. Boolean operators can be used to combine phrase and term queries in any arbitrary manner
Example 4 - Search in the SKILLS_COL column to look for documents where each matching document MUST contain the phrases "distributed systems" and "tensor flow". This combines two phrases using the AND boolean operator
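A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" AND "tensor flow"')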
The above query will match documents containing both the phrase "distributed systems" and the phrase "tensor flow".
Example 5 - Search in the SKILLS_COL column to look for documents where each document MUST contain the phrase "machine learning" and the terms 'gpu' and 'python'. This combines a phrase and two terms using the AND boolean operator
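A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, '"machine learning" AND gpu AND python')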
The above query will match documents containing the phrase "machine learning" and the terms 'gpu' and 'python'.
When using boolean operators to combine term(s) and phrase(s) or both, please note that:
The matching document can contain the terms and phrases in any order.
The matching document may not have the terms adjacent to each other (if this is needed, please use appropriate phrase query for the concerned terms).
Use of OR operator is implicit. In other words, if phrase(s) and term(s) are not combined using AND operator in the search expression, OR operator is used by default:
Example 6 - Search in the SKILLS_COL column to look for documents where each document MUST contain ANY one of:
phrase "distributed systems" OR
term 'java' OR
term 'C++'.
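A sketch (no AND between the operands, so OR is implied):
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" java c++')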
We can also do grouping using parentheses:
Example 7 - Search in the SKILLS_COL column to look for documents where each document MUST contain
phrase "distributed systems" AND
at least one of the terms Java or C++
In the below query, we group terms Java and C++ without any operator which implies the use of OR. The root operator AND is used to combine this with phrase "distributed systems"
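A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" AND (java c++)')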
Prefix searches can also be done in the context of a single term. We can't use prefix matches for phrases.
Example 8 - Search in the SKILLS_COL column to look for documents where each document MUST contain text like stream, streaming, streams etc.
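A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, 'stream*')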
The above query will match documents containing terms that start with "stream", such as stream, streams, streaming.
Phrase and term queries work on the fundamental logic of looking up the terms (aka tokens) in the text index. The original text document (a value in the column with text index enabled) is parsed, tokenized and individual "indexable" terms are extracted. These terms are inserted into the index.
Based on the nature of original text and how the text is segmented into tokens, it is possible that some terms don't get indexed individually. In such cases, it is better to use regular expression queries on the text index.
Consider server log as an example and we want to look for exceptions. A regex query is suitable for this scenario as it is unlikely that 'exception' is present as an individual indexed token.
Syntax of a regex query is slightly different from queries mentioned earlier. The regular expression is written between a pair of forward slashes (/).
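A sketch (assuming a log column named LOG_COL; the leading and trailing .* are used because Lucene regex queries match whole tokens):
SELECT COUNT(*) FROM MyTable WHERE TEXT_MATCH(LOG_COL, '/.*Exception.*/')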
The above query will match any text document containing exception.
Generally, a combination of phrase and term queries using boolean operators and grouping should allow us to build a complex text search query expression.
The key thing to remember is that phrases should be used when the order of terms in the document is important and if separating the phrase into individual terms doesn't make sense from end user's perspective.
An example would be phrase "machine learning".
However, if we are searching for documents matching the terms Java and C++, using the phrase query "Java C++" will actually result in partial results (it could be empty too), since now we are relying on the user specifying these skills in the exact same order (adjacent to each other) in the resume text.
Term query using boolean AND operator is more appropriate for such cases
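A sketch:
SELECT SKILLS_COL FROM MyTable WHERE TEXT_MATCH(SKILLS_COL, 'java AND c++')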
Unlike other indexing techniques which work on a single column, the Star-Tree index is built on multiple columns and utilizes pre-aggregated results to significantly reduce the number of values to be processed, thus improving query performance.
One of the biggest challenges in real-time OLAP systems is achieving and maintaining tight SLAs on latency and throughput on large data sets. Existing techniques such as sorted index or inverted index help improve query latencies, but speed-ups are still limited by the number of documents that need to be processed to compute the results. On the other hand, pre-aggregating the results ensures a constant upper bound on query latencies, but can lead to storage space explosion.
Here we introduce the star-tree index, which utilizes pre-aggregated documents in a smart way to achieve low query latencies while also using the storage space efficiently for aggregation/group-by queries.
Consider the following data set as an example to discuss the existing approaches:
In this approach, data is sorted on a primary key, which is likely to appear as filter in most queries in the query set.
This reduces the time to search the documents for a given primary key value from linear scan O(n) to binary search O(logn), and also keeps good locality for the documents selected.
While this is a good improvement over linear scan, there are still a few issues with this approach:
While sorting on one column does not require additional space, sorting on additional columns would require additional storage space to re-index the records for the various sort orders.
While search time is reduced from O(n) to O(logn), the overall latency is still a function of the total number of documents that need to be processed to answer a query.
In this approach, for each value of a given column, we maintain a list of document id’s where this value appears.
Below are the inverted indexes for columns ‘Browser’ and ‘Locale’ for our example data set:
For example, if we want to get all the documents where ‘Browser’ is ‘Firefox’, we can simply look up the inverted index for ‘Browser’ and identify that it appears in documents [1, 5, 6].
Using an inverted index, we can reduce the search time to constant time O(1). However, the query latency is still a function of the selectivity of the query, i.e. it increases with the number of documents that need to be processed to answer the query.
In this technique, we pre-compute the answer for a given query set upfront.
In the example below, we have pre-aggregated the total impressions for each country:
Doing so makes answering queries about total impressions for a country just a value lookup, eliminating the need to process a large number of documents. However, being able to answer queries with multiple predicates implies pre-aggregating for various combinations of different dimensions, which leads to an exponential explosion in storage space.
On one end of the spectrum we have indexing techniques that improve search times with limited increase in space, but do not guarantee a hard upper bound on query latencies. On the other end of the spectrum we have pre-aggregation techniques that offer hard upper bound on query latencies, but suffer from exponential explosion of storage space
Space-Time Trade Off Between Different Techniques
We propose the Star-Tree data structure that offers a configurable trade-off between space and time and allows us to achieve hard upper bound for query latencies for a given use case. In the following sections we will define the Star-Tree data structure, and discuss how it is utilized within Pinot for achieving low latencies with high throughput.
Tree structure
A star-tree is a tree data structure with the following properties:
Star-tree Structure
Root Node (Orange): Single root node, from which the rest of the tree can be traversed.
Leaf Node (Blue): A leaf node contains at most T records, where T is configurable.
Non-leaf Node (Green): Nodes with more than T records are further split into children nodes.
Star-Node (Yellow): Non-leaf nodes can also have a special child node called the Star-Node. This node contains the pre-aggregated records after removing the dimension on which the data was split for this level.
Dimensions Split Order ([D1, D2]): Nodes at a given level in the tree are split into children nodes on all values of a particular dimension. The dimensions split order is an ordered list of dimensions that is used to determine the dimension to split on for a given level in the tree.
Node properties
The properties stored in each node are as follows:
Dimension: The dimension which the node is split on
Start/End Document Id: The range of documents this node points to
Aggregated Document Id: One single document which is the aggregation result of all documents pointed by this node
Star-tree index is generated in the following steps:
The data is first projected as per the dimensionsSplitOrder. Only the dimensions from the split order are reserved, others are dropped. For each unique combination of reserved dimensions, metrics are aggregated per configuration. The aggregated documents are written to a file and served as the initial Star-Tree documents (separate from the original documents).
Sort the Star-Tree documents based on the dimensionsSplitOrder. It is primary-sorted on the first dimension in this list, and then secondary sorted on the rest of the dimensions based on their order in the list. Each node in the tree points to a range in the sorted documents.
The tree structure can be created recursively (starting at root node) as follows:
If a node has more than T records, it is split into multiple children nodes, one for each value of the dimension in the split order corresponding to current level in the tree.
A Star-Node can be created (per configuration) for the current node, by dropping the dimension being split on, and aggregating the metrics for rows containing dimensions with identical values. These aggregated documents are appended to the end of the Star-Tree documents.
If there is only one value for the current dimension, a Star-Node won't be created because the documents under the Star-Node would be identical to those under the single child node.
The above step is repeated recursively until there are no more nodes to split.
Multiple Star-Trees can be generated based on different configurations (dimensionsSplitOrder, aggregations, T)
Aggregation is configured as a pair of an aggregation function and the column to apply the aggregation to.
All types of aggregation function with bounded-sized intermediate result are supported.
Supported functions
COUNT
MIN
MAX
SUM
AVG
MIN_MAX_RANGE
DISTINCT_COUNT_HLL
PERCENTILE_EST
PERCENTILE_TDIGEST
DISTINCT_COUNT_BITMAP
NOTE: The intermediate result RoaringBitmap is not bounded-sized; use carefully on high cardinality columns.
Unsupported functions
DISTINCT_COUNT: Intermediate result Set is unbounded
SEGMENT_PARTITIONED_DISTINCT_COUNT: Intermediate result Set is unbounded
PERCENTILE: Intermediate result List is unbounded
Functions to be supported
DISTINCT_COUNT_THETA_SKETCH
ST_UNION
Multiple index generation configurations can be provided to generate multiple star-trees. Each configuration should contain the following properties:
dimensionsSplitOrder: An ordered list of dimension names can be specified to configure the split order. Only the dimensions in this list are reserved in the aggregated documents. The nodes will be split based on the order of this list. For example, split at level i is performed on the values of dimension at index i in the list.
The star-tree dimension does not have to be a dimension column in the table; it can also be a time column, date-time column, or metric column if necessary.
The star-tree dimension column should be dictionary encoded in order to generate the star-tree index.
All columns in the filter and group-by clause of a query should be included in this list in order to use the star-tree index.
skipStarNodeCreationForDimensions (Optional, default empty): A list of dimension names for which to not create the Star-Node.
functionColumnPairs: A list of aggregation function and column pairs (split by double underscore “__”). E.g. SUM__Impressions (SUM of column Impressions) or COUNT__*.
The column within the function-column pair can be either dictionary encoded or raw.
All aggregations of a query should be included in this list in order to use the star-tree index.
maxLeafRecords (Optional, default 10000): The threshold T to determine whether to further split each node.
A default star-tree index can be added to the segment with the boolean config enableDefaultStarTree under tableIndexConfig, for example:
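"tableIndexConfig": {
  "enableDefaultStarTree": true
}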
The default star-tree will have the following configuration:
All dictionary-encoded single-value dimensions with cardinality smaller or equal to a threshold (10000) will be included in the dimensionsSplitOrder, sorted by their cardinality in descending order.
All dictionary-encoded Time/DateTime columns will be appended to the dimensionsSplitOrder following the dimensions, sorted by their cardinality in descending order. Here we assume that time columns will be included in most queries as the range filter column and/or the group by column, so for better performance, we always include them as the last elements in the dimensionsSplitOrder.
Include COUNT(*) and SUM for all numeric metrics in the functionColumnPairs.
Use default maxLeafRecords (10000).
For our example data set, in order to solve the following query efficiently:
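A hedged sketch of such a query over the example data set (the table name is a placeholder):
SELECT SUM(Impressions)
FROM myTable
WHERE Country = 'USA' AND Browser = 'Chrome'
GROUP BY Locale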
We may configure the star-tree index as follows:
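A sketch of such a configuration under tableIndexConfig (the starTreeIndexConfigs key is an assumption; the fields follow the properties described above):
"tableIndexConfig": {
  "starTreeIndexConfigs": [
    {
      "dimensionsSplitOrder": ["Country", "Browser", "Locale"],
      "skipStarNodeCreationForDimensions": [],
      "functionColumnPairs": ["SUM__Impressions"],
      "maxLeafRecords": 10000
    }
  ]
}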
The star-tree and documents should be something like below:
The values in the parentheses are the aggregated sum of Impressions for all the documents under the node.
Star-tree documents
For query execution, the idea is to first check metadata to determine whether the query can be solved with the Star-Tree documents, then traverse the Star-Tree to identify documents that satisfy all the predicates. After applying any remaining predicates that were missed while traversing the Star-Tree to the identified documents, apply aggregation/group-by on the qualified documents.
The algorithm to traverse the tree can be described as follows:
Start from root node.
For each level, what child node(s) to select depends on whether there are any predicates/group-by on the split dimension for the level in the query.
If there is no predicate or group-by on the split dimension, select the Star-Node if it exists, or all child nodes to traverse further.
If there are predicate(s) on the split dimension, select the child node(s) that satisfy the predicate(s).
If there is no predicate, but there is a group-by on the split dimension, select all child nodes except Star-Node.
Recursively repeat the previous step until all leaf nodes are reached, or all predicates are satisfied.
Collect all the documents pointed by the selected nodes.
If all predicates and group-by's are satisfied, pick the single aggregated document from each selected node.
Otherwise, collect all the documents in the document range from each selected node.
Example data set:
Country | Browser | Locale | Impressions
CA      | Chrome  | en     | 400
CA      | Firefox | fr     | 200
MX      | Safari  | es     | 300
MX      | Safari  | en     | 100
USA     | Chrome  | en     | 600
USA     | Firefox | es     | 200
USA     | Firefox | en     | 400
Inverted index for 'Browser':
Browser | Doc Id
Firefox | 1,5,6
Chrome  | 0,4
Safari  | 2,3
Inverted index for 'Locale':
Locale | Doc Id
en     | 0,3,4,6
es     | 2,5
fr     | 1
Pre-aggregated impressions per Country:
Country | Impressions
CA      | 600
MX      | 400
USA     | 1200
Star-tree documents:
Country | Browser | Locale | SUM__Impressions
CA      | Chrome  | en     | 400
CA      | Firefox | fr     | 200
MX      | Safari  | en     | 100
MX      | Safari  | es     | 300
USA     | Chrome  | en     | 600
USA     | Firefox | en     | 400
USA     | Firefox | es     | 200
CA      | *       | en     | 400
CA      | *       | fr     | 200
CA      | *       | *      | 600
MX      | Safari  | *      | 400
USA     | Firefox | *      | 600
USA     | *       | en     | 1000
USA     | *       | es     | 200
USA     | *       | *      | 1200
*       | Chrome  | en     | 1000
*       | Firefox | en     | 400
*       | Firefox | es     | 200
*       | Firefox | fr     | 200
*       | Firefox | *      | 800
*       | Safari  | en     | 100
*       | Safari  | es     | 300
*       | Safari  | *      | 400
*       | *       | en     | 1500
*       | *       | es     | 500
*       | *       | fr     | 200
*       | *       | *      | 2200