Indexing

This page describes the different indexing techniques available in Pinot

Pinot supports the following indexing techniques:

• Forward Index

  • Dictionary-encoded forward index with bit compression

  • Raw value forward index

  • Sorted forward index with run-length encoding

• Inverted Index

  • Bitmap inverted index

  • Sorted inverted index

• Text Index

Each of these techniques has advantages in different query scenarios. By default, Pinot creates a dictionary-encoded forward index for each column.

Enabling indexes

There are 2 ways to create indexes for a Pinot table.

As part of ingestion, during Pinot segment generation

Indexing is enabled by specifying the desired column names in the table config. More details about how to configure each type of index can be found in the respective index's section above or in the Table Config section.

Dynamically added or removed

Indexes can also be dynamically added to or removed from segments at any point. Update your table config with the latest set of indexes you wish to have.

For example, if you have an inverted index on the foo field and now want to include the bar field, you would update your table config from this:

"tableIndexConfig": {
        "invertedIndexColumns": ["foo"],
        ...
    }

To this:

"tableIndexConfig": {
        "invertedIndexColumns": ["foo", "bar"],
        ...
    }

The updated index config won't be picked up unless you invoke the reload API. This API sends reload messages via Helix to all servers, as part of which indexes are added or removed from the local segments. This happens without any downtime and is completely transparent to the queries.

When adding an index, only the new index is created and appended to the existing segment. When removing an index, its related states are cleaned up from Pinot servers. You can find this API under the Segments tab on Swagger:

curl -X POST \
  "http://localhost:9000/segments/myTable/reload" \
  -H "accept: application/json"

You can also find this action in the Cluster Manager in the Pinot UI, on the specific table's page.

Note: Not all indexes can be retrospectively applied to existing segments. For more detailed documentation on applying indexes, see the Indexing FAQ.

Tuning Index

The inverted index provides good performance for most use cases, especially if your use case doesn't have a strict low-latency requirement. Start with this index, and if your queries aren't fast enough, switch to advanced indices like the sorted or star-tree index.

"tableIndexConfig": {
        "invertedIndexColumns": ["foo"],
        ...
    }
"tableIndexConfig": {
        "invertedIndexColumns": ["foo", "bar"],
        ...
    }
curl -X POST \
  "http://localhost:9000/segments/myTable/reload" \
  -H "accept: application/json"

Inverted Index

Bitmap inverted index

When an inverted index is enabled for a column, Pinot maintains a map from each value to a bitmap of rows, which makes value lookup take constant time. If you have a column that is frequently used for filtering, adding an inverted index will improve performance greatly.
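For instance, with an inverted index on a foo column, a filter such as the one below (an illustrative query; the table and column names are hypothetical) resolves to a bitmap lookup rather than a full column scan:

SELECT COUNT(*)
FROM myTable
WHERE foo = 'bar'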

An inverted index can be configured for a table by setting it in the table config:

{
    "tableIndexConfig": {
        "invertedIndexColumns": [
            "column_name",
            ...
        ],
        ...
    }
}

Sorted inverted index

A sorted forward index can directly be used as an inverted index, with O(log n) lookup time, and it can benefit from data locality.

For the example below, if the query has a filter on memberId, Pinot will perform a binary search on the memberId values to find the docId range pair for the filtering value. If the query needs to scan values of other columns after filtering, the values within the docId range pair are located together, which means we can benefit from data locality.

A sorted index performs much better than an inverted index, but it can only be applied to one column per table. When query performance with an inverted index is not good enough and most queries filter on the same column (e.g. memberId), a sorted index can improve query performance.

[Diagram: sorted inverted index]

Bloom Filter

A Bloom filter helps prune segments that do not contain any record matching an EQUALITY predicate.

It would be useful for a query like the following:

SELECT COUNT(*) 
FROM baseballStats 
WHERE playerID = 12345

There are 3 parameters to configure the Bloom Filter:

  • fpp: False positive probability of the bloom filter (from 0 to 1, 0.05 by default). The lower the fpp, the higher the accuracy of the bloom filter, but the larger its size.

  • maxSizeInBytes: Maximum size of the bloom filter (unlimited by default). If a certain fpp generates a bloom filter larger than this size, we will increase the fpp to keep the bloom filter size within this limit.

  • loadOnHeap: Whether to load the bloom filter using heap memory or off-heap memory (false by default).

There are two ways to configure a bloom filter for a table in the table config:

• Default settings:

{
  "tableIndexConfig": {
    "bloomFilterColumns": [
      "playerID",
      ...
    ],
    ...
  },
  ...
}

• Customized parameters:

{
  "tableIndexConfig": {
    "bloomFilterConfigs": {
      "playerID": {
        "fpp": 0.01,
        "maxSizeInBytes": 1000000,
        "loadOnHeap": true
      },
      ...
    },
    ...
  },
  ...
}

A Bloom Filter can only be applied to dictionary-encoded columns. Support for raw value columns is WIP.


Range Index

Range indexing allows you to get better performance for queries that involve filtering over a range.

It would be useful for a query like the following:

SELECT COUNT(*) 
FROM baseballStats 
WHERE hits > 11

A range index is a variant of an inverted index, where instead of creating a mapping from values to columns, we create a mapping from a range of values to columns. You can use the range index by setting the following config in the table config:

{
    "tableIndexConfig": {
        "rangeIndexColumns": [
            "column_name",
            ...
        ],
        ...
    }
}

Range index is supported for both dictionary-encoded and raw encoded columns.

When to use Range Index?

A good rule of thumb is to use a range index when you want to apply range predicates on metric columns that have a very large number of unique values.

Using an inverted index for such columns will create a very large index that is inefficient in terms of storage and performance.

Native Text Index

This page talks about native text indices and the corresponding search functionality in Pinot.

History Of Text Indexing And Search in Pinot

Pinot supports text indexing and search by building Lucene indices as "sidecars" to the main Pinot segments. While this approach works well, it limits the avenues of optimization that are possible for Pinot-specific text search use cases.

How Is Pinot Different?

Pinot, or any other database/OLAP engine, does not need to conform to the entire full-text search DSL traditionally used by FTS engines like Elasticsearch and Solr. Looking at traditional SQL-like text search use cases, the majority of text searches comprise three patterns: prefix wildcard queries, postfix wildcard queries, and term queries.

Native Text Indices in Pinot

Native text indices are built from the ground up. They use a custom text indexing engine, coupled with Pinot's powerful inverted indices, to provide a super fast text search experience.

Benefits of Native Text Indices

Native text indices are 80-120% faster than Lucene based indices for the text search use cases mentioned above. They are also 40% smaller on disk.

Real Time Indexing And Search

A new feature that native text indices support is real-time text search. For REALTIME tables, native text indices allow data to be indexed in memory in the text index, while concurrently supporting text searches on the same index.

Historically, most text indices depend on the in-memory text index being written first and then sealed before searches are possible. This limits the freshness of the search, being near real-time at best.

Native text indices come with a custom in memory text index, which allows for real time indexing and search.

Searching Native Text Indices

A new function, TEXT_CONTAINS, is introduced for supporting text search on native text indices:

SELECT COUNT(*) FROM Foo WHERE TEXT_CONTAINS (<column_name>, <search_expression>)

Examples of prefix wildcard, postfix wildcard, and term queries:

SELECT COUNT(*) FROM Foo WHERE TEXT_CONTAINS (<column_name>, "foo.*")
SELECT COUNT(*) FROM Foo WHERE TEXT_CONTAINS (<column_name>, ".*bar")
SELECT COUNT(*) FROM Foo WHERE TEXT_CONTAINS (<column_name>, "foo")

TEXT_CONTAINS can be combined using standard boolean operators:

SELECT COUNT(*) FROM Foo WHERE TEXT_CONTAINS ("col1", "foo") AND TEXT_CONTAINS ("col2", "bar")

Note that TEXT_CONTAINS supports regex and term queries for now. Also, TEXT_CONTAINS will work only on native indices.

Note that TEXT_CONTAINS supports standard regex patterns (as used by LIKE in the SQL standard), so there might be some syntactic differences from Lucene queries.

Creating Native Text Indices

Native text indices are a type of text search index that Pinot supports, and are therefore created in the regular way, using field configs to configure a text index on a given field. To indicate that the index type is native, an additional property has to be specified in the field config:

"fieldConfigList":[
  {
     "name":"text_col_1",
     "encodingType":"RAW",
     "indexType":"TEXT",
     "properties":{"fstType":"native"}
  }
]


Forward Index

The values for every column are stored in a forward index, of which there are three types:

• Dictionary-encoded forward index: Builds a dictionary mapping 0-indexed ids to each unique value in a column, and a forward index that contains the bit-compressed ids.

• Sorted forward index: Builds a dictionary mapping from each unique value to a pair of start and end document ids, and a forward index on top of the dictionary encoding.

• Raw value forward index: Builds a forward index of the column's values.

To save segment storage space, the forward index can now be disabled when creating new tables.

Dictionary-encoded forward index with bit compression (default)

Each unique value from a column is assigned an id and a dictionary is built that maps the id to the value. The forward index stores bit-compressed ids instead of the values. If you have few unique values, dictionary-encoding can significantly improve space efficiency.
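As a hypothetical illustration (the values are made up), a colA holding repeated string values might be encoded as:

colA values:        ["impression", "click", "impression", "impression"]
dictionary:         0 → "click", 1 → "impression"
forward index ids:  [1, 0, 1, 1]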

The diagram below shows the dictionary encoding for two columns with integer and string types. For colA, dictionary encoding saved a significant amount of space for duplicated values.

On the other hand, colB has no duplicated data. Dictionary encoding does not compress much data in this case, where there are many unique values in the column. For the string type, we pick the length of the longest value and use it as the length of the dictionary's fixed-length value array. The padding overhead can be high if a column has a large number of unique values.

Sorted forward index with run-length encoding

When a column is physically sorted, Pinot uses a sorted forward index with run-length encoding on top of the dictionary-encoding. Instead of saving dictionary ids for each document id, Pinot will store a pair of start and end document ids for each value.
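As a hypothetical illustration (the values are made up), a sorted memberId column could be stored as a start/end docId pair per value:

memberId 101 → docIds (0, 3)
memberId 102 → docIds (4, 5)
memberId 103 → docIds (6, 9)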

[Diagram: sorted forward index]

(For simplicity, this diagram does not include the dictionary encoding layer.)

The sorted forward index has the advantages of both good compression and data locality. It can also be used as an inverted index.

Real-time tables

A sorted index can be configured for a table by setting it in the table config:

{
    "tableIndexConfig": {
        "sortedColumn": [
            "column_name"
        ],
        ...
    }
}

Note: A Pinot table can only have one sorted column.

Real-time data ingestion will sort data by the sortedColumn when generating segments - you don't need to pre-sort the data.

When a segment is committed, Pinot will do a pass over the data in each column and create a sorted index for any other columns that contain sorted data, even if they aren't specified as the sortedColumn.

Offline tables

For offline data ingestion, Pinot will do a pass over the data in each column and create a sorted index for columns that contain sorted data.

This means that if you want a column to have a sorted index, you will need to sort the data by that column before ingesting it into Pinot.

If you are ingesting multiple segments, you will need to make sure that data is sorted within each segment - you don't need to sort the data across segments.

Checking sort status

You can check the sorted status of a column in a segment by running the following:

$ grep memberId <segment_name>/v3/metadata.properties | grep isSorted
column.memberId.isSorted = true

Alternatively, for offline tables and for committed segments in real-time tables, you can retrieve the sorted status from the getServerMetadata endpoint. The following example is based on the Batch Quick Start:

curl -X GET \
  "http://localhost:9000/segments/baseballStats/metadata?columns=playerID&columns=teamID" \
  -H "accept: application/json" 2>/dev/null | \
  jq -c  '.[] | . as $parent |  
          .columns[] | 
          [$parent .segmentName, .columnName, .sorted]'
["baseballStats_OFFLINE_0","teamID",false]
["baseballStats_OFFLINE_0","playerID",false]

Raw value forward index

The raw value forward index directly stores values instead of ids.

Without the dictionary, the dictionary lookup step can be skipped for each value fetch. The index can also take advantage of the good locality of the values, thus improving the performance of scanning a large number of values.

The raw value forward index works well for columns that have a large number of unique values where a dictionary does not provide much compression.

As seen in the above diagram, using dictionary encoding requires many random memory accesses for dictionary look-ups. With a raw value forward index, values can be scanned sequentially, which can improve query performance when applied appropriately.

A raw value forward index can be configured for a table in the table config, as shown below:

{
    "tableIndexConfig": {
        "noDictionaryColumns": [
            "column_name",
            ...
        ],
        ...
    }
}

Dictionary encoded vs raw value

When working out whether a column should use dictionary encoded or raw value encoding, the following comparison table may help:

Dictionary | Raw Value
---|---
Provides compression when cardinality is low to medium. | Eliminates padding overhead.
Allows for indexing (especially inverted index). | No inverted index (only JSON/Text/FST index).
Adds one level of dereferencing, so can increase disk seeks. | Eliminates additional dereferencing, so good when all docs of interest are contiguous.
For strings, adds padding to make all values equal length in the dictionary. | Chunk de-compression overhead when the selected docs don't have spatial locality.

Disabling the forward index

Traditionally the forward index has been a mandatory index for all columns in the on-disk segment file format.

However, certain columns may only be used as a filter in the WHERE clause of all queries. In such scenarios the forward index is not necessary, as other indexes and structures in the segments can provide the required SQL query functionality. In these cases the forward index just takes up extra storage space and can ideally be freed up.

Thus, to provide users an option to save storage space, a knob to disable the forward index is now available.

The forward index on one or more column(s) of your Pinot table can be disabled, with the following limitations:

  • Only supported for immutable (offline) segments.

  • If the column has a range index, then the column must be of single-value type and use range index version 2.

  • MV columns with duplicates within a row will lose the duplicated entries on forward index regeneration. The ordering of data within an MV row may also change on regeneration. A backfill is required in such scenarios (to preserve duplicates or ordering).

  • If forward index regeneration support on reload (i.e. re-enabling the forward index for a forward index disabled column) is required then the dictionary and inverted index must be enabled on that particular column.

Sorted columns will allow the forward index to be disabled, but this operation will be treated as a no-op and the index (which acts as both a forward index and inverted index) will be created.

To disable the forward index for a given column, modify the fieldConfigList in the table config, as shown below:

"fieldConfigList":[
  {
     "name":"columnA",
     "encodingType":"DICTIONARY",
     "indexTypes":["INVERTED"],
     "properties":{"forwardIndexDisabled":"true"}
  }
]

A table reload operation must be performed for the above config to take effect. Enabling / disabling other indexes on the column can be done via the usual table config options.

The forward index can also be regenerated for a column where it is disabled by removing the property forwardIndexDisabled from the fieldConfigList properties bucket and reloading the segment. The forward index can only be regenerated if the dictionary and inverted index have been enabled for the column. If either have been disabled then the only way to get the forward index back is to regenerate the segments via the offline jobs and re-push / refresh the data.

Warning:

For multi-value (MV) columns the following invariants cannot be maintained after regenerating the forward index for a forward index disabled column:

  • Ordering guarantees of the MV values within a row

  • If entries within an MV row are duplicated, the duplicates will be lost. Please regenerate the segments via your offline jobs and re-push / refresh the data to get back the original MV data with duplicates.

We will work on removing the second invariant in the future.

Examples of queries which will fail after disabling the forward index for an example column, columnA, can be found below:

Select

Forward index disabled columns cannot be present in the SELECT clause, even if filters are added on them:

SELECT columnA
FROM myTable
    WHERE columnA = 10

SELECT *
FROM myTable

Group By Order By

Forward index disabled columns cannot be present in the GROUP BY and ORDER BY clauses. They also cannot be part of the HAVING clause:

SELECT SUM(columnB)
FROM myTable
GROUP BY columnA

SELECT SUM(columnB), columnA
FROM myTable
GROUP BY columnA
ORDER BY columnA

SELECT MIN(columnA)
FROM myTable
GROUP BY columnB
HAVING MIN(columnA) > 100
ORDER BY columnB

Aggregation Queries

A subset of the aggregation functions do work when the forward index is disabled, such as MIN, MAX, DISTINCTCOUNT, DISTINCTCOUNTHLL and more. Some other aggregation functions will not work, such as the below:

SELECT SUM(columnA), AVG(columnA)
FROM myTable

SELECT MAX(ADD(columnA, columnB))
FROM myTable

Distinct

Forward index disabled columns cannot be present in the SELECT DISTINCT clause:

SELECT DISTINCT columnA
FROM myTable

Range Queries

To run queries on single-value columns where the filter clause contains operators such as >, <, >=, <=, a version 2 range index must be present. Without the range index, such queries will fail, as shown below:

SELECT columnB
FROM myTable
    WHERE columnA > 1000


Timestamp Index

Speed up your time query with different granularities


This feature is supported from Pinot 0.11+.

Pinot introduced the TIMESTAMP data type in the 0.8.0 release. This data type stores values internally as millisecond epoch long values.

Typically, analytics queries don't need this low-level granularity, and scanning the data and converting time values can be costly at large data sizes.

A common query pattern for timestamp columns is filtering on a time range and then grouping by different time granularities (day/month/etc.).

The existing implementation requires the query executor to extract values and apply the transform functions before doing the filter/group-by, without leveraging the dictionary or any index.

Hence the TIMESTAMP index, which improves query performance for range and group-by queries on TIMESTAMP columns.

Supported data type

A TIMESTAMP index can only be created on columns of the TIMESTAMP data type.

Timestamp Index

Users can configure the most useful granularities for a TIMESTAMP data type column:

1. Pinot will pre-generate one column per time granularity, with a forward index and a range index. The naming convention is $${ts_column_name}$${ts_granularity}; e.g. a Timestamp column ts with granularities DAY and MONTH will have two extra columns generated: $ts$DAY and $ts$MONTH.

2. Query overwrite for predicates and selection/group-by:

   2.1 GROUP BY: functions like dateTrunc('DAY', ts) will be translated to use the underlying column $ts$DAY to fetch data.

   2.2 PREDICATE: a range index is auto-built for all granularity columns.

Example query usage:

select count(*), 
       datetrunc('WEEK', ts) as tsWeek 
from airlineStats 
WHERE tsWeek > fromDateTime('2014-01-16', 'yyyy-MM-dd') 
group by tsWeek
limit 10

Some preliminary benchmarks show query performance over 2.7 billion records improving from 45 secs (without the timestamp index) to 4.2 secs (with it) for a query like the one below; the run without the index needed the extended timeout hint shown:

select dateTrunc('YEAR', event_time) as y, 
       dateTrunc('MONTH', event_time) as m,  
       sum(pull_request_commits) 
from githubEvents 
group by y, m 
limit 1000
Option(timeoutMs=3000000)

Usage

The timestamp index is configured on a per-column basis inside the fieldConfigList section of the table config.

Users need to specify TIMESTAMP as part of the indexTypes. Then in the field timestampConfig, specify the granularities that you want to index.

Sample config:

{
  "tableName": "airlineStats",
  "tableType": "OFFLINE",
  "segmentsConfig": {
    "timeColumnName": "DaysSinceEpoch",
    "timeType": "DAYS",
    "segmentPushType": "APPEND",
    "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
    "replication": "1"
  },
  "tenants": {},
  "fieldConfigList": [
    {
      "name": "ts",
      "encodingType": "DICTIONARY",
      "indexTypes": ["TIMESTAMP"],
      "timestampConfig": {
        "granularities": [
          "DAY",
          "WEEK",
          "MONTH"
        ]
      }
    }
  ],
  "tableIndexConfig": {
    "loadMode": "MMAP"
  },
  "metadata": {
    "customConfigs": {}
  },
  "ingestionConfig": {}
}

Geospatial

This page talks about geospatial support in Pinot.

Pinot supports SQL/MM geospatial data and is compliant with the Open Geospatial Consortium's (OGC) OpenGIS Specifications. This includes:

• Geospatial data types, such as point, line and polygon;

• Geospatial functions, for querying of spatial properties and relationships;

• Geospatial indexing, used for efficient processing of spatial operations.

Geospatial data types

Geospatial data types abstract and encapsulate spatial structures such as boundary and dimension. In many respects, spatial data types can be understood simply as shapes. Pinot supports the Well-Known Text (WKT) and Well-Known Binary (WKB) form of geospatial objects, for example:

• POINT (0 0)

• LINESTRING (0 0, 1 1, 2 1, 2 2)

• POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0), (1 1, 1 2, 2 2, 2 1, 1 1))

• MULTIPOINT (0 0, 1 2)

• MULTILINESTRING ((0 0, 1 1, 1 2), (2 3, 3 2, 5 4))

• MULTIPOLYGON (((0 0, 4 0, 4 4, 0 4, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1)), ((-1 -1, -1 -2, -2 -2, -2 -1, -1 -1)))

• GEOMETRYCOLLECTION (POINT (2 0), POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0)))

Geometry vs Geography

It is common to have data in which the coordinates are geographic, i.e. latitude/longitude. Unlike coordinates in Mercator or UTM, geographic coordinates are not Cartesian coordinates.

  • Geographic coordinates do not represent a linear distance from an origin as plotted on a plane. Rather, these spherical coordinates describe angular coordinates on a globe.

  • Spherical coordinates specify a point by the angle of rotation from a reference meridian (longitude), and the angle from the equator (latitude).

You can treat geographic coordinates as approximate Cartesian coordinates and continue to do spatial calculations. However, measurements of distance, length and area will be nonsensical. Since spherical coordinates measure angular distance, the units are in degrees.

Pinot supports both geometry and geography types, which can be constructed by the corresponding functions shown in the Constructors section below. For the geography types, the measurement functions such as ST_Distance and ST_Area calculate the spherical distance and area on earth, respectively.
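For example, a sketch (the literal coordinates and table name are made up for illustration) of computing a spherical distance in meters between two constructed points:

SELECT ST_Distance(
  toSphericalGeography(ST_Point(-122.0, 37.0)),
  toSphericalGeography(ST_Point(-121.0, 37.0))
)
FROM myTable
LIMIT 1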

Geospatial functions

For manipulating geospatial data, Pinot provides a set of functions for analyzing geometric components, determining spatial relationships, and manipulating geometries. In particular, geospatial functions that begin with the ST_ prefix support the SQL/MM specification.

The following geospatial functions are available out of the box in Pinot:

Aggregations

• ST_Union(geometry[] g1_array) → Geometry
  This aggregate function returns a MULTI geometry or NON-MULTI geometry from a set of geometries. It ignores NULL geometries.

Constructors

• ST_GeomFromText(String wkt) → Geometry
  Returns a geometry type object from WKT representation, with the optional spatial system reference.

• ST_GeomFromWKB(bytes wkb) → Geometry
  Returns a geometry type object from WKB representation.

• ST_Point(double x, double y) → Point
  Returns a geometry type point object with the given coordinate values.

• ST_Polygon(String wkt) → Polygon
  Returns a geometry type polygon object from WKT representation.

• ST_GeogFromWKB(bytes wkb) → Geography
  Creates a geography instance from a Well-Known Binary geometry representation (WKB).

• ST_GeogFromText(String wkt) → Geography
  Returns a specified geography value from Well-Known Text representation or extended (WKT).

Measurements

• ST_Area(Geometry/Geography g) → double
  For geometry type, it returns the 2D Euclidean area of a geometry. For geography, returns the area of a polygon or multi-polygon in square meters using a spherical model for Earth.

• ST_Distance(Geometry/Geography g1, Geometry/Geography g2) → double
  For geometry type, returns the 2-dimensional cartesian minimum distance (based on spatial ref) between two geometries in projected units. For geography, returns the great-circle distance in meters between two SphericalGeography points. Note that g1, g2 shall have the same type.

• ST_GeometryType(Geometry g) → String
  Returns the type of the geometry as a string, e.g. ST_Linestring, ST_Polygon, ST_MultiPolygon etc.

Outputs

• ST_AsBinary(Geometry/Geography g) → bytes
  Returns the WKB representation of the geometry.

• ST_AsText(Geometry/Geography g) → string
  Returns the WKT representation of the geometry/geography.

Conversion

• toSphericalGeography(Geometry g) → Geography
  Converts a Geometry object to a spherical geography object.

• toGeometry(Geography g) → Geometry
  Converts a spherical geographical object to a Geometry object.

Relationship

• ST_Contains(Geometry/Geography, Geometry/Geography) → boolean
  Returns true if and only if no points of the second geometry/geography lie in the exterior of the first geometry/geography, and at least one point of the interior of the first geometry lies in the interior of the second geometry. Warning: ST_Contains on Geography only gives a close approximation.

• ST_Equals(Geometry, Geometry) → boolean
  Returns true if the given geometries represent the same geometry/geography.

• ST_Within(Geometry, Geometry) → boolean
  Returns true if the first geometry is completely inside the second geometry.

Geospatial index

Geospatial functions are typically expensive to evaluate, and using a geoindex can greatly accelerate query evaluation. Geoindexing in Pinot is based on Uber's H3, a hexagon-based hierarchical gridding.

A given geospatial location (longitude, latitude) maps to one hexagon (represented as an H3Index), and its neighbors in H3 can be approximated by a ring of hexagons. To quickly identify the distance between any two geospatial locations, we can convert the two locations to their H3Index values and then check the H3 distance between them. H3 distance is measured as the number of hexagons.

For example, in the diagram below, the red hexagons are within distance 1 of the central hexagon. The size of a hexagon is determined by the resolution of the indexing. Please check the table of resolutions for the corresponding precision (measured in km).

[Diagram: Hexagonal grid in H3]

How to use Geoindex

To use the geoindex, first declare the geolocation field as bytes in the schema, as in the QuickStart example:

{
      "dataType": "BYTES",
      "name": "location_st_point",
      "transformFunction": "toSphericalGeography(stPoint(lon,lat))"
}

Note the use of the transformFunction, which converts the created point into SphericalGeography format, as needed by the ST_Distance function.

Next, declare the geospatial index in the table config:

{
  "fieldConfigList": [
    {
      "name": "location_st_point",
      "encodingType": "RAW",
      "indexType": "H3",
      "properties": {
        "resolutions": "5"
      }
    }
  ],
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "noDictionaryColumns": [
      "location_st_point"
    ]
  }
}

The query below will use the geoindex to filter the Starbucks stores within 5km of the given point in the bay area:

SELECT address, ST_DISTANCE(location_st_point, ST_Point(-122, 37, 1))
FROM starbucksStores
WHERE ST_DISTANCE(location_st_point, ST_Point(-122, 37, 1)) < 5000
limit 1000

[Diagram: Geoindex example]

How Geoindex works

Geoindex in Pinot accelerates the query evaluation without compromising the correctness of the query result. Currently, geoindex supports the ST_Distance function used in the range predicates in the WHERE clause, as shown in the query example in the previous section.

At a high level, the geoindex is used to retrieve the records within the nearby hexagons of the given location, and then ST_Distance is used to accurately filter the matched results.

As in the example diagram above, if we want to find all relevant points within a given distance of San Francisco (the area within the red circle), then the algorithm with geoindex works as follows:

• Find the H3 distance x that contains the range (i.e. the red circle).

• For the points within the H3 distance (i.e. covered by the hexagons fully within kRing(x)), we can directly take those points without filtering.

• For the points falling on the boundary of the H3 distance (i.e. in the hexagons of kRing(x)), we filter them by evaluating the condition ST_Distance(loc1, loc2) < x.


    JSON Index

    JSON index can be applied to JSON string columns to accelerate the value lookup and filtering for the column.

When to use JSON index

JSON strings can be used to represent arrays, maps, and nested fields without forcing a fixed schema. This is very flexible, but the flexibility comes at a cost - filtering on JSON string columns is very expensive.

Suppose we have some JSON records similar to the following sample record stored in the person column:

{
  "name": "adam",
  "age": 30,
  "country": "us",
  "addresses":
  [
    {
      "number" : 112,
      "street" : "main st",
      "country" : "us"
    },
    {
      "number" : 2,
      "street" : "second st",
      "country" : "us"
    },
    {
      "number" : 3,
      "street" : "third st",
      "country" : "ca"
    }
  ]
}

    Without an index, in order to look up a key and filter records based on the value, we need to scan and reconstruct the JSON object from the JSON string for every record, look up the key and then compare the value.

For example, in order to find all persons whose name is "adam", the query will look like:

SELECT * 
FROM mytable 
WHERE JSON_EXTRACT_SCALAR(person, '$.name', 'STRING') = 'adam'

    JSON index is designed to accelerate the filtering on JSON string columns without scanning and reconstructing all the JSON objects.

Configure JSON index

To enable the JSON index, set the following config in the table config.

Config since release 0.12.0:

{
  "tableIndexConfig": {
    "jsonIndexConfigs": {
      "person": {
        "maxLevels": 2,
        "excludeArray": false,
        "disableCrossArrayUnnest": true,
        "includePaths": null,
        "excludePaths": null,
        "excludeFields": null
      },
      ...
    },
    ...
  }
}

Config Key | Description | Type | Default
---|---|---|---
maxLevels | Max levels to flatten the json object (array is also counted as one level) | int | -1 (unlimited)
excludeArray | Whether to exclude array when flattening the object | boolean | false (include array)
disableCrossArrayUnnest | Whether to not unnest multiple arrays (unique combination of all elements) | boolean | false (calculate unique combination of all elements)
includePaths | Only include the given paths, e.g. "$.a.b", "$.a.c[*]" (mutually exclusive with excludePaths). Paths under the included paths will be included, e.g. "$.a.b.c" will be included when "$.a.b" is configured to be included. | Set<String> | null (include all paths)
excludePaths | Exclude the given paths, e.g. "$.a.b", "$.a.c[*]" (mutually exclusive with includePaths). Paths under the excluded paths will also be excluded, e.g. "$.a.b.c" will be excluded when "$.a.b" is configured to be excluded. | Set<String> | null (include all paths)
excludeFields | Exclude the given fields, e.g. "b", "c", even if they are under the included paths. | Set<String> | null (include all fields)

Example:

With the following JSON document:

{
  "name": "adam",
  "age": 20,
  "addresses": [
    {
      "country": "us",
      "street": "main st",
      "number": 1
    },
    {
      "country": "ca",
      "street": "second st",
      "number": 2
    }
  ],
  "skills": [
    "english",
    "programming"
  ]
}

With the default setting, we will flatten the document into the following records:

{
  "name": "adam",
  "age": 20,
  "addresses[0].country": "us",
  "addresses[0].street": "main st",
  "addresses[0].number": 1,
  "skills[0]": "english"
},
{
  "name": "adam",
  "age": 20,
  "addresses[0].country": "us",
  "addresses[0].street": "main st",
  "addresses[0].number": 1,
  "skills[1]": "programming"
},
{
  "name": "adam",
  "age": 20,
  "addresses[1].country": "ca",
  "addresses[1].street": "second st",
  "addresses[1].number": 2,
  "skills[0]": "english"
},
{
  "name": "adam",
  "age": 20,
  "addresses[1].country": "ca",
  "addresses[1].street": "second st",
  "addresses[1].number": 2,
  "skills[1]": "programming"
}

With maxLevels set to 1:

{
  "name": "adam",
  "age": 20
}

With maxLevels set to 2:

{
  "name": "adam",
  "age": 20,
  "skills[0]": "english"
},
{
  "name": "adam",
  "age": 20,
  "skills[1]": "programming"
}

With excludeArray set to true:

{
  "name": "adam",
  "age": 20
}

With disableCrossArrayUnnest set to true:

{
  "name": "adam",
  "age": 20,
  "addresses[0].country": "us",
  "addresses[0].street": "main st",
  "addresses[0].number": 1
},
{
  "name": "adam",
  "age": 20,
  "addresses[1].country": "ca",
  "addresses[1].street": "second st",
  "addresses[1].number": 2
},
{
  "name": "adam",
  "age": 20,
  "skills[0]": "english"
},
{
  "name": "adam",
  "age": 20,
  "skills[1]": "programming"
}

With includePaths set to ["$.name", "$.addresses[*].country"]:

{
  "name": "adam",
  "addresses[0].country": "us"
},
{
  "name": "adam",
  "addresses[1].country": "ca"
}

With excludePaths set to ["$.age", "$.addresses[*].number"]:

{
  "name": "adam",
  "addresses[0].country": "us",
  "addresses[0].street": "main st",
  "skills[0]": "english"
},
{
  "name": "adam",
  "addresses[0].country": "us",
  "addresses[0].street": "main st",
  "skills[1]": "programming"
},
{
  "name": "adam",
  "addresses[1].country": "ca",
  "addresses[1].street": "second st",
  "skills[0]": "english"
},
{
  "name": "adam",
  "addresses[1].country": "ca",
  "addresses[1].street": "second st",
  "skills[1]": "programming"
}

With excludeFields set to ["age", "street"]:

{
  "name": "adam",
  "addresses[0].country": "us",
  "addresses[0].number": 1,
  "skills[0]": "english"
},
{
  "name": "adam",
  "addresses[0].country": "us",
  "addresses[0].number": 1,
  "skills[1]": "programming"
},
{
  "name": "adam",
  "addresses[1].country": "ca",
  "addresses[1].number": 2,
  "skills[0]": "english"
},
{
  "name": "adam",
  "addresses[1].country": "ca",
  "addresses[1].number": 2,
  "skills[1]": "programming"
}

Legacy config before release 0.12.0:

The legacy config has the same behavior as the default settings in the new config:

{
  "tableIndexConfig": {        
    "jsonIndexColumns": [
      "person",
      ...
    ],
    ...
  }
}

    Note that JSON index can only be applied to STRING/JSON columns whose values are JSON strings.

When you're using a JSON index, we recommend that you add the indexed column to the noDictionaryColumns list to reduce unnecessary storage overhead. For instructions on that config property, see the Raw value forward index documentation.

How to use JSON index

JSON index can be used via the JSON_MATCH predicate: JSON_MATCH(<column>, '<filterExpression>'). For example, to find all persons whose name is "adam", the query will look like:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.name"=''adam''')

    Note that the quotes within the filter expression need to be escaped.


    In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'''

Supported filter expressions

Simple key lookup

Find all persons whose name is "adam":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.name"=''adam''')


    In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'''

Chained key lookup

Find all persons who have an address (one of the addresses) with number 112:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[*].number"=112')


    In release 0.7.1, we use the old syntax for filterExpression: 'addresses.number=112'

Nested filter expression

Find all persons whose name is "adam" and who also have an address (one of the addresses) with number 112:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.name"=''adam'' AND "$.addresses[*].number"=112')


    In release 0.7.1, we use the old syntax for filterExpression: 'name=''adam'' AND addresses.number=112'

Array access

Find all persons whose first address has number 112:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[0].number"=112')


    In release 0.7.1, we use the old syntax for filterExpression: '"addresses[0].number"=112'

Existence check

Find all persons who have a phone field within the JSON:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.phone" IS NOT NULL')


    In release 0.7.1, we use the old syntax for filterExpression: 'phone IS NOT NULL'

Find all persons whose first address does not contain a floor field within the JSON:

SELECT ... 
FROM mytable
WHERE JSON_MATCH(person, '"$.addresses[0].floor" IS NULL')


    In release 0.7.1, we use the old syntax for filterExpression: '"addresses[0].floor" IS NULL'

JSON context is maintained

    The JSON context is maintained for object elements within an array, i.e. the filter won't cross-match different objects in the array.

To find all persons who live on "main st" in "ca":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[*].street"=''main st'' AND "$.addresses[*].country"=''ca''')

    This query won't match "adam" because none of his addresses matches both the street and the country.

If JSON context is not desired, use multiple separate JSON_MATCH predicates. E.g. to find all persons who have an address on "main st" and an address in "ca" (not necessarily the same address):

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[*].street"=''main st''') AND JSON_MATCH(person, '"$.addresses[*].country"=''ca''')

    This query will match "adam" because one of his addresses matches the street and another one matches the country.

Note that the array index is maintained as a separate entry within the element, so in order to query different elements within an array, multiple JSON_MATCH predicates are required. E.g. to find all persons whose first address is on "main st" and second address is on "second st":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[0].street"=''main st''') AND JSON_MATCH(person, '"$.addresses[1].street"=''second st''')

Supported JSON values

Object

    See examples above.

Array

For a value like ["item1", "item2", "item3"] stored in arrayCol, to find the records with array element "item1":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(arrayCol, '"$[*]"=''item1''')

To find the records with second array element "item2" in "arrayCol":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(arrayCol, '"$[1]"=''item2''')

Value

For scalar values such as 123, 1.23, or "Hello World", to find the records with value 123 in "valueCol":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(valueCol, '"$"=123')

Null

To find the records with null in "nullableCol":

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(nullableCol, '"$" IS NULL')


In release 0.7.1, the JSON string must be an object (it cannot be null, a value, or an array); multi-dimensional arrays are not supported.

Limitations

    1. The key (left-hand side) of the filter expression must be the leaf level of the JSON object, e.g. "$.addresses[*]"='main st' won't work.
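For example, filtering on a leaf-level path works; reusing the person column from the examples above:

SELECT ... 
FROM mytable 
WHERE JSON_MATCH(person, '"$.addresses[*].street"=''main st''')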



    Star-Tree Index

Unlike other index techniques, which work on a single column, the star-tree index is built on multiple columns and utilizes pre-aggregated results to significantly reduce the number of values to be processed, thus improving query performance.

One of the biggest challenges in real-time OLAP systems is achieving and maintaining tight SLAs on latency and throughput on large data sets. Existing techniques such as sorted indexes or inverted indexes help improve query latencies, but speed-ups are still limited by the number of documents that need to be processed to compute results. On the other hand, pre-aggregating the results ensures a constant upper bound on query latencies, but can lead to storage space explosion.

Here we introduce the star-tree index, which utilizes pre-aggregated documents in a smart way to achieve low query latencies while using storage space efficiently for aggregation/group-by queries.

Existing solutions

Consider the following data set as an example to discuss the existing approaches:

Country | Browser | Locale | Impressions
---|---|---|---
CA | Chrome | en | 400
CA | Firefox | fr | 200
MX | Safari | es | 300
MX | Safari | en | 100
USA | Chrome | en | 600
USA | Firefox | es | 200
USA | Firefox | en | 400

Sorted index

In this approach, data is sorted on a primary key, which is likely to appear as a filter in most queries in the query set.

This reduces the time to search the documents for a given primary key value from a linear scan O(n) to a binary search O(log n), and also keeps good locality for the documents selected.

    While this is a good improvement over linear scan, there are still a few issues with this approach:

• While sorting on one column does not require additional space, sorting on additional columns requires additional storage space to re-index the records for the various sort orders.

• While search time is reduced from O(n) to O(log n), overall latency is still a function of the total number of documents that need to be processed to answer a query.

Inverted index

In this approach, for each value of a given column, we maintain a list of document ids where the value appears.

Below are the inverted indexes for columns 'Browser' and 'Locale' for our example data set:

Browser | Doc Id
---|---
Firefox | 1,5,6
Chrome | 0,4
Safari | 2,3

Locale | Doc Id
---|---
en | 0,3,4,6
es | 2,5
fr | 1

    For example, if we want to get all the documents where ‘Browser’ is ‘Firefox’, we can look up the inverted index for ‘Browser’ and identify that it appears in documents [1, 5, 6].

    Using an inverted index, we can reduce the search time to constant time O(1). The query latency, however, is still a function of the selectivity of the query, i.e. it increases with the number of documents that need to be processed to answer the query.

Pre-aggregation

    In this technique, we pre-compute the answer for a given query set upfront.

In the example below, we have pre-aggregated the total impressions for each country:

Country | Impressions
---|---
CA | 600
MX | 400
USA | 1200

    With this approach, answering queries about total impressions for a country is a value lookup, because we have eliminated the need to process a large number of documents. However, to be able to answer queries that have multiple predicates means we would need to pre-aggregate for various combinations of different dimensions, which leads to an exponential explosion in storage space.

Star-tree solution

On one end of the spectrum we have indexing techniques that improve search times with a limited increase in space, but do not guarantee a hard upper bound on query latencies. On the other end of the spectrum we have pre-aggregation techniques that offer a hard upper bound on query latencies, but suffer from an exponential explosion of storage space.

[Diagram: space-time trade-off between different techniques]

The star-tree data structure offers a configurable trade-off between space and time and lets us achieve a hard upper bound on query latencies for a given use case. In the following sections we will define the star-tree data structure and explain how Pinot uses it to achieve low latencies with high throughput.

Definitions

    Tree structure

Star-tree is a tree data structure with the following properties:

[Diagram: star-tree structure]

• Root Node (Orange): Single root node, from which the rest of the tree can be traversed.

• Leaf Node (Blue): A leaf node can contain at most T records, where T is configurable.

• Non-leaf Node (Green): Nodes with more than T records are further split into children nodes.

• Star-Node (Yellow): Non-leaf nodes can also have a special child node called the Star-Node. This node contains the pre-aggregated records after removing the dimension on which the data was split for this level.

• Dimensions Split Order ([D1, D2]): Nodes at a given level in the tree are split into children nodes on all values of a particular dimension. The dimensions split order is an ordered list of dimensions that is used to determine the dimension to split on for a given level in the tree.

    Node properties

    The properties stored in each node are as follows:

    • Dimension: The dimension that the node is split on

    • Start/End Document Id: The range of documents this node points to

    • Aggregated Document Id: One single document that is the aggregation result of all documents pointed by this node

    hashtag
    Index generation

    Star-tree index is generated in the following steps:

• The data is first projected as per the dimensionsSplitOrder. Only the dimensions from the split order are reserved; others are dropped. For each unique combination of reserved dimensions, metrics are aggregated per configuration. The aggregated documents are written to a file and serve as the initial star-tree documents (separate from the original documents).

• Sort the star-tree documents based on the dimensionsSplitOrder. They are primary-sorted on the first dimension in this list, and then secondary-sorted on the rest of the dimensions based on their order in the list. Each node in the tree points to a range in the sorted documents.

• The tree structure can be created recursively (starting at the root node) as follows:

  • If a node has more than T records, it is split into multiple children nodes, one for each value of the dimension in the split order corresponding to the current level in the tree.

  • A Star-Node can be created (per configuration) for the current node, by dropping the dimension being split on and aggregating the metrics for rows containing dimensions with identical values. These aggregated documents are appended to the end of the star-tree documents.

    If there is only one value for the current dimension, a Star-Node won't be created, because the documents under the Star-Node are identical to the single node.

• The above step is repeated recursively until there are no more nodes to split.

• Multiple star-trees can be generated based on different configurations (dimensionsSplitOrder, aggregations, T).

    hashtag
    Aggregation

    Aggregation is configured as a pair of aggregation functions and the column to apply the aggregation.

    All types of aggregation function that have a bounded-sized intermediate result are supported.

Supported functions:

• COUNT

• MIN

• MAX

• SUM

• AVG

• MIN_MAX_RANGE

• DISTINCT_COUNT_HLL

• PERCENTILE_EST

• PERCENTILE_TDIGEST

• DISTINCT_COUNT_BITMAP

  • NOTE: The intermediate result RoaringBitmap is not bounded-sized; use carefully on high cardinality columns.

Unsupported functions:

• DISTINCT_COUNT

  • Intermediate result Set is unbounded.

• SEGMENT_PARTITIONED_DISTINCT_COUNT

  • Intermediate result Set is unbounded.

• PERCENTILE

  • Intermediate result List is unbounded.

Functions to be supported:

• DISTINCT_COUNT_THETA_SKETCH

• ST_UNION

Index generation configuration

    Multiple index generation configurations can be provided to generate multiple star-trees. Each configuration should contain the following properties:

• dimensionsSplitOrder: An ordered list of dimension names can be specified to configure the split order. Only the dimensions in this list are reserved in the aggregated documents. The nodes will be split based on the order of this list. For example, the split at level i is performed on the values of the dimension at index i in the list.

  • The star-tree dimension does not have to be a dimension column in the table; it can also be a time column, date-time column, or metric column if necessary.

  • The star-tree dimension column should be dictionary encoded in order to generate the star-tree index.

  • All columns in the filter and group-by clauses of a query should be included in this list in order to use the star-tree index.

• skipStarNodeCreationForDimensions (Optional, default empty): A list of dimension names for which to not create the Star-Node.

• functionColumnPairs: A list of aggregation function and column pairs (split by double underscore "__"). E.g. SUM__Impressions (SUM of column Impressions) or COUNT__*.

  • The column within the function-column pair can be either dictionary encoded or raw.

  • All aggregations of a query should be included in this list in order to use the star-tree index.

• maxLeafRecords (Optional, default 10000): The threshold T to determine whether to further split each node.

Default index generation configuration

    A default star-tree index can be added to a segment by using the boolean config enableDefaultStarTree under the tableIndexConfig.

    A default star-tree will have the following configuration:

• All dictionary-encoded single-value dimensions with cardinality smaller than or equal to a threshold (10000) will be included in the dimensionsSplitOrder, sorted by their cardinality in descending order.

• All dictionary-encoded Time/DateTime columns will be appended to the dimensionsSplitOrder following the dimensions, sorted by their cardinality in descending order. Here we assume that time columns will be included in most queries as the range filter column and/or the group-by column, so for better performance we always include them as the last elements in the dimensionsSplitOrder.

• Include COUNT(*) and SUM for all numeric metrics in the functionColumnPairs.

• Use default maxLeafRecords (10000).

Example

For our example data set, in order to efficiently solve a query that filters on Country and Browser and aggregates Impressions grouped by Locale, we may configure a star-tree index on these columns.
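A minimal sketch of such a query and config, consistent with the star-tree documents shown below (the table name and the exact values, e.g. maxLeafRecords, are assumptions for illustration):

SELECT SUM(Impressions) 
FROM myTable 
WHERE Country = 'USA' AND Browser = 'Chrome' 
GROUP BY Locale

"tableIndexConfig": {
  "starTreeIndexConfigs": [
    {
      "dimensionsSplitOrder": ["Country", "Browser", "Locale"],
      "skipStarNodeCreationForDimensions": [],
      "functionColumnPairs": ["SUM__Impressions"],
      "maxLeafRecords": 1
    }
  ],
  ...
}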

    The star-tree and documents should be something like below:

Tree structure

[Diagram: example star-tree]

    The values in the parentheses are the aggregated sum of Impressions for all the documents under the node.

Star-tree documents

Country | Browser | Locale | SUM__Impressions
---|---|---|---
CA | Chrome | en | 400
CA | Firefox | fr | 200
MX | Safari | en | 100
MX | Safari | es | 300
USA | Chrome | en | 600
USA | Firefox | en | 400
USA | Firefox | es | 200
CA | * | en | 400
CA | * | fr | 200
CA | * | * | 600
MX | Safari | * | 400
USA | Firefox | * | 600
USA | * | en | 1000
USA | * | es | 200
USA | * | * | 1200
* | Chrome | en | 1000
* | Firefox | en | 400
* | Firefox | es | 200
* | Firefox | fr | 200
* | Firefox | * | 800
* | Safari | en | 100
* | Safari | es | 300
* | Safari | * | 400
* | * | en | 1500
* | * | es | 500
* | * | fr | 200
* | * | * | 2200

Query execution

For query execution, the idea is to first check metadata to determine whether the query can be solved with the star-tree documents, then traverse the star-tree to identify documents that satisfy all the predicates. After applying any remaining predicates that were missed while traversing the star-tree, apply the aggregation/group-by on the qualified documents.

The algorithm to traverse the tree can be described as follows:

• Start from the root node.

• For each level, which child node(s) to select depends on whether there are any predicates/group-bys on the split dimension for the level in the query:

  • If there is no predicate or group-by on the split dimension, select the Star-Node if it exists, or all child nodes to traverse further.

  • If there are predicate(s) on the split dimension, select the child node(s) that satisfy the predicate(s).

  • If there is no predicate, but there is a group-by on the split dimension, select all child nodes except the Star-Node.

• Recursively repeat the previous step until all leaf nodes are reached, or all predicates are satisfied.

• Collect all the documents pointed to by the selected nodes.

  • If all predicates and group-bys are satisfied, pick the single aggregated document from each selected node.

  • Otherwise, collect all the documents in the document range from each selected node.

Warning: There is a known bug in star-tree which can mistakenly apply the star-tree index to queries with an OR operator on top of a nested AND or NOT operator in the filter (which cannot be solved with star-tree), causing wrong results, e.g. SELECT COUNT(*) FROM myTable WHERE (A = 1 AND B = 2) OR A = 2. This bug affects releases 0.9.0, 0.9.1, 0.9.2, 0.9.3, and 0.10.0.

    es

    300

    MX

    Safari

    en

    100

    USA

    Chrome

    en

    600

    USA

    Firefox

    es

    200

    USA

    Firefox

    en

    400

    Star-Node (Yellow): Non-leaf nodes can also have a special child node called the Star-Node. This node contains the pre-aggregated records after removing the dimension on which the data was split for this level.

  • Dimensions Split Order ([D1, D2]): Nodes at a given level in the tree are split into children nodes on all values of a particular dimension. The dimensions split order is an ordered list of dimensions that is used to determine the dimension to split on for a given level in the tree.

  • The tree structure can be created recursively (starting at root node) as follows:
    • If a node has more than T records, it is split into multiple children nodes, one for each value of the dimension in the split order corresponding to current level in the tree.

    • A Star-Node can be created (per configuration) for the current node, by dropping the dimension being split on, and aggregating the metrics for rows containing dimensions with identical values. These aggregated documents are appended to the end of the Star-Tree documents.

      If there is only one value for the current dimension, Star-Node won’t be created because the documents under the Star-Node are identical to the single node.

  • The above step is repeated recursively until there are no more nodes to split.

  • Multiple Star-Trees can be generated based on different configurations (dimensionsSplitOrder, aggregations, T)

  • SUM
  • AVG

  • MIN_MAX_RANGE

  • DISTINCT_COUNT_HLL

  • PERCENTILE_EST

  • PERCENTILE_TDIGEST

  • DISTINCT_COUNT_BITMAP

    • NOTE: The intermediate result RoaringBitmap is not bounded-sized, use carefully on high cardinality columns)

  • Intermediate result Set is unbounded

  • PERCENTILE

    • Intermediate result List is unbounded

  • The star-tree dimension column should be dictionary encoded in order to generate the star-tree index.
  • All columns in the filter and group-by clause of a query should be included in this list in order to use the star-tree index.

  • skipStarNodeCreationForDimensions (Optional, default empty): A list of dimension names for which to not create the Star-Node.

  • functionColumnPairs: A list of aggregation function and column pairs (split by double underscore “__”). E.g. SUM__Impressions (SUM of column Impressions) or COUNT__*.

    • The column within the function-column pair can be either dictionary encoded or raw.

    • All aggregations of a query should be included in this list in order to use the star-tree index.

  • maxLeafRecords (Optional, default 10000): The threshold T to determine whether to further split each node.

  • Include COUNT(*) and SUM for all numeric metrics in the functionColumnPairs.
  • Use default maxLeafRecords (10000).

    The star-tree documents for the example data set, with the data sorted on the dimensions split order and the aggregated (Star-Node) documents appended; '*' denotes the aggregated value of a dropped dimension:

    | Country | Browser | Locale | Impressions |
    |---------|---------|--------|-------------|
    | CA      | Chrome  | en     | 400         |
    | CA      | Firefox | fr     | 200         |
    | MX      | Safari  | en     | 100         |
    | MX      | Safari  | es     | 300         |
    | USA     | Chrome  | en     | 600         |
    | USA     | Firefox | en     | 400         |
    | USA     | Firefox | es     | 200         |
    | CA      | *       | en     | 400         |
    | CA      | *       | fr     | 200         |
    | CA      | *       | *      | 600         |
    | MX      | Safari  | *      | 400         |
    | USA     | Firefox | *      | 600         |
    | USA     | *       | en     | 1000        |
    | USA     | *       | es     | 200         |
    | USA     | *       | *      | 1200        |
    | *       | Chrome  | en     | 1000        |
    | *       | Firefox | en     | 400         |
    | *       | Firefox | es     | 200         |
    | *       | Firefox | fr     | 200         |
    | *       | Firefox | *      | 800         |
    | *       | Safari  | en     | 100         |
    | *       | Safari  | es     | 300         |
    | *       | Safari  | *      | 400         |
    | *       | *       | en     | 1500        |
    | *       | *       | es     | 500         |
    | *       | *       | fr     | 200         |
    | *       | *       | *      | 2200        |

    For example, the single aggregated document [USA, *, *, 1200] answers SELECT SUM(Impressions) FROM myTable WHERE Country = 'USA' (600 + 400 + 200) with one lookup instead of scanning three documents.

    For query execution, the tree is traversed from the root node as follows (see the sketch after this list):

  • If there are predicate(s) on the split dimension, select the child node(s) that satisfy the predicate(s).

  • If there is no predicate, but there is a group-by on the split dimension, select all child nodes except the Star-Node.

  • If there is neither a predicate nor a group-by on the split dimension, select the Star-Node if it exists, otherwise select all child nodes except the Star-Node.

  • Recursively repeat the previous steps until all leaf nodes are reached, or all predicates are satisfied.

  • Collect all the documents pointed to by the selected nodes.

    • If all predicates and group-bys are satisfied, pick the single aggregated document from each selected node.

    • Otherwise, collect all the documents in the document range of each selected node.
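    As an illustration of these steps, the following Python sketch (reusing the hypothetical Node structure from the construction sketch above) selects the nodes whose documents answer a query; predicates and group-by dimensions are given per level of the dimensions split order:

    def select_nodes(node, level, num_dims, predicates, group_by):
        """predicates: {level: set of allowed values}; group_by: set of levels."""
        if level == num_dims or not node.children:
            return [node]
        selected = []
        if level in predicates:
            # Predicate on the split dimension: follow only matching children.
            for value in predicates[level]:
                if value in node.children:
                    selected += select_nodes(node.children[value], level + 1,
                                             num_dims, predicates, group_by)
        elif level in group_by:
            # Group-by on the split dimension: all children except the Star-Node.
            for child in node.children.values():
                selected += select_nodes(child, level + 1, num_dims,
                                         predicates, group_by)
        elif node.star_child is not None:
            # Dimension irrelevant to the query: use the pre-aggregated Star-Node.
            selected += select_nodes(node.star_child, level + 1, num_dims,
                                     predicates, group_by)
        else:
            # No Star-Node was created for this dimension: visit all children.
            for child in node.children.values():
                selected += select_nodes(child, level + 1, num_dims,
                                         predicates, group_by)
        return selected

    For SELECT SUM(Impressions) FROM myTable WHERE Country = 'USA' (no group-by), this walks root, then the USA child, then Star-Nodes for Browser and Locale, landing on the single pre-aggregated [USA, *, *, 1200] document.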

    For comparison, the same example data set can also be served with an inverted index or with pre-aggregation. The inverted index maps each value to the ids of the matching documents:

    | Browser | Doc Ids |
    |---------|---------|
    | Firefox | 1, 5, 6 |
    | Chrome  | 0, 4    |
    | Safari  | 2, 3    |

    | Locale | Doc Ids    |
    |--------|------------|
    | en     | 0, 3, 4, 6 |
    | es     | 2, 5       |
    | fr     | 1          |

    Pre-aggregating SUM(Impressions) per Country gives:

    | Country | SUM(Impressions) |
    |---------|------------------|
    | CA      | 600              |
    | MX      | 400              |
    | USA     | 1200             |
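    For instance, the filter Browser = 'Firefox' AND Locale = 'en' is answered by intersecting the two posting lists: {1, 5, 6} and {0, 3, 4, 6} intersect to {6}. Below is a minimal sketch of that lookup, using Python sets where Pinot uses roaring bitmaps; the doc ids and values come from the example data set above.

    # Inverted index: value -> set of matching doc ids (sets stand in for bitmaps).
    inverted_index = {
        "Browser": {"Chrome": {0, 4}, "Firefox": {1, 5, 6}, "Safari": {2, 3}},
        "Locale":  {"en": {0, 3, 4, 6}, "es": {2, 5}, "fr": {1}},
    }
    impressions = [400, 200, 300, 100, 600, 200, 400]  # metric value per doc id

    def sum_impressions(filters):
        """AND together one value predicate per column, then sum the metric."""
        matching = None
        for column, value in filters.items():
            ids = inverted_index[column].get(value, set())
            matching = ids if matching is None else matching & ids
        return sum(impressions[i] for i in (matching or set()))

    print(sum_impressions({"Browser": "Firefox", "Locale": "en"}))  # 400 (doc 6)

    Note that unlike the star-tree's pre-aggregated documents, the aggregation itself still runs at query time over all matching rows.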

    Text search support

    This page talks about support for text search functionality in Pinot.

    hashtag
    Why do we need text search?

    Pinot supports super-fast query processing through its indexes on non-BLOB like columns. Queries with exact match filters are run efficiently through a combination of dictionary encoding, inverted index, and sorted index.

    For example, consider the following query:

    This query does exact matches on two columns of type STRING and INT respectively.

    For arbitrary text data that falls into the BLOB/CLOB territory, we need more than exact matches. Users are interested in doing regex, phrase, and fuzzy queries on BLOB-like data. Before 0.3.0, one had to use regexp_like to achieve this. However, regexp_like is scan-based, so it was not performant, and features like fuzzy search (edit-distance search) were not possible.

    In version 0.3.0, we added support for text indexes to efficiently do arbitrary search on STRING columns where each column value is a large BLOB of text. This can be achieved by using the new built-in function TEXT_MATCH.

    where <column_name> is the column the text index is created on, and <search_expression> is one of the supported expression types described later in this document.

    hashtag
    Sample Datasets

    Text search should ideally be used on STRING columns where doing standard filter operations (EQUALITY, RANGE, BETWEEN) doesn't fit the bill because each column value is a reasonably large blob of text.

    hashtag
    Apache Access Log

    Consider the following snippet from Apache access log. Each line in the log consists of arbitrary data (IP addresses, URLs, timestamps, symbols etc) and represents a column value. Data like this is a good candidate for doing text search.

    Let's say the following snippet of data is stored in the ACCESS_LOG_COL column of a Pinot table.

    A few examples of search queries on this data:

    Count the number of GET requests.

    Count the number of POST requests that have administrator in the URL (administrator/index)

    Count the number of POST requests that have a particular URL and handled by Firefox browser

    hashtag
    Resume text

    Consider another example of simple resume text. Each line in the file represents skill data from the resume of a different candidate.

    Let's say the following snippet of data is stored in the SKILLS_COL column of a Pinot table. Each line in the input text represents a column value.

    A few examples of search queries on this data:

    Count the number of candidates that have "machine learning" and "gpu processing" - a phrase search (more on this later in the document) where we look for exact matches of the phrases "machine learning" and "gpu processing", though not necessarily in the same order as in the original data.

    Count the number of candidates that have "distributed systems" and either 'Java' or 'C++' - a combination of searching for exact phrase "distributed systems" along with other terms.

    hashtag
    Query Log

    Consider a snippet from a log file containing SQL queries handled by a database. Each line (query) in the file represents a column value in the QUERY_LOG_COL column of a Pinot table.

    A few examples of search queries on this data:

    Count the number of queries that have GROUP BY

    Count the number of queries that have the SELECT count... pattern

    Count the number of queries that use BETWEEN filter on timestamp column along with GROUP BY

    Further sections in this document cover several concrete examples of each kind of query and a step-by-step guide on how to write text search queries in Pinot.

    hashtag
    Current restrictions

    Currently we support text search in a restricted manner. More specifically, we have the following constraints:

    • The column type should be STRING.

    • The column should be single-valued.

    • Co-existence of text index with other Pinot indexes is currently not supported.

    The last two restrictions will be relaxed in upcoming releases.

    hashtag
    Co-existence with other indexes

    Currently, a column in Pinot can be dictionary encoded or stored RAW. Furthermore, we can create inverted index on the dictionary encoded column. We can also create a sorted index on the dictionary encoded column.

    The text index is an addition to the types of per-column indexes users can create in Pinot. However, the current implementation supports a text index on RAW columns only; in other words, the column should not be dictionary encoded. As this constraint is relaxed in upcoming releases, a text index will also be creatable on a dictionary-encoded column that has other indexes (inverted, sorted, etc.).

    hashtag
    How to enable text index?

    Similar to other indexes, users can enable a text index on a column through the table config. As part of the text search feature, we have also introduced a new generic way of specifying per-column encoding and index information: the table config gains a new section named "fieldConfigList".

    circle-exclamation

    fieldConfigList is currently ONLY used for text indexes. Our plan is to migrate all other indexes to this model in upcoming releases and to modify the user documentation accordingly. Until then, please continue to specify other index info in the table config as you have done so far, and use fieldConfigList only for text indexes.

    "fieldConfigList" will be a new section in table config. It is essentially a list of per-column encoding and index information. In the above example, the list contains text index information for two columns text_col_1 and text_col_2. Each object in fieldConfigList contains the following information

    • name - Name of the column the text index is enabled on.

    • encodingType - As mentioned earlier, a column can be stored either RAW or dictionary encoded. Given the current restriction on text indexes, this should always be RAW.

    • indexType - This should be TEXT.

    Since the old way of specifying index info hasn't been removed yet, each column that has a text index should also be specified under noDictionaryColumns in tableIndexConfig:

    The above mechanism can be used to configure text indexes in the following scenarios:

    • Adding a new table with text index enabled on one or more columns.

    • Adding a new column with text index enabled to an existing table.

    • Enabling text index on an existing column.

    circle-info

    When using a text index, we recommend adding the indexed column to the noDictionaryColumns list to reduce unnecessary storage overhead.

    For instructions on that config property, see the documentation.

    hashtag
    Text Index Creation

    Once the text index is enabled on one or more columns through the table config, the segment generation code will pick up the config and automatically create the text index (per column). This is exactly how other indexes in Pinot are created.

    Text index is supported for both offline and real-time segments.

    hashtag
    Text parsing and tokenization

    The original text document (a value in the column with text index enabled) is parsed, tokenized and individual "indexable" terms are extracted. These terms are inserted into the index.

    Pinot's text index is built on top of Lucene. Lucene's standard English text tokenizer generally works well for most classes of text. Custom text parsers and tokenizers may be desirable for particular user requirements, so this could be made configurable on a per-column text index basis.

    There is a default set of "stop words" built into Pinot's text index: a set of high-frequency English words (listed in the snippet further down this page) that are excluded for search efficiency and index size. Any occurrence of these words will be ignored by the tokenizer during index creation and search.
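    As a rough illustration of what parsing, tokenization, and stop-word filtering produce, here is a naive Python stand-in; it is not Lucene's actual analyzer, and the stop-word set is abbreviated.

    import re

    STOP_WORDS = {"a", "an", "and", "the", "for", "in", "of", "on", "to", "it"}  # abbreviated

    def tokenize(document: str):
        """Lowercase, extract runs of word-like characters, drop stop words."""
        terms = re.findall(r"[a-z0-9+#]+", document.lower())
        return [t for t in terms if t not in STOP_WORDS]

    print(tokenize("Distributed systems, Java, C++, Machine learning"))
    # ['distributed', 'systems', 'java', 'c++', 'machine', 'learning']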

    In some cases, users might want to customize the set. A good example is when "IT" (Information Technology) appears in the text and collides with the stop word "it", or when certain context-specific words are not informative for search. To do this, one can configure the words in fieldConfig to include in, or exclude from, the default stop words:

    The words should be comma-separated and in lowercase. A word that appears in both lists ends up excluded.
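    A small Python sketch of the assumed semantics (exclusion is applied last, so a word present in both lists is excluded):

    DEFAULT_STOP_WORDS = {"a", "an", "and", "the", "it"}  # abbreviated

    def effective_stop_words(include: str, exclude: str) -> set:
        included = {w.strip() for w in include.split(",") if w.strip()}
        excluded = {w.strip() for w in exclude.split(",") if w.strip()}
        return (DEFAULT_STOP_WORDS | included) - excluded

    print(effective_stop_words("incl1, incl2, incl3", "it"))
    # {'a', 'an', 'and', 'the', 'incl1', 'incl2', 'incl3'} -- "it" is searchable again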

    hashtag
    Writing Text Search Queries

    A new built-in function TEXT_MATCH has been introduced for using text search in SQL/PQL.

    TEXT_MATCH(text_column_name, search_expression)

    • text_column_name - name of the column to do text search on.

    • search_expression - search query

    We can use TEXT_MATCH function as part of our queries in the WHERE clause. Examples:

    We can also use the TEXT_MATCH filter clause with other filter operators. For example:

    Combining multiple TEXT_MATCH filter clauses

    TEXT_MATCH can be used in the WHERE clause of all kinds of queries supported by Pinot:

    • Selection query which projects one or more columns

      • User can also include the text column name in select list

    • Aggregation query

    • Aggregation GROUP BY query

    The search expression (the second argument to the TEXT_MATCH function) is the query string Pinot uses to perform text search against the column's text index. The following expression types are supported:

    hashtag
    Phrase Query

    This query is used to do an exact match of a given phrase. Exact match implies that the terms in the user-specified phrase should appear in exactly the same order in the original text document. Note that "document" here refers to the column value.

    Let's take the example of resume text data containing 14 documents to walk through queries. The data is stored in a column named SKILLS_COL and we have created a text index on this column.

    Example 1 - Search in the SKILLS_COL column for documents where each matching document MUST contain the phrase "distributed systems" as is.

    The search expression is '\"Distributed systems\"'

    • The search expression is always specified within single quotes '<your expression>'

    • Since we are doing a phrase search, the phrase should be specified within double quotes inside the single quotes and the double quotes should be escaped

      • '\"<your phrase>\"'

    The above query will match the following documents:

    But it won't match the following document:

    This is because the phrase query looks for the phrase occurring in the original document "as is". The terms, as specified by the user in the phrase, must appear in exactly the same order in the original document for it to be considered a match.

    NOTE: Matching is always done in a case-insensitive manner.

    Example 2 - Search in the SKILLS_COL column for documents where each matching document MUST contain the phrase "query processing" as is.

    The above query will match the following documents:

    hashtag
    Term Query

    Term queries are used to search for individual terms.

    Example 3 - Search in the SKILLS_COL column for documents where each matching document MUST contain the term 'java'.

    As mentioned earlier, the search expression is always within single quotes. However, since this is a term query, we don't need double quotes within the single quotes.

    hashtag
    Composite Query using Boolean Operators

    Boolean operators AND and OR are supported, and we can use them to build a composite query. Boolean operators can combine phrase and term queries in any arbitrary manner.

    Example 4 - Search in the SKILLS_COL column for documents where each matching document MUST contain the phrases "distributed systems" and "tensor flow". This combines two phrases using the AND boolean operator.

    The above query will match the following documents:

    Example 5 - Search in the SKILLS_COL column for documents where each document MUST contain the phrase "machine learning" and the terms 'gpu' and 'python'. This combines a phrase and two terms using the AND boolean operator.

    The above query will match the following documents:

    When using Boolean operators to combine term(s) and phrase(s) or both, please note that:

    • The matching document can contain the terms and phrases in any order.

    • The matching document may not have the terms adjacent to each other (if that is required, use an appropriate phrase query for the terms concerned).

    Use of the OR operator is implicit. In other words, if phrase(s) and term(s) are not combined using the AND operator in the search expression, the OR operator is used by default:

    Example 6 - Search in the SKILLS_COL column for documents where each document MUST contain ANY one of:

    • phrase "distributed systems" OR

    • term 'java' OR

    • term 'C++'.

    We can also do grouping using parentheses:

    Example 7 - Search in the SKILLS_COL column for documents where each document MUST contain:

    • phrase "distributed systems" AND

    • at least one of the terms Java or C++

    In the query below, we group the terms Java and C++ without any operator, which implies the use of OR. The root operator AND combines this group with the phrase "distributed systems".

    hashtag
    Prefix Query

    Prefix searches can be done in the context of a single term. Prefix matches cannot be used within phrases.

    Example 8 - Search in the SKILLS_COL column for documents where each document MUST contain text like stream, streaming, streams, etc.

    The above query will match the following documents:

    hashtag
    Regular Expression Query

    Phrase and term queries work on the fundamental logic of looking up the terms (aka tokens) in the text index. The original text document (a value in the column with text index enabled) is parsed, tokenized and individual "indexable" terms are extracted. These terms are inserted into the index.

    Based on the nature of original text and how the text is segmented into tokens, it is possible that some terms don't get indexed individually. In such cases, it is better to use regular expression queries on the text index.

    Consider a server log as an example where we want to look for exceptions. A regex query is suitable for this scenario, as it is unlikely that 'exception' is present as an individual indexed token.

    The syntax of a regex query is slightly different from the queries mentioned earlier: the regular expression is written between a pair of forward slashes (/).

    The above query will match any text document containing 'exception'.

    hashtag
    Deciding Query Types

    Generally, a combination of phrase and term queries using boolean operators and grouping should allow us to build a complex text search query expression.

    The key thing to remember is that phrases should be used when the order of terms in the document is important and when separating the phrase into individual terms doesn't make sense from the end user's perspective.

    An example would be phrase "machine learning".

    However, if we are searching for documents matching the terms Java and C++, using the phrase query "Java C++" will actually produce partial (possibly empty) results, since we are then relying on the user having specified these skills in exactly that order, adjacent to each other, in the resume text.

    A term query using the boolean AND operator is more appropriate for such cases:
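    To see why, here is a naive Python illustration of the two matching semantics. Lucene evaluates phrases using position-aware postings in the index, not a scan; this only shows the logic.

    def phrase_match(doc_terms, phrase):
        """True if `phrase` appears in `doc_terms` contiguously and in order."""
        n = len(phrase)
        return any(doc_terms[i:i + n] == phrase
                   for i in range(len(doc_terms) - n + 1))

    def and_match(doc_terms, terms):
        """True if every term appears somewhere in the document, any order."""
        return all(t in doc_terms for t in terms)

    doc = ["java", "c++", "distributed", "systems"]
    print(phrase_match(doc, ["java", "c++"]))   # True: adjacent and in order
    print(phrase_match(doc, ["c++", "java"]))   # False: order matters
    print(and_match(doc, ["c++", "java"]))      # True: any order, any position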

    SELECT SUM(Impressions) 
    FROM myTable 
    WHERE Country = 'USA' 
    AND Browser = 'Chrome' 
    GROUP BY Locale
    "tableIndexConfig": {
      "starTreeIndexConfigs": [{
        "dimensionsSplitOrder": [
          "Country",
          "Browser",
          "Locale"
        ],
        "skipStarNodeCreationForDimensions": [
        ],
        "functionColumnPairs": [
          "SUM__Impressions"
        ],
        "maxLeafRecords": 10000
      }],
      ...
    }
    SELECT COUNT(*) 
    FROM Foo 
    WHERE STRING_COL = 'ABCDCD' 
    AND INT_COL > 2000


    | Search Expression Type | Example |
    |------------------------|---------|
    | Phrase query  | TEXT_MATCH(<column_name>, '"distributed system"') |
    | Term query    | TEXT_MATCH(<column_name>, 'Java') |
    | Boolean query | TEXT_MATCH(<column_name>, 'Java AND c++') |
    | Prefix query  | TEXT_MATCH(<column_name>, 'stream*') |
    | Regex query   | TEXT_MATCH(<column_name>, '/Exception.*/') |

    SELECT COUNT(*) 
    FROM Foo 
    WHERE TEXT_MATCH (<column_name>, '<search_expression>')
    109.169.248.247 - - [12/Dec/2015:18:25:11 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-
    109.169.248.247 - - [12/Dec/2015:18:25:11 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "http://almhuette-raith.at/administrator/" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    46.72.177.4 - - [12/Dec/2015:18:31:08 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    46.72.177.4 - - [12/Dec/2015:18:31:08 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "http://almhuette-raith.at/administrator/" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    83.167.113.100 - - [12/Dec/2015:18:31:25 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    83.167.113.100 - - [12/Dec/2015:18:31:25 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "http://almhuette-raith.at/administrator/" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    95.29.198.15 - - [12/Dec/2015:18:32:10 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    95.29.198.15 - - [12/Dec/2015:18:32:11 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "http://almhuette-raith.at/administrator/" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    109.184.11.34 - - [12/Dec/2015:18:32:56 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    109.184.11.34 - - [12/Dec/2015:18:32:56 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "http://almhuette-raith.at/administrator/" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    91.227.29.79 - - [12/Dec/2015:18:33:51 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(ACCESS_LOG_COL, 'GET')
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(ACCESS_LOG_COL, 'post AND administrator AND index')
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(ACCESS_LOG_COL, 'post AND administrator AND index AND firefox')
    Distributed systems, Java, C++, Go, distributed query engines for analytics and data warehouses, Machine learning, spark, Kubernetes, transaction processing
    Java, Python, C++, Machine learning, building and deploying large scale production systems, concurrency, multi-threading, CPU processing
    C++, Python, Tensor flow, database kernel, storage, indexing and transaction processing, building large scale systems, Machine learning
    Amazon EC2, AWS, hadoop, big data, spark, building high performance scalable systems, building and deploying large scale production systems, concurrency, multi-threading, Java, C++, CPU processing
    Distributed systems, database development, columnar query engine, database kernel, storage, indexing and transaction processing, building large scale systems
    Distributed systems, Java, realtime streaming systems, Machine learning, spark, Kubernetes, distributed storage, concurrency, multi-threading
    CUDA, GPU, Python, Machine learning, database kernel, storage, indexing and transaction processing, building large scale systems
    Distributed systems, Java, database engine, cluster management, docker image building and distribution
    Kubernetes, cluster management, operating systems, concurrency, multi-threading, apache airflow, Apache Spark,
    Apache spark, Java, C++, query processing, transaction processing, distributed storage, concurrency, multi-threading, apache airflow
    Big data stream processing, Apache Flink, Apache Beam, database kernel, distributed query engines for analytics and data warehouses
    CUDA, GPU processing, Tensor flow, Pandas, Python, Jupyter notebook, spark, Machine learning, building high performance scalable systems
    Distributed systems, Apache Kafka, publish-subscribe, building and deploying large scale production systems, concurrency, multi-threading, C++, CPU processing, Java
    Realtime stream processing, publish subscribe, columnar processing for data warehouses, concurrency, Java, multi-threading, C++,
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"Machine learning" AND "gpu processing"')
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" AND (Java C++)')
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560988800000 AND 1568764800000 GROUP BY dimensionCol3 TOP 2500
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560988800000 AND 1568764800000 GROUP BY dimensionCol3 TOP 2500
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1545436800000 AND 1553212800000 GROUP BY dimensionCol3 TOP 2500
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1537228800000 AND 1537660800000 GROUP BY dimensionCol3 TOP 2500
    SELECT dimensionCol2, dimensionCol4, timestamp, dimensionCol5, dimensionCol6 FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1561366800000 AND 1561370399999 AND dimensionCol3 = 2019062409 LIMIT 10000
    SELECT dimensionCol2, dimensionCol4, timestamp, dimensionCol5, dimensionCol6 FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1563807600000 AND 1563811199999 AND dimensionCol3 = 2019072215 LIMIT 10000
    SELECT dimensionCol2, dimensionCol4, timestamp, dimensionCol5, dimensionCol6 FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1563811200000 AND 1563814799999 AND dimensionCol3 = 2019072216 LIMIT 10000
    SELECT dimensionCol2, dimensionCol4, timestamp, dimensionCol5, dimensionCol6 FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1566327600000 AND 1566329400000 AND dimensionCol3 = 2019082019 LIMIT 10000
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560834000000 AND 1560837599999 AND dimensionCol3 = 2019061805 LIMIT 0
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560870000000 AND 1560871800000 AND dimensionCol3 = 2019061815 LIMIT 0
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560871800001 AND 1560873599999 AND dimensionCol3 = 2019061815 LIMIT 0
    SELECT count(dimensionCol2) FROM FOO WHERE dimensionCol1 = 18616904 AND timestamp BETWEEN 1560873600000 AND 1560877199999 AND dimensionCol3 = 2019061816 LIMIT 0
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(QUERY_LOG_COL, '"group by"')
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(QUERY_LOG_COL, '"select count"')
    SELECT COUNT(*) 
    FROM MyTable 
    WHERE TEXT_MATCH(QUERY_LOG_COL, '"timestamp between" AND "group by"')
    "fieldConfigList":[
      {
         "name":"text_col_1",
         "encodingType":"RAW",
         "indexType":"TEXT"
      },
      {
         "name":"text_col_2",
         "encodingType":"RAW",
         "indexType":"TEXT"
      }
    ]
    "tableIndexConfig": {
       "noDictionaryColumns": [
         "text_col_1",
         "text_col_2"
     ]}
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it",
    "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "than", "there", "these", 
    "they", "this", "to", "was", "will", "with", "those"
    "fieldConfigList":[
      {
         "name":"text_col_1",
         "encodingType":"RAW",
         "indexType":"TEXT",
         "properties": {
            "stopWordInclude": "incl1, incl2, incl3",
            "stopWordExclude": "it"
         }
      }
    ]
    SELECT COUNT(*) FROM Foo WHERE TEXT_MATCH(...)
    SELECT * FROM Foo WHERE TEXT_MATCH(...)
    SELECT COUNT(*) FROM Foo WHERE TEXT_MATCH(...) AND some_other_column_1 > 20000
    SELECT COUNT(*) FROM Foo WHERE TEXT_MATCH(...) AND some_other_column_1 > 20000 AND some_other_column_2 < 100000
    SELECT COUNT(*) FROM Foo WHERE TEXT_MATCH(text_col_1, ....) AND TEXT_MATCH(text_col_2, ...)
    Java, C++, worked on open source projects, coursera machine learning
    Machine learning, Tensor flow, Java, Stanford university,
    Distributed systems, Java, C++, Go, distributed query engines for analytics and data warehouses, Machine learning, spark, Kubernetes, transaction processing
    Java, Python, C++, Machine learning, building and deploying large scale production systems, concurrency, multi-threading, CPU processing
    C++, Python, Tensor flow, database kernel, storage, indexing and transaction processing, building large scale systems, Machine learning
    Amazon EC2, AWS, hadoop, big data, spark, building high performance scalable systems, building and deploying large scale production systems, concurrency, multi-threading, Java, C++, CPU processing
    Distributed systems, database development, columnar query engine, database kernel, storage, indexing and transaction processing, building large scale systems
    Distributed systems, Java, realtime streaming systems, Machine learning, spark, Kubernetes, distributed storage, concurrency, multi-threading
    CUDA, GPU, Python, Machine learning, database kernel, storage, indexing and transaction processing, building large scale systems
    Distributed systems, Java, database engine, cluster management, docker image building and distribution
    Kubernetes, cluster management, operating systems, concurrency, multi-threading, apache airflow, Apache Spark,
    Apache spark, Java, C++, query processing, transaction processing, distributed storage, concurrency, multi-threading, apache airflow
    Big data stream processing, Apache Flink, Apache Beam, database kernel, distributed query engines for analytics and data warehouses
    CUDA, GPU processing, Tensor flow, Pandas, Python, Jupyter notebook, spark, Machine learning, building high performance scalable systems
    Distributed systems, Apache Kafka, publish-subscribe, building and deploying large scale production systems, concurrency, multi-threading, C++, CPU processing, Java
    Realtime stream processing, publish subscribe, columnar processing for data warehouses, concurrency, Java, multi-threading, C++,
    C++, Java, Python, realtime streaming systems, Machine learning, spark, Kubernetes, transaction processing, distributed storage, concurrency, multi-threading, apache airflow
    Databases, columnar query processing, Apache Arrow, distributed systems, Machine learning, cluster management, docker image building and distribution
    Database engine, OLAP systems, OLTP transaction processing at large scale, concurrency, multi-threading, GO, building large scale systems
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"Distributed systems"')
    Distributed systems, Java, C++, Go, distributed query engines for analytics and data warehouses, Machine learning, spark, Kubernetes, transaction processing
    Distributed systems, database development, columnar query engine, database kernel, storage, indexing and transaction processing, building large scale systems
    Distributed systems, Java, realtime streaming systems, Machine learning, spark, Kubernetes, distributed storage, concurrency, multi-threading
    Distributed systems, Java, database engine, cluster management, docker image building and distribution
    Distributed systems, Apache Kafka, publish-subscribe, building and deploying large scale production systems, concurrency, multi-threading, C++, CPU processing, Java
    Databases, columnar query processing, Apache Arrow, distributed systems, Machine learning, cluster management, docker image building and distribution
    Distributed data processing, systems design experience
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"query processing"')
    Apache spark, Java, C++, query processing, transaction processing, distributed storage, concurrency, multi-threading, apache airflow
    Databases, columnar query processing, Apache Arrow, distributed systems, Machine learning, cluster management, docker image building and distribution
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, 'Java')
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"Machine learning" AND "Tensor Flow"')
    Machine learning, Tensor flow, Java, Stanford university,
    C++, Python, Tensor flow, database kernel, storage, indexing and transaction processing, building large scale systems, Machine learning
    CUDA, GPU processing, Tensor flow, Pandas, Python, Jupyter notebook, spark, Machine learning, building high performance scalable systems
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"Machine learning" AND gpu AND python')
    CUDA, GPU, Python, Machine learning, database kernel, storage, indexing and transaction processing, building large scale systems
    CUDA, GPU processing, Tensor flow, Pandas, Python, Jupyter notebook, spark, Machine learning, building high performance scalable systems
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" Java C++')
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, '"distributed systems" AND (Java C++)')
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE TEXT_MATCH(SKILLS_COL, 'stream*')
    Distributed systems, Java, realtime streaming systems, Machine learning, spark, Kubernetes, distributed storage, concurrency, multi-threading
    Big data stream processing, Apache Flink, Apache Beam, database kernel, distributed query engines for analytics and data warehouses
    Realtime stream processing, publish subscribe, columnar processing for data warehouses, concurrency, Java, multi-threading, C++,
    C++, Java, Python, realtime streaming systems, Machine learning, spark, Kubernetes, transaction processing, distributed storage, concurrency, multi-threading, apache airflow
    SELECT SKILLS_COL 
    FROM MyTable 
    WHERE text_match(SKILLS_COL, '/.*Exception/')
    TEXT_MATCH(column, '"machine learning"')
    TEXT_MATCH(column, '"Java C++"')
    TEXT_MATCH(column, 'Java AND C++')