0.6.0

Summary

This release introduced some excellent new features, including upsert, tiered storage, the pinot-spark-connector, support for the HAVING clause, more validations on table config and schema, support for ordinals in the GROUP BY and ORDER BY clauses, array transform functions, a segment-metadata-only push job type, and new APIs such as updating instance tags and a new health check endpoint. It also contains many key bug fixes. See details below.

The release was cut from the following commit: e5c9bec, and the following cherry-picks: d033a11.
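To make the query-side additions in the summary concrete (HAVING support and ordinals in GROUP BY and ORDER BY), here is a minimal sketch that posts a SQL query to a broker over the standard /query/sql endpoint. The table and column names (myTable, country, clicks) are hypothetical, and the broker address assumes a default local quick-start setup.

```python
import requests

# Broker address for a default local quick-start; adjust for your deployment (assumption).
BROKER = "http://localhost:8099"

# GROUP BY 1 and ORDER BY 2 refer to positions in the SELECT list (ordinals),
# and HAVING filters on the aggregated value; both are supported as of 0.6.0.
# Table and column names below are hypothetical.
sql = """
SELECT country, SUM(clicks) AS total_clicks
FROM myTable
GROUP BY 1
HAVING SUM(clicks) > 100
ORDER BY 2 DESC
LIMIT 10
"""

# Post the query to the broker's SQL endpoint and print the result table.
resp = requests.post(f"{BROKER}/query/sql", json={"sql": sql}, timeout=10)
resp.raise_for_status()
print(resp.json().get("resultTable"))
```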

Notable New Features

  • Tiered storage ()

  • Upsert feature (, , , , )

  • Pre-generate aggregation functions in QueryContext ()

  • Adding controller healthcheck endpoint: /health (), illustrated in the sketch after this list

  • Add pinot-spark-connector ()

  • Support multi-value non-dictionary group by ()

  • Support type conversion for all scalar functions ()

  • Add additional datetime functionality ()

  • Support post-aggregation in ORDER-BY ()

  • Support post-aggregation in SELECT ()

  • Add RANGE FilterKind to support merging ranges for SQL ()

  • Add HAVING support (#5889)

  • Support for exact distinct count for non int data types ()

  • Add max qps bucket count ()

  • Add Range Indexing support for raw values ()

  • Add IdSet and IdSetAggregationFunction ()

  • [Deepstore by-pass] Add a Deepstore bypass integration test with minor bug fixes. ()

  • Add Hadoop counters for detecting schema mismatch ()

  • Add RawThetaSketchAggregationFunction ()

  • Instance API to directly updateTags ()

  • Add streaming query handler ()

  • Add InIdSetTransformFunction ()

  • Add ingestion descriptor in the header ()

  • Zookeeper put api ()

  • Feature/#5390 segment indexing reload status api ()

  • Segment processing framework ()

  • Support streaming query in QueryExecutor ()

  • Add list of allowed tables for emitting table level metrics ()

  • Add FilterOptimizer which supports optimizing both PQL and SQL query filter ()

  • Adding push job type of segment metadata only mode ()

  • Minion taskExecutor for RealtimeToOfflineSegments task (, )

  • Adding array transform functions: array_average, array_max, array_min, array_sum ()

  • Allow modifying/removing existing star-trees during segment reload ()

  • Implement off-heap bloom filter reader ()

  • Support for multi-threaded Group By reducer for SQL. ()

  • Add OnHeapGuavaBloomFilterReader ()

  • Support using ordinals in GROUP BY and ORDER BY clause ()

  • Merge common APIs for Dictionary ()

  • Add table level lock for segment upload (#6165)

  • Added recursive functions validation check for group by ()

  • Add StrictReplicaGroupInstanceSelector ()

  • Add IN_SUBQUERY support ()

  • Add IN_PARTITIONED_SUBQUERY support ()

  • Some UI features (, , , )
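
A minimal sketch of the new controller health-check endpoint listed above, assuming a controller running locally on the default port 9000 (adjust the address for your deployment):

```python
import requests

# Controller address for a default local quick-start; adjust for your deployment (assumption).
CONTROLLER = "http://localhost:9000"

# 0.6.0 adds a controller health-check endpoint at /health.
resp = requests.get(f"{CONTROLLER}/health", timeout=5)
print(resp.status_code, resp.text)  # a healthy controller returns HTTP 200
```

A 200 response indicates the controller is up. The same release also adds other operational endpoints, such as the instance tag update API listed above.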

Special notes

  • Brokers should be upgraded before servers in order to keep backward compatibility:

    • Change group key delimiter from '\t' to '\0' ()

    • Support for exact distinct count for non int data types ()

  • Pinot Components have to be deployed in the following order:

    (PinotServiceManager -> Bootstrap services in role ServiceRole.CONTROLLER -> All remaining bootstrap services in parallel)

    • Starts Broker and Server in parallel when using ServiceManager ()

  • New settings introduced and old ones deprecated:

    • Make real-time threshold property names less ambiguous ()

    • Change Signature of Broker API in Controller ()

  • This aggregation function is still in beta. The change affects the format of data sent from server to broker, so it works only when both broker and server are upgraded to the new version:

    • Enhance DistinctCountThetaSketchAggregationFunction ()

Major Bug fixes

  • Improve performance of DistinctCountThetaSketch by eliminating empty sketches and unions. ()

  • Enhance VarByteChunkSVForwardIndexReader to directly read from data buffer for uncompressed data ()

  • Fixing backward-compatible issue of schema fetch call ()

  • Fix race condition in MetricsHelper ()

  • Fixing the race condition that segment finished before ControllerLeaderLocator created. ()

  • Fix CSV and JSON converter on BYTES column ()

  • Fixing the issue that transform UDFs are parsed as function name 'OTHER', not the real function names ()

  • Incorporating embedded exception while trying to fetch stream offset ()

  • Use query timeout for planning phase ()

  • Add null check while fetching the schema ()

  • Validate timeColumnName when adding/updating schema/tableConfig ()

  • Handle the partitioning mismatch between table config and stream ()

  • Fix built-in virtual columns for immutable segment ()

  • Refresh the routing when real-time segment is committed ()

  • Add support for Decimal with Precision Sum aggregation ()

  • Fixing the calls to Helix to throw exception if zk connection is broken ()

  • Allow modifying/removing existing star-trees during segment reload ()

  • Add max length support in schema builder ()

  • Enhance star-tree to skip matching-all predicate on non-star-tree dimension ()

Backward Incompatible Changes

  • Make real-time threshold property names less ambiguous ()

  • Enhance DistinctCountThetaSketchAggregationFunction ()

  • Deep Extraction Support for ORC, Thrift, and ProtoBuf Records ()
