Note: The Docker instructions on this page are still a work in progress (WIP).
This example assumes you have set up your cluster using Pinot in Docker.
Data Stream
First, we need to set up a stream. Pinot has out-of-the-box real-time ingestion support for Kafka; other streams can be plugged in as well (see Pluggable Streams).
Let's set up a demo Kafka cluster locally and create a sample topic, transcript-topic, as shown in the sketch below.
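A minimal sketch, assuming the official apache/kafka Docker image and a pinot-demo Docker network from the Pinot in Docker setup (the network, container, and host names here are assumptions; adjust them to your environment):

```bash
# Start a single-node Kafka broker (KRaft mode) on the same Docker
# network as the Pinot containers. The network name "pinot-demo" and
# the container name "kafka" are assumptions from a typical
# Pinot-in-Docker setup.
docker run -d --name kafka --hostname kafka --network pinot-demo \
  -e KAFKA_NODE_ID=1 \
  -e KAFKA_PROCESS_ROLES=broker,controller \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://localhost:9093 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1 \
  -e KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1 \
  apache/kafka:latest

# Create the sample topic with a single partition.
docker exec kafka /opt/kafka/bin/kafka-topics.sh \
  --create --topic transcript-topic \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1
```

The broker advertises itself as kafka:9092, so other containers on the pinot-demo network (including the Pinot components) can reach it at that address.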
Creating a schema
If you followed Batch upload sample data, you have already pushed a schema for your sample table. If not, see Creating a schema to learn how to create one for your sample data.
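For reference, here is a sketch of what the transcript schema could look like; the field names and types are assumptions based on the sample transcript data, so adapt them to your own dataset:

```json
{
  "schemaName": "transcript",
  "dimensionFieldSpecs": [
    { "name": "studentID", "dataType": "INT" },
    { "name": "firstName", "dataType": "STRING" },
    { "name": "lastName", "dataType": "STRING" },
    { "name": "gender", "dataType": "STRING" },
    { "name": "subject", "dataType": "STRING" }
  ],
  "metricFieldSpecs": [
    { "name": "score", "dataType": "FLOAT" }
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "timestampInEpoch",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}
```

Save it somewhere a Pinot container can read, e.g. /tmp/pinot-quick-start/transcript-schema.json (an illustrative path); we upload it together with the table configuration below.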
Creating a table configuration
If you followed Batch upload sample data, you have already pushed an offline table and schema. To create a real-time table for the sample, use the table configuration below for the transcript table. For a more detailed overview of tables, see Table.
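The following is a sketch of a real-time table configuration for the transcript table. The topic name and broker address match the demo Kafka setup above; the decoder and consumer factory classes are the ones shipped with Pinot's Kafka 2.x stream plugin, and the flush thresholds are illustrative values:

```json
{
  "tableName": "transcript",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "timestampInEpoch",
    "schemaName": "transcript",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tenants": {},
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.topic.name": "transcript-topic",
      "stream.kafka.broker.list": "kafka:9092",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "realtime.segment.flush.threshold.rows": "50000",
      "realtime.segment.flush.threshold.time": "3600000"
    }
  },
  "metadata": {}
}
```

Assuming the schema and table configuration were saved under /tmp/pinot-quick-start (an illustrative path) and your controller container is named pinot-quickstart, you can push both with the pinot-admin AddTable command:

```bash
# Upload the schema and real-time table config to the Pinot controller.
# The network, file paths, and controller host are assumptions; point
# -controllerHost at your own Pinot controller.
docker run --rm --network pinot-demo \
  -v /tmp/pinot-quick-start:/tmp/pinot-quick-start \
  apachepinot/pinot:latest AddTable \
  -schemaFile /tmp/pinot-quick-start/transcript-schema.json \
  -tableConfigFile /tmp/pinot-quick-start/transcript-table-realtime.json \
  -controllerHost pinot-quickstart \
  -controllerPort 9000 \
  -exec
```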
As soon as data flows into the stream, the Pinot table consumes it and the data becomes available for querying. Browse to the Query Console running in your Pinot instance (we use localhost in this link as an example) to examine the real-time data.
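To try this end to end, you can publish a few JSON records to the topic with Kafka's console producer; the records below are made-up samples that match the assumed transcript schema:

```bash
# Publish two sample records (fabricated values matching the assumed
# schema) to transcript-topic via the console producer inside the
# Kafka container started earlier.
docker exec -i kafka /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic transcript-topic <<'EOF'
{"studentID":205,"firstName":"Natalie","lastName":"Jones","gender":"Female","subject":"Maths","score":3.8,"timestampInEpoch":1571900400000}
{"studentID":207,"firstName":"Bob","lastName":"Lewis","gender":"Male","subject":"English","score":3.2,"timestampInEpoch":1571986800000}
EOF
```

A query such as SELECT * FROM transcript LIMIT 10 in the Query Console should then return the freshly ingested rows.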