Import Data

This page lists the options for importing data into Pinot, with links to detailed instructions and examples.

There are multiple options for importing data into Pinot. The pages in this section provide step-by-step instructions for importing records into Pinot using its plugin architecture. The intent is to get you up and running with imported data as quickly as possible.

Pinot supports multiple file input formats without needing to change anything other than the file name. Each example imports a ready-made dataset so you can see how things work without needing to find or create your own dataset.

Pinot Batch Ingestion

These guides show you how to import data from popular big data platforms.
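As a point of reference, the sketch below shows roughly what a minimal batch ingestion job spec can look like when using the standalone execution framework against a local controller. All values (paths, file pattern, table name, controller URI) are placeholders to adapt to your own setup, and the spec is abbreviated rather than exhaustive.

```yaml
# Illustrative standalone batch ingestion job spec; paths, table name,
# and controller URI are placeholders.
executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: '/tmp/pinot/rawdata'
includeFileNamePattern: 'glob:**/*.csv'
outputDirURI: '/tmp/pinot/segments'
overwriteOutput: true
pinotFSSpecs:
  - scheme: file
    className: org.apache.pinot.spi.filesystem.LocalPinotFS
recordReaderSpec:
  dataFormat: 'csv'
  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
tableSpec:
  tableName: 'myTable'
pinotClusterSpecs:
  - controllerURI: 'http://localhost:9000'
```

A spec like this is typically launched with `bin/pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /path/to/job-spec.yaml`; Spark and Hadoop runs use the same spec format with different job runner class names.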

Pinot Stream Ingestion

This guide shows you how to import data using stream ingestion from Apache Kafka topics.
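As a rough sketch, a real-time table consumes from Kafka through the `streamConfigs` block of its table config. The excerpt below assumes a local Kafka broker, a topic named `my-topic`, and JSON-encoded messages, and it omits the rest of the table config (schema reference, time column, and so on).

```json
{
  "tableName": "myTable",
  "tableType": "REALTIME",
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "my-topic",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  }
}
```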

This guide shows you how to import data using stream ingestion with upsert.
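At a high level, upsert is enabled on a real-time table by adding an `upsertConfig` block to the table config and declaring `primaryKeyColumns` in the schema, so that later records with the same key replace earlier ones at query time. The snippet below is a sketch with placeholder names, not a complete table config.

```json
{
  "tableName": "myTable",
  "tableType": "REALTIME",
  "upsertConfig": {
    "mode": "FULL"
  },
  "routing": {
    "instanceSelectorType": "strictReplicaGroup"
  }
}
```

The matching schema would declare the key, for example `"primaryKeyColumns": ["event_id"]`, where `event_id` is a placeholder column name.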

This guide shows you how to import data using stream ingestion with deduplication.
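Deduplication is configured in a similar way: the table keeps only the first record seen for each primary key. The sketch below assumes a real-time table whose schema already declares `primaryKeyColumns`.

```json
{
  "tableName": "myTable",
  "tableType": "REALTIME",
  "dedupConfig": {
    "dedupEnabled": true,
    "hashFunction": "NONE"
  }
}
```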

This guide shows you how to import data using stream ingestion with CLP (Compressed Log Processor).

Pinot file systems

By default, Pinot does not come with a storage layer, so ingested data will be lost if the system crashes. To persistently store the generated segments, add a deep store by updating the controller and server configs. See File systems for details and the related configs.
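As an illustration, the properties below sketch what an S3 deep store configuration can look like on the controller; the bucket, region, and paths are placeholders, and servers need the analogous `pinot.server.storage.factory.*` and segment fetcher settings.

```properties
# controller.conf (illustrative S3 deep store settings; bucket and region are placeholders)
controller.data.dir=s3://my-bucket/pinot-segments
controller.local.temp.dir=/tmp/pinot-controller-tmp
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```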

These guides show you how to import data and persist it in these file systems.

Pinot input formats

This guide shows you how to import data from various Pinot-supported input formats.
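Switching input formats in a batch job is largely a matter of pointing the job spec's `recordReaderSpec` (and `includeFileNamePattern`) at the right plugin. The snippets below are illustrative alternatives; only one `recordReaderSpec` appears in a given job spec.

```yaml
# CSV
recordReaderSpec:
  dataFormat: 'csv'
  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'

# JSON
recordReaderSpec:
  dataFormat: 'json'
  className: 'org.apache.pinot.plugin.inputformat.json.JSONRecordReader'

# Parquet
recordReaderSpec:
  dataFormat: 'parquet'
  className: 'org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader'
```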

This guide shows you how to handle the complex type in the ingested data, such as map and array.
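For example, nested fields and arrays can be flattened or unnested through the table's `ingestionConfig`. The sketch below assumes an input record with an array field called `orders` (a placeholder name).

```json
{
  "ingestionConfig": {
    "complexTypeConfig": {
      "delimiter": ".",
      "fieldsToUnnest": ["orders"],
      "collectionNotUnnestedToJson": "NON_PRIMITIVE"
    }
  }
}
```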

This guide shows you how to handle records with dynamic schemas, like JSON log events.

Reloading and uploading existing Pinot segments

This guide shows you how to reload Pinot segments from your deep store.
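Reloads are usually triggered through the controller REST API. The calls below assume a controller at `localhost:9000` and a table named `myTable_OFFLINE`; the segment name is a placeholder.

```bash
# Reload every segment of the table
curl -X POST "http://localhost:9000/segments/myTable_OFFLINE/reload"

# Reload a single segment
curl -X POST "http://localhost:9000/segments/myTable_OFFLINE/myTable_OFFLINE_0/reload"
```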

This guide shows you how to upload Pinot segments from an old Pinot instance that has since been shut down.
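One way to do this is the admin tool's `UploadSegment` command, pointed at the new cluster's controller; the host, port, directory, and table name below are placeholders.

```bash
bin/pinot-admin.sh UploadSegment \
  -controllerHost localhost \
  -controllerPort 9000 \
  -segmentDir /path/to/backed-up/segments \
  -tableName myTable
```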