# Plugins

Starting with the 0.3.0 release, Pinot supports a plug-and-play architecture: it can be easily extended to support new tools, such as streaming services, storage systems, input formats, and metrics providers.

![](/files/-M3ADvQeYPBSi7dQECkL)

Plugins are organized into folders based on their purpose. Pinot groups its plugins into **eleven plugin families**, each targeting a specific extensibility need. The table below summarizes each family, its SPI module, and the implementations that ship with Pinot.

## Plugin Families at a Glance

| Plugin Family            | SPI Interface / Module                     | Built-in Implementations                                                                                                      |
| ------------------------ | ------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------- |
| **Input Format**         | `RecordReader` / `StreamMessageDecoder`    | Avro, CSV, JSON, ORC, Parquet, Thrift, Protobuf, Arrow, CLP-Log, Confluent Avro, Confluent JSON, Confluent Protobuf           |
| **Filesystem**           | `PinotFS`                                  | S3, GCS, HDFS, ADLS                                                                                                           |
| **Stream Ingestion**     | `StreamConsumerFactory`                    | Kafka 3.0, Kafka 4.0, Kinesis, Pulsar                                                                                         |
| **Batch Ingestion**      | `IngestionJobRunner`                       | Standalone, Hadoop, Spark 3                                                                                                   |
| **Metrics**              | `PinotMetricsFactory`                      | Dropwizard, Yammer, Compound                                                                                                  |
| **Segment Writer**       | `SegmentWriter`                            | File-based                                                                                                                    |
| **Segment Uploader**     | `SegmentUploader`                          | Default                                                                                                                       |
| **Minion Tasks**         | `PinotTaskGenerator` / `PinotTaskExecutor` | MergeRollup, Purge, RealtimeToOfflineSegments, SegmentGenerationAndPush, UpsertCompaction, UpsertCompactMerge, RefreshSegment |
| **Environment**          | `PinotEnv`                                 | Azure                                                                                                                         |
| **Time Series Language** | `TimeSeriesLogicalPlanner`                 | M3QL                                                                                                                          |
| **OpChain Converter**    | `OpChainConverter`                         | Default                                                                                                                       |

***

### Input Format

Input format plugins read data from files or streams during data ingestion. Batch ingestion uses `RecordReader` implementations, while real-time ingestion uses `StreamMessageDecoder` implementations.

{% content-ref url="/pages/-M8oyQalmSX4AfVP-_Fq" %}
[Supported Data Formats](/build-with-pinot/ingestion/formats-filesystems/pinot-input-formats.md)
{% endcontent-ref %}
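
As an illustration, a batch ingestion job selects a `RecordReader` implementation through the `recordReaderSpec` section of its job spec. The class names below are those of the built-in CSV plugin; treat the exact values as an example to adapt to your format and release, not a canonical reference:

```yaml
# Fragment of a batch ingestion job spec: the input-format plugin is
# chosen by naming its RecordReader (and optional reader-config) classes.
recordReaderSpec:
  dataFormat: 'csv'
  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
```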

### Filesystem

Filesystem plugins provide a storage abstraction layer so that Pinot segments can be stored on and fetched from different storage backends.

{% content-ref url="/pages/-M8oyo30JfLVfInxdnwH" %}
[File Systems](/build-with-pinot/ingestion/formats-filesystems/file-systems.md)
{% endcontent-ref %}
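
For example, enabling the S3 filesystem plugin on a controller is a matter of configuration: the property names follow the pattern `pinot.<component>.storage.factory.class.<scheme>`. The snippet below is a sketch based on the S3 plugin's class name; verify the keys against the File Systems page for your release:

```properties
# Register the S3 PinotFS implementation for the "s3://" scheme.
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2
# Allow segment fetching over the s3 scheme as well.
pinot.controller.segment.fetcher.protocols=file,http,s3
```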

### Stream Ingestion

Stream ingestion plugins allow Pinot to consume data from real-time streaming platforms.

{% content-ref url="/pages/7nZ8MNkf7il1PpzpyxnR" %}
[Stream Ingestion Guide](/build-with-pinot/ingestion/stream-ingestion/stream-ingestion.md)
{% endcontent-ref %}
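
In a real-time table config, the consumer plugin is selected via `stream.<type>.consumer.factory.class.name` inside `streamConfigs`. A sketch for Kafka follows; the factory class name shown is the historical Kafka consumer factory, so verify it against your Pinot version, since the Kafka plugin module has moved across releases:

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.topic.name": "my-topic",
  "stream.kafka.broker.list": "localhost:9092",
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
  "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
}
```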

### Batch Ingestion

Batch ingestion plugins run data ingestion jobs on different execution frameworks.

{% content-ref url="/pages/yTkz28tBnlQKDSrYYwjt" %}
[Batch Ingestion Guide](/build-with-pinot/ingestion/batch-ingestion/batch-ingestion.md)
{% endcontent-ref %}
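
The execution framework for a batch job is chosen in the `executionFrameworkSpec` of the ingestion job spec. A sketch for the standalone runner (class names follow the built-in plugin; verify against your release):

```yaml
# Run segment generation and push on the local machine,
# without Hadoop or Spark.
executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
```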

### Metrics

Metrics plugins control which metrics library Pinot uses to collect and expose internal metrics via JMX. Pinot ships with Dropwizard (default), Yammer, and a Compound implementation that can fan out to multiple registries simultaneously.

{% content-ref url="/pages/pRU5obzJvcggYu1v83Og" %}
[Metrics Plugin](/develop-and-contribute/plugin-architecture/write-custom-plugins/metrics-plugin.md)
{% endcontent-ref %}
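
The metrics factory is selected per component through a `metrics.factory.className` property. A sketch for switching a broker from the default Dropwizard factory to the Yammer implementation (property name and class per the metrics plugin docs; verify for your version):

```properties
# Use the Yammer metrics library instead of the default Dropwizard factory.
pinot.broker.metrics.factory.className=org.apache.pinot.plugin.metrics.yammer.YammerMetricsFactory
```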

### Segment Writer

The Segment Writer plugin provides an API for programmatically collecting `GenericRow` records and building Pinot segments without going through a full batch ingestion job. The built-in file-based implementation buffers rows as Avro records on local disk.

{% content-ref url="/pages/8UT3DZSMuMLCk5ljHwLv" %}
[Segment Writer Plugin](/develop-and-contribute/plugin-architecture/write-custom-plugins/segment-writer-plugin.md)
{% endcontent-ref %}

### Segment Uploader

The Segment Uploader plugin handles uploading completed segment tar files to the Pinot cluster. The default implementation supports all push modes configured via `batchConfigMaps` in the table config.

{% content-ref url="/pages/jxfM2hMiF443YjKNY7p9" %}
[Segment Uploader Plugin](/develop-and-contribute/plugin-architecture/write-custom-plugins/segment-uploader-plugin.md)
{% endcontent-ref %}

### Minion Tasks

Minion task plugins define background processing tasks that run on Pinot Minion nodes. Built-in tasks include segment merge/rollup, purge, real-time to offline conversion, upsert compaction, and more.

{% content-ref url="/pages/-M1Swkf2kSXi8fRpwfqz" %}
[Minion](/architecture-and-concepts/components/cluster/minion.md)
{% endcontent-ref %}
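
A minion task is enabled per table by adding its task type to the `task` section of the table config; the controller's task generator then schedules subtasks onto minion nodes. A sketch for the merge/rollup task (the specific config keys shown are illustrative; see the minion task docs for the full set):

```json
"task": {
  "taskTypeConfigsMap": {
    "MergeRollupTask": {
      "1day.mergeType": "concat",
      "1day.bucketTimePeriod": "1d",
      "1day.bufferTimePeriod": "1d"
    }
  }
}
```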

### Environment

Environment plugins allow Pinot to integrate with cloud-specific features and configurations. The Azure environment plugin provides Azure-specific functionality.

### Time Series Language

Time series language plugins allow Pinot to support custom time series query languages like PromQL or M3QL.

{% content-ref url="/pages/BSdITD6CNryOBlCk7Y7D" %}
[Time Series Language Plugin](/develop-and-contribute/plugin-architecture/write-custom-plugins/time-series-language-plugin.md)
{% endcontent-ref %}

### OpChain Converter

OpChain Converter plugins provide custom implementations for converting logical query plans into executable OpChain objects in the multi-stage query engine. This enables alternative execution backends and plan-to-execution strategies.

{% content-ref url="/pages/XnMe8tkCarnWJVq3nCZX" %}
[Opchain Converter Plugin](/develop-and-contribute/plugin-architecture/write-custom-plugins/opchain-converter-plugin.md)
{% endcontent-ref %}

***

## Developing Plugins

Plugins can be developed freely, but they must follow a few conventions. In particular, a plugin has to implement the interfaces from [pinot-spi](https://github.com/apache/pinot/tree/master/pinot-spi/src/main/java/org/apache/pinot/spi).

Custom segment or index extensions that depend on `pinot-segment-spi` are a separate, more upgrade-sensitive path than the stable plugin families listed above. Revalidate these extensions on every Pinot upgrade. For example, Pinot 1.6.0 adds two required `IndexType` methods for custom index implementations: `requiresDictionary(FieldSpec, C)` and `shouldInvalidateOnDictionaryChange(FieldSpec, C)`. See the [upgrade notes](/operate-pinot/upgrades/upgrade-notes.md) before upgrading custom segment/index extensions.

{% content-ref url="/pages/-ME3TJrrDVeegp12y1v3" %}
[Write Custom Plugins](/develop-and-contribute/plugin-architecture/write-custom-plugins.md)
{% endcontent-ref %}
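
A custom plugin typically compiles against `pinot-spi` only, since the SPI classes are supplied by the Pinot runtime; the built plugin jar is then dropped into Pinot's plugins directory. A Maven sketch (group and artifact ids per the Apache Pinot build; pin the version to match your cluster):

```xml
<!-- Compile against the Pinot SPI; the runtime provides these classes,
     so the dependency is marked "provided" and not bundled in the jar. -->
<dependency>
  <groupId>org.apache.pinot</groupId>
  <artifactId>pinot-spi</artifactId>
  <version>1.3.0</version>
  <scope>provided</scope>
</dependency>
```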

