When Pinot segment files are created in external systems (Hadoop, Spark, etc.), there are several ways to push that data to the Pinot Controller and Server:
Push segments to a shared NFS and let Pinot pull the segment files from that NFS location.
Push segments to a web server and let Pinot pull the segment files from the web server via an http/https link.
Push segments to HDFS and let Pinot pull the segment files from HDFS via an hdfs location URI.
Push segments to another system and implement your own segment fetcher to pull data from it.
The first two options are supported out of the box with the Pinot package. As long as your remote jobs send the Pinot controller the corresponding URI to the files, it will pick up the files and allocate them to the proper Pinot servers and brokers. To enable Pinot support for HDFS, you will need to provide the Pinot Hadoop configuration and the proper Hadoop dependencies.
In your Pinot controller/server configuration, you will need to provide the following configs:
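The key below follows the HDFS segment fetcher configuration of older Pinot releases; verify the exact name against your version:

```
pinot.controller.segment.fetcher.hdfs.hadoop.conf.path=/path/to/hadoop/conf
```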
or
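```
# server-side equivalent of the controller key above
pinot.server.segment.fetcher.hdfs.hadoop.conf.path=/path/to/hadoop/conf
```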
This path should point to the local folder containing the core-site.xml and hdfs-site.xml files from your Hadoop installation.
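For a Kerberos-secured Hadoop installation (see the next paragraph), the corresponding keys take a principal and a keytab. The names below are an assumption based on older Pinot releases:

```
# key names are an assumption; verify against your Pinot version
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal>
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.keytab=<path to kerberos keytab>
```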
or
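```
# server-side equivalents (same assumption about key names)
pinot.server.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal>
pinot.server.segment.fetcher.hdfs.hadoop.kerberos.keytab=<path to kerberos keytab>
```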
These two configs should be set to the corresponding Kerberos configuration if your Hadoop installation is secured with Kerberos. Please check the Hadoop Kerberos guide on how to generate Kerberos security identification.
You will also need to provide the proper Hadoop dependency jars from your Hadoop installation to your Pinot startup scripts.
To push HDFS segment files to the Pinot controller, you just need to ensure you have the proper Hadoop configuration, as mentioned in the previous section. Your remote segment creation/push job can then send the HDFS path of your newly created segment files to the Pinot Controller and let it download the files.
For example, the following curl request notifies the Controller to download the segment files for the proper table:
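A sketch, assuming a controller at localhost:9000 and the URI-based upload headers used by older Pinot releases (header names may differ in your version):

```
curl -X POST -H "UPLOAD_TYPE:URI" \
  -H "DOWNLOAD_URI:hdfs://nameservice1/hadoop/path/to/segment/file.gz" \
  -H "content-type:application/json" -d '' \
  localhost:9000/segments
```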
You can also implement your own segment fetchers for other file systems and load them into the Pinot system with an external jar. All you need to do is implement a class that implements the SegmentFetcher interface, and provide the config to the Pinot Controller and Server as follows:
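For example (the key pattern mirrors the fetcher configs above; com.yourcompany.YourSegmentFetcher is a hypothetical implementation class):

```
pinot.controller.segment.fetcher.<protocol>.class=com.yourcompany.YourSegmentFetcher
```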
or
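```
pinot.server.segment.fetcher.<protocol>.class=com.yourcompany.YourSegmentFetcher
```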
You can also provide other configs to your fetcher under the config root pinot.server.segment.fetcher.<protocol>.
Pinot has many built-in aggregation functions, such as MIN, MAX, SUM, and AVG. See the PQL page for the list of aggregation functions.
Adding a new AggregationFunction requires two things:
Implement the AggregationFunction interface and make it available on the classpath
Register the function in AggregationFunctionFactory. As of today, this requires code change in Pinot but we plan to add the ability to plugin Functions without having to change Pinot code.
To get an overall idea, see the MAX Aggregation Function implementation. All other implementations can be found here.
Let's look at the key methods to implement in AggregationFunction.
Before getting into the implementation, it's important to understand how Aggregation works in Pinot.
This is an advanced topic and assumes you know Pinot concepts. All the data in Pinot is stored in segments across multiple nodes. The query plan at a high level comprises three phases:
1. Map phase
This phase works on the individual segments in Pinot.
Initialization: Depending on the query type, one of the following methods is invoked to set up the result holder. While having different methods and return types adds complexity, it helps performance.
AGGREGATION : createAggregationResultHolder
This must return an instance of type AggregationResultHolder. You can either use the DoubleAggregationResultHolder or ObjectAggregationResultHolder
GROUP BY: createGroupByResultHolder
This method must return an instance of type GroupByResultHolder. Depending on the type of result object, you might be able to use one of the existing implementations.
Callback: For every record that matches the filter condition in the query, one of the following methods is invoked, depending on the query type (aggregation vs. group by) and the column type (single-value vs. multi-value). Note that we invoke this method for a batch of records instead of every row for performance reasons; this allows the JVM to vectorize parts of the execution where possible.
AGGREGATION: aggregate(int length, AggregationResultHolder aggregationResultHolder, Map<String,BlockValSet> blockValSetMap)
length: This represents the length of the block. Typically < 10k.
aggregationResultHolder: This is the object returned from createAggregationResultHolder.
blockValSetMap: Map of BlockValSets, depending on the arguments to the aggregation function.
Group By Single Value: aggregateGroupBySV(int length, int[] groupKeyArray, GroupByResultHolder groupByResultHolder, Map<String, BlockValSet> blockValSetMap)
length: This represents the length of the block. Typically < 10k.
groupKeyArray: Pinot internally maintains a value-to-int mapping, and groupKeyArray holds each record's key in that internal mapping. These values together form a unique key.
groupByResultHolder: This is the object returned from createGroupByResultHolder.
blockValSetMap: Map of BlockValSets, depending on the arguments to the aggregation function.
Group By Multi Value: aggregateGroupByMV(int length, int[][] groupKeysArray, GroupByResultHolder groupByResultHolder, Map<String, BlockValSet> blockValSetMap)
length: This represents the length of the block. Typically < 10k.
groupKeysArray: Same as groupKeyArray above, except that for a multi-value column each record can map to multiple group keys, so there is an array of keys per record.
groupByResultHolder: This is the object returned from createGroupByResultHolder.
blockValSetMap: Map of BlockValSets, depending on the arguments to the aggregation function.
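To make the map phase concrete, here is a minimal sketch of the AGGREGATION path for a MAX-like function. It uses only the methods described above; the holder and BlockValSet accessor names are assumptions based on common Pinot versions and may differ in yours.

```java
import java.util.Map;

// Pinot types; package paths vary across Pinot versions.
import org.apache.pinot.core.common.BlockValSet;
import org.apache.pinot.core.query.aggregation.AggregationResultHolder;
import org.apache.pinot.core.query.aggregation.DoubleAggregationResultHolder;

public class MaxLikeAggregationFunction {

  public AggregationResultHolder createAggregationResultHolder() {
    // Holds a single primitive double, seeded with the identity value for MAX.
    return new DoubleAggregationResultHolder(Double.NEGATIVE_INFINITY);
  }

  public void aggregate(int length, AggregationResultHolder aggregationResultHolder,
      Map<String, BlockValSet> blockValSetMap) {
    // One BlockValSet per function argument; a MAX-like function takes one column.
    double[] values = blockValSetMap.values().iterator().next().getDoubleValuesSV();
    double max = aggregationResultHolder.getDoubleResult();
    // Process the whole block in one tight loop so the JVM can optimize it.
    for (int i = 0; i < length; i++) {
      if (values[i] > max) {
        max = values[i];
      }
    }
    aggregationResultHolder.setValue(max);
  }
}
```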
2. Combine phase
In this phase, the results from all segments within a single Pinot server are combined into an IntermediateResult. The type of IntermediateResult is based on the generic type defined in the AggregationFunction implementation.
3. Reduce phase
There are two steps in the Reduce Phase
Merge all the IntermediateResults from the various servers using the merge function.
Extract the final results by invoking the extractFinalResult method. In most cases, FinalResult is the same type as IntermediateResult. AverageAggregationFunction is an example where the IntermediateResult (AvgPair) differs from the FinalResult (Double).
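For illustration, a sketch of these two steps for an AVG-like function, where the intermediate AvgPair carries a running sum and count (the AvgPair method names follow Pinot's class but are assumptions for your version):

```java
// Methods on the AggregationFunction implementation; AvgPair is Pinot's
// (sum, count) holder object.

// Combine: intermediate results from two servers are merged pairwise.
public AvgPair merge(AvgPair intermediateResult1, AvgPair intermediateResult2) {
  intermediateResult1.apply(intermediateResult2); // accumulate sum and count
  return intermediateResult1;
}

// Reduce: the merged intermediate result becomes the final Double.
public Double extractFinalResult(AvgPair intermediateResult) {
  long count = intermediateResult.getCount();
  return count == 0 ? 0.0 : intermediateResult.getSum() / count;
}
```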
Before you begin to contribute, make sure you have reviewed Dev Environment Setup and Code Modules and Organization sections and that you have created your own fork of the pinot source code.
The Apache Pinot community encourages members to contribute to the overall growth and success of the project. All contributors are expected to follow the following guidelines when proposing an enhancement (aka PEP - Pinot Enhancement Proposal):
All enhancements, regardless of scope/size, must start with a Github issue. The issue should clearly state the following information:
What needs to be done?
Why the feature is needed (e.g. describing the use case).
It may also include an initial idea/proposal on how to achieve it.
The Github issue must be tagged with the label PEP-Request.
Once the Github issue is filed:
The PMC decides whether a detailed proposal/design doc is required, or whether the issue can simply be followed by a PR.
There should be enough time (e.g. 5 business days) given for the PMC to review the issue/proposal before moving to implementation.
A single +1 and zero -1 votes from the PMC are sufficient to proceed with the implementation.
If during the course of the implementation it is found that the feature is much more complex than initially anticipated, the PMC may request a detailed design doc.
The PMC would use the following guideline when deciding whether a PEP requires an explicit proposal/design doc, or can simply be followed by a PR that includes a link to the Github issue.
Any new major feature, subsystem, or piece of functionality.
Any change that may potentially create backward incompatibility:
Any change that impacts the public interfaces of the project.
Any changes to SPI
Adding new API resources, or changing broker-server-controller communications.
Any change that can impact performance.
If the request gets at least one +1 and no -1 from the PMC to go directly to the PR stage, the requestor can then submit the PR along with a link to the Github issue.
If the request requires a proposal, then the requestor is expected to provide a proposal design doc before submitting a PR for review. The design doc should have public read and comment access. (If your organization does not allow public access, please look to other freely available platforms to host your document). The design doc must include the following:
Motivation: Describe the problem to be solved, including details on why it is needed (e.g., the use case).
Proposed Change: Describe the new thing that needs to be done. This may be fairly extensive and have large subsections of its own. Or it may be a few sentences, depending on the scope of the change. Also, describe “How” with details and possible POC.
New or Changed Public Interfaces: impact to any of the "compatibility commitments" described above. We want to call these out in particular so everyone thinks about them.
Deployment, Migration Plan and Compatibility: If this feature requires additional support for a no-downtime upgrade, describe how that will work.
Rejected Alternatives: What are the other alternatives you considered and why are they worse? The goal of this section is to help people understand why this is the best solution now, and also to prevent churn in the future when old alternatives are reconsidered.
PMC Review Status: The proposal/design doc may also contain a review status table at the beginning of the doc that includes the reviewer names along with their review status.
The proposal/design doc should be in a Google doc that has comment access enabled by default for any community member (it should not require asking for permissions). The only exceptions are small features where the initial proposal in the issue is generally accepted. Once the proposal/design doc is approved (all questions/comments resolved), it must be transferred into the common Google Drive where all Pinot proposal/design docs are submitted.
If there are meetings/discussions offline with a subset of members, the meeting notes should be captured and added to the doc.
General Guidelines
Smaller PRs that are easier to review
Pure refactoring PRs must be separated from PRs that change functionality. Refactoring PRs should state so, as an aid for the reviewers. For example, package moves may show up as huge diffs in the PR.
If your change is relatively minor, you can skip this step. If you are adding a new major feature, we suggest that you add a design document and solicit comments from the community before submitting any code.
Here is a list of current design documents.
Create a Pinot issue here for the change you would like to make. Provide information on why the change is needed and how you plan to address it. Use the conversations on the issue as a way to validate assumptions and the right way to proceed. Be sure to review sections on Backward and Forward compatibility changes and External libraries.
If you have a design document, please refer to the design documents in your Issue. You may even want to create multiple issues depending on the extent of your change.
Once you are clear about what you want to do, proceed with the next steps listed below.
Make the necessary changes. If the changes you plan to make are too big, make sure you break them down into smaller tasks.
Follow the recommendations/best-practices noted here when you are making changes.
Please ensure your code is adequately documented. Some things to consider for documentation:
Always include class-level javadocs. At the top class level, we are looking for information about what functionality is provided by the class, what state is maintained by the class, whether there are concurrency/thread-safety concerns, and any exceptional behavior that the class might exhibit.
Document public methods and their parameters.
Ensure there is adequate logging for positive paths as well as exceptional paths. As a corollary to this, ensure logs are not noisy.
Do not use System.out.println to log messages; use the slf4j loggers.
Use logging levels correctly: set the level to debug for verbose logs that are useful only for debugging.
Do not log stack traces via the exception's printStackTrace method.
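A short sketch of logging that follows these guidelines (the class and messages are illustrative):

```java
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SegmentLoader {
  private static final Logger LOGGER = LoggerFactory.getLogger(SegmentLoader.class);

  public void load(String segmentName) {
    // Verbose detail goes to debug, not info.
    LOGGER.debug("Loading segment: {}", segmentName);
    try {
      readSegmentMetadata(segmentName);
    } catch (IOException e) {
      // Pass the exception as the last argument instead of calling printStackTrace().
      LOGGER.error("Failed to load segment: {}", segmentName, e);
    }
  }

  private void readSegmentMetadata(String segmentName) throws IOException {
    // ... read and validate segment metadata ...
  }
}
```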
Where possible, throw specific exceptions, preferably checked exceptions, so the callers can easily determine what the erroneous conditions that need to be handled are.
Avoid catching broad exceptions (i.e., catch (Exception e) blocks), except when this is in the run() method of a thread/runnable.
Current Pinot code does not strictly adhere to this, but we would like to change this over time and adopt best practices around exception handling.
If you are making any changes to state stored, either in Zookeeper or in segments, make sure you consider both backward and forward compatibility issues.
For backward compatibility, consider cases where one component is using the new version and another is still on the old version. E.g., when the request format between broker and server is updated, consider resulting behaviors when a new broker is talking to an older server. Will it break?
For forward compatibility, consider rollback cases. E.g., consider what happens when state persisted by new code is handled by old code. Does the old code skip over new fields?
Be cautious about pulling in external dependencies. You will need to consider multiple things when faced with a need to pull in a new library.
What capability is the addition of the library providing you with? Can existing libraries provide this functionality (may be with a little bit of effort)?
Is the external library maintained by an active community of contributors?
What are the licensing terms for the library? For more information about handling licenses, see License Headers for newly added files.
Are you adding the library to Foundational modules? This will affect the rest of the Pinot code base. If the new library pulls in a lot of transitive dependencies, then we might encounter unexpected issues with multiple classes in the classpath. These issues are hard to catch with tests, as the order of loading the libraries at runtime matters. If you absolutely need the support, consider adding it via extension modules; see Extension modules.
Automated tests are always recommended for contributions. Make sure you write tests so that:
You verify the correctness of your contribution. This serves as proof to you as well as the reviewers.
You future proof your contributions against code refactors or other changes. While this may not always be possible (see Testing Guidelines), it's a good goal to aim for.
Identify a list of tests for the changes you have made. Depending on the scope of changes, you may need one or more of the following tests:
Unit Tests
Make sure your code has the necessary class- or method-level unit tests. It is important to write both positive-case and negative-case tests. Document your tests well and add meaningful assertions in the tests; when the assertions fail, ensure that the right messages are logged with information that allows others to debug.
Integration Tests
Add integration tests to cover end-to-end paths without relying on mocking (see note below). You MUST add integration tests for REST APIs, and they must include tests that cover the different response codes (200 OK, and the 4xx or 5xx errors that are explicit contracts of the API).
Mocking
Use Mockito to mock classes to control specific behaviors - e.g., simulate various error conditions.
Note
DO NOT use advanced mock libraries such as PowerMock. They make bytecode-level changes to allow tests of static/private members, but this typically causes other tools like jacoco to fail. They also promote incorrect implementation choices that make it harder to test additional changes. When faced with a choice between PowerMock or other advanced mocking options, you might either need to refactor the code to work better with mocking, or you actually need to write an integration test instead of a unit test.
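Plain Mockito is usually enough. A hedged sketch of simulating an error condition (PinotFS is used illustratively as the mocked dependency; substitute your class under test's real collaborator):

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.IOException;
import java.net.URI;
import org.apache.pinot.spi.filesystem.PinotFS; // package path may differ by version
import org.testng.annotations.Test;

public class DownloadErrorHandlingTest {

  @Test
  public void testDownloadHandlesFilesystemError() throws Exception {
    // Simulate an I/O failure from the filesystem dependency.
    PinotFS fs = mock(PinotFS.class);
    when(fs.exists(any(URI.class))).thenThrow(new IOException("simulated failure"));
    // Inject the mock into the class under test and assert the error path behaves.
  }
}
```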
Validate assumptions in tests
Make sure that adequate asserts are added in the tests to verify that the tests are passing for the right reasons.
Write reliable tests
Make sure you are writing tests that are reliable. If the tests depend on asynchronous events being fired, do not add sleep calls to your tests. Where possible, use appropriate mocking or condition-based triggers.
All source code files should have license headers. To automatically add the header for any new file you plan to check in, run the following in the pinot top-level folder:
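The command (assuming the maven license plugin that the Pinot build has historically wired in):

```
mvn license:format
```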
Note
If you check in third-party code or files, please make sure you review the Apache guidelines:
Once you determine that the code you are pulling in adheres to the guidelines above, go ahead and pull the changes in. Do not add license headers for them. Follow these instructions to ensure we are compliant with the Apache licensing process:
Under pinot/licenses, add a LICENSE-<newlib> file that has the license terms of the included library.
Update the pinot/LICENSE file to indicate the newly added library file paths under the corresponding supported licenses.
Update the exclusion rules for the license and rat maven plugins in the parent pom: pinot/pom.xml.
If attention is not paid to the licensing terms early on, they will be caught much later in the process, when we prepare to make a new release. Updating code at that point to work with the right libraries might require bigger refactoring changes and delay the release process.
Verifying code-style
Run the following command to verify the code style before posting a PR:
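A sketch of the command, assuming the checkstyle plugin configured in the Pinot parent pom:

```
mvn checkstyle:check
```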
Run tests
Before you create a review request for the changes, make sure you have run the corresponding unit tests for your changes. You can run individual tests via the IDE or via the maven command line. Finally, run all tests locally by running mvn clean install -Pbin-dist.
For changes that are related to performance issues or race conditions, it is hard to write reliable tests, so we recommend running manual stress tests to validate the changes. You MUST note the manual tests done in the PR description.
Push changes and create a PR for review
Commit your changes with a meaningful commit message.
Once you receive comments on github on your changes, be sure to respond to them on github and address the concerns. If any discussions happen offline for the changes in question, make sure to capture the outcome of the discussion, so others can follow along as well.
It is possible that while your change is being reviewed, other changes were made to the master branch. Be sure to rebase your change on top of the new changes thus:
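One common way to do this, assuming your clone has an upstream remote pointing at the apache/pinot repository:

```
git fetch upstream
git rebase upstream/master
# resolve any conflicts, then update your fork's branch
git push --force-with-lease origin <your-branch>
```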
When you have addressed all comments and have an approved PR, one of the committers can merge your PR.
After your change is merged, check to see if any documentation needs to be updated. If so, create a PR for documentation.
Usually, for new features, functionality, or API changes, a documentation update is required to keep users up to date and to keep track of our development.
Please follow this link to Update Document accordingly
Pinot documentation is powered by Gitbook, and a bi-directional Github integration is set up to back up all the changes.
The git repo is here: https://github.com/pinot-contrib/pinot-docs
For Pinot contributors, there are two main ways to update the documentation.
This follows the traditional way of updating documentation.
You can check out the pinot-docs repo, modify the documentation accordingly, and then submit a pull request for review.
Once the PR is merged, the changes will be automatically applied to the corresponding Gitbook pages.
Please note that all Gitbook documentation follows Markdown Syntax.
Once granted edit permission, contributors can edit any page on Gitbook and then save and merge the changes by themselves. Here is one example commit on the Github repo reflecting updates coming from Gitbook: Adding Update Document Page Commit.
Usually we grant edit permission to committers and active contributors.
Please contact an admin (email dev@pinot.apache.org with the content you want to add) to ask for edit permission for the Pinot Gitbook.
Once granted the permission, you can work directly in the Pinot Gitbook UI to modify the documentation and merge changes.
TODO: Deprecated
Before proceeding to contributing changes to Pinot, review the contents of this section.
Pinot depends on a number of external projects, the most notable ones are:
Apache Zookeeper
Apache Helix
Apache Kafka
Apache Thrift
Netty
Google Guava
Yammer
Helix is used for cluster management, and Pinot code is tightly integrated with the Helix and Zookeeper interfaces.
Kafka is the default realtime stream provider, but can be replaced with others. See customizations section for more info.
Thrift is used for message exchange between broker and server components, with Netty providing the server functionality for processing messages in a non-blocking fashion.
Guava is used for a number of auxiliary components such as caches and rate limiters. Yammer metrics is used to register and expose metrics from Pinot components.
In addition, Pinot relies on several key external libraries for some of its core functionality:
Roaring Bitmaps: Pinot's inverted indices are built using the RoaringBitmap library.
t-Digest: Pinot's digest-based percentile calculations are based on the T-Digest library.
Pinot is a multi-module project, with each module providing specific functionality that helps us build services from a combination of modules. This helps keep clean interface contracts between different modules, as well as reduce the overall executable size for individually deployable components.
Each module has a src/main/java folder where the code resides and a src/test/java folder where the unit tests corresponding to the module's code reside.
The following figure provides a high-level overview of the foundational Pinot modules.
pinot-common provides classes common to Pinot components. Some key classes you will find here are:
config: Definitions for various elements of Pinot's table config.
metrics: Definitions for base metrics provided by the Controller, Broker and Server.
metadata: Definitions of metadata stored in Zookeeper.
pql.parsers: Code to compile PQL strings into the corresponding Abstract Syntax Trees (ASTs).
request: Autogenerated thrift classes representing various parts of PQL requests.
response: Definitions of the response format returned by the Broker.
filesystem: Provides abstractions for working with segments on local or remote filesystems. This module allows users to plug in filesystems specific to their use case. Extensions to the base PinotFS should ideally be housed in their own modules so as not to pull in unnecessary dependencies for all users.
The pinot-transport module provides classes required to handle scatter-gather on the Pinot Broker, and netty wrapper classes used by the Server to handle connections from the Broker.
The pinot-core module provides the core functionality of Pinot, specifically for handling segments, various index structures, query execution (filters, transformations, aggregations, etc.), and support for realtime segments.
pinot-server provides server-specific functionality, including server startup and the REST APIs exposed by the server.
pinot-controller houses all the controller-specific functionality, including many cluster administration APIs, segment upload (for both offline and realtime), segment assignment, retention strategies, etc.
pinot-broker provides broker functionality that includes wiring the broker startup sequence, building broker routing tables, and PQL request handling.
pinot-minion provides functionality for running auxiliary/periodic tasks on a Pinot cluster, such as purging records for compliance with regulations like GDPR.
pinot-hadoop provides classes for segment generation jobs using Hadoop infrastructure.
In addition to the core modules described above, Pinot code provides the following modules:
pinot-tools: This module is a collection of many tools useful for setting up a Pinot cluster and creating/updating segments. It also houses the Pinot quick start guide code.
pinot-perf: This module has a collection of benchmark test code used to evaluate design options.
pinot-client-api: This module houses the Java client API. See Executing queries via Java Client API for more info.
pinot-integration-tests: This module holds integration tests that test functionality across multiple classes or components. These tests typically do not rely on mocking and provide more end-to-end coverage for the code.
pinot-hadoop-filesystem and pinot-azure-filesystem are modules added to support extensions to the Pinot filesystem. The functionality is broken into modules of their own to avoid polluting the common modules with additional large libraries. These libraries bring in transitive dependencies of their own that can cause classpath conflicts at runtime. We would like to avoid this for the common usage of Pinot as much as possible.
To contribute to Pinot, please follow the instructions below.
Pinot uses git for source code management. If you are new to Git, it will be good to review the basics of Git and common tasks like managing branches and rebasing.
To limit the number of branches created on the Apache Pinot repository, we recommend that you create a fork by clicking on the fork button on this page. Read more about the fork workflow here.
Pinot is a Maven project and familiarity with Maven will help you work with Pinot code. If you are new to Maven, you can read about Maven here and get a quick overview here.
Run the following maven command to set up the project:
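The command that the Pinot docs have historically suggested (flags may vary by version):

```
mvn install package -DskipTests -Pbin-dist
```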
Import the project into your favorite IDE. Set up the stylesheet according to your IDE. We have provided instructions for intellij and eclipse. If you are using other IDEs, please ensure you use a stylesheet based on this.
To import the Pinot stylesheet, launch intellij and navigate to Preferences (on Mac) or Settings on Linux.
Navigate to Editor -> Code Style -> Java.
Select Import Scheme -> IntelliJ IDEA code style XML.
Choose codestyle-intellij.xml from the pinot/config folder of your workspace. Click Apply.
To import the Pinot stylesheet, launch eclipse and navigate to Preferences (on Mac) or Settings on Linux.
Navigate to Java->Code Style->Formatter
Choose codestyle-eclipse.xml from the pinot/config folder of your workspace. Click Apply.
Once the IDE is set up, you can run Batch QuickStart for batch mode or Realtime QuickStart for realtime mode.
Batch Quickstart
start all Pinot components (ZK, Controller, Server, Broker) in the same JVM
create the Baseball Stats table
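One way to launch it from a built distribution (a sketch; the QuickStart command is part of pinot-admin, and the -type flag may vary by Pinot version):

```
bin/pinot-admin.sh QuickStart -type batch
```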
Go to localhost:9000 in your browser and play with the query console.
Realtime Quickstart
start all Pinot components (ZK, Controller, Server, Broker) in the same JVM
start Kafka in the same JVM
create the MeetUpRSVP table
live-stream meetup events into Kafka
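Similarly, a sketch for launching the realtime flavor (same assumption about the type flag):

```
bin/pinot-admin.sh QuickStart -type stream
```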
Go to localhost:9000 in your browser and play with the meetup RSVP table.