Many datasets are time series in nature, tracking the state changes of entities over time. The recorded data points may be sparse, or events may be missing altogether due to network and device issues in an IoT environment. Analytics applications that track the state changes of these entities, however, may query for values at a lower granularity than the metric interval.
Here is a sample dataset tracking the occupancy status of parking lots in a parking space:
lotId | event_time | is_occupied |
---|---|---|
We want to find the total number of parking lots that are occupied over a period of time, which is a common use case for a company that manages parking spaces.
Let us take a 30-minute time bucket as an example:
timeBucket/lotId | P1 | P2 | P3 |
---|---|---|---|
If you look at the above table, you will see a lot of missing data for parking lots inside the time buckets. In order to calculate the number of occupied parking lots per time bucket, we need to gapfill the missing data.
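To make the gap problem concrete, here is a small Python sketch. The sample events are hypothetical (they are not rows from the table above), but the shape matches: one status event per parking lot at irregular times, bucketed into 30-minute windows, with several (time bucket, lot) combinations having no event at all.

```python
from datetime import datetime, timedelta

# Hypothetical raw events: (lotId, event_time, is_occupied). Illustration only.
raw_events = [
    ("P1", datetime(2021, 10, 1, 9, 1), True),
    ("P2", datetime(2021, 10, 1, 9, 17), False),
    ("P1", datetime(2021, 10, 1, 10, 33), False),
    ("P3", datetime(2021, 10, 1, 11, 5), True),
]

BUCKET = timedelta(minutes=30)
start, end = datetime(2021, 10, 1, 9, 0), datetime(2021, 10, 1, 12, 0)

lots = sorted({lot for lot, _, _ in raw_events})
buckets = [start + i * BUCKET for i in range(int((end - start) / BUCKET))]

# Which (time bucket, lot) pairs actually have at least one event?
seen = {(start + ((ts - start) // BUCKET) * BUCKET, lot) for lot, ts, _ in raw_events}

for bucket in buckets:
    missing = [lot for lot in lots if (bucket, lot) not in seen]
    print(bucket.strftime("%H:%M"), "missing data for lots:", missing or "none")
```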
There are two ways of gapfilling the data: FILL_PREVIOUS_VALUE and FILL_DEFAULT_VALUE.
FILL_PREVIOUS_VALUE means the missing data will be filled with the previous value for the specific entity (in this case, the parking lot) if a previous value exists. Otherwise, it will be filled with the default value.
FILL_DEFAULT_VALUE means the missing data will be filled with the default value for the column type. For numeric columns, the default value is 0. For BOOLEAN columns, the default value is false. For TIMESTAMP, it is January 1, 1970, 00:00:00 GMT. For STRING, JSON, and BYTES, it is the empty string. For array columns, it is the empty array.
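Here is a minimal Python sketch of the two strategies, assuming the per-type defaults listed above; the helper and its names are illustrative, not part of any engine API.

```python
# Per-type default values used by FILL_DEFAULT_VALUE, as listed above.
DEFAULTS = {
    "NUMERIC": 0,
    "BOOLEAN": False,
    "TIMESTAMP": 0,   # epoch millis, i.e. January 1, 1970, 00:00:00 GMT
    "STRING": "",
    "JSON": "",
    "BYTES": "",
    "ARRAY": [],
}

def fill(previous_value, column_type, strategy):
    """Pick the value for a missing data point (illustrative helper, not an API).

    FILL_PREVIOUS_VALUE: reuse the entity's previous value if one exists,
    otherwise fall back to the type default.
    FILL_DEFAULT_VALUE: always use the type default.
    """
    if strategy == "FILL_PREVIOUS_VALUE" and previous_value is not None:
        return previous_value
    return DEFAULTS[column_type]

# A lot with no earlier event falls back to the default (false = unoccupied).
print(fill(None, "BOOLEAN", "FILL_PREVIOUS_VALUE"))  # False
# A lot whose last known state was occupied stays occupied.
print(fill(True, "BOOLEAN", "FILL_PREVIOUS_VALUE"))  # True
print(fill(True, "BOOLEAN", "FILL_DEFAULT_VALUE"))   # False
```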
We will use a nested query to calculate the total number of occupied parking lots per time bucket.
The most nested SQL aggregates the raw event table into one value per time bucket and parking lot.
The second level gapfills the returned per-bucket data for every parking lot.
The outermost query aggregates the gapfilled data into the total number of occupied slots per time bucket.
One assumption we make here is that the raw data is sorted by timestamp; the gapfill and post-gapfill aggregation steps will not sort the data.
The above example shows the use case where all three steps happen (a sketch of the three steps follows the list):
1. The raw data is aggregated;
2. The aggregated data is gapfilled;
3. The gapfilled data is aggregated.
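Here is a compact Python sketch of the three steps, using the same hypothetical sample events and 30-minute buckets as before, with FILL_PREVIOUS_VALUE on the occupancy column. It illustrates the logic only and is not the engine's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical raw events (lotId, event_time, is_occupied), assumed sorted by event_time.
raw_events = [
    ("P1", datetime(2021, 10, 1, 9, 1), True),
    ("P2", datetime(2021, 10, 1, 9, 17), False),
    ("P1", datetime(2021, 10, 1, 10, 33), False),
    ("P3", datetime(2021, 10, 1, 11, 5), True),
]

BUCKET = timedelta(minutes=30)
start, end = datetime(2021, 10, 1, 9, 0), datetime(2021, 10, 1, 12, 0)
buckets = [start + i * BUCKET for i in range(int((end - start) / BUCKET))]
lots = sorted({lot for lot, _, _ in raw_events})

# Step 1: aggregate the raw data, keeping the last status per (time bucket, lot).
last_status = {}
for lot, ts, occupied in raw_events:
    bucket = start + ((ts - start) // BUCKET) * BUCKET
    last_status[(bucket, lot)] = occupied

# Step 2: gapfill the aggregated data with FILL_PREVIOUS_VALUE per lot,
# falling back to the BOOLEAN default (false) when there is no previous value.
filled = {}
for lot in lots:
    previous = None
    for bucket in buckets:
        value = last_status.get((bucket, lot))
        if value is None:
            value = previous if previous is not None else False
        filled[(bucket, lot)] = value
        previous = value

# Step 3: aggregate the gapfilled data into occupied lots per time bucket.
for bucket in buckets:
    total = sum(filled[(bucket, lot)] for lot in lots)
    print(bucket.strftime("%H:%M"), "->", total, "occupied lots")
```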
There are three more scenarios we can support.
In the first scenario, we only want to gapfill the missing data per half-hour time bucket: the raw data is first aggregated into per-bucket values for each parking lot, and those values are then gapfilled, without a post-gapfill aggregation.
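A condensed Python sketch of this aggregate-then-gapfill scenario, with the same hypothetical events as before; it stops after the fill, so the output is one gapfilled row per time bucket and parking lot rather than a per-bucket total.

```python
from datetime import datetime, timedelta

# Hypothetical raw events (lotId, event_time, is_occupied), sorted by event_time.
raw_events = [
    ("P1", datetime(2021, 10, 1, 9, 1), True),
    ("P2", datetime(2021, 10, 1, 9, 17), False),
    ("P1", datetime(2021, 10, 1, 10, 33), False),
    ("P3", datetime(2021, 10, 1, 11, 5), True),
]

BUCKET = timedelta(minutes=30)
start, end = datetime(2021, 10, 1, 9, 0), datetime(2021, 10, 1, 12, 0)
buckets = [start + i * BUCKET for i in range(int((end - start) / BUCKET))]

# Aggregate: last status per (time bucket, lot).
last_status = {}
for lot, ts, occupied in raw_events:
    bucket = start + ((ts - start) // BUCKET) * BUCKET
    last_status[(bucket, lot)] = occupied

# Gapfill with FILL_PREVIOUS_VALUE per lot; false (the BOOLEAN default) otherwise.
for lot in sorted({lot for lot, _, _ in raw_events}):
    previous = False
    for bucket in buckets:
        previous = last_status.get((bucket, lot), previous)
        print(bucket.strftime("%H:%M"), lot, previous)
```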
In the second scenario, the nested SQL converts the raw event table to the requested time granularity, and the outer SQL gapfills the returned data.
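A Python sketch of this scenario, under the assumption that it performs no aggregation at all, only the time conversion and the fill: for each parking lot, the value at each half-hour point is the last event at or before that point (FILL_PREVIOUS_VALUE), or false, the BOOLEAN default, if nothing precedes it.

```python
from bisect import bisect_right
from datetime import datetime, timedelta

# Hypothetical raw events (lotId, event_time, is_occupied), sorted by event_time.
raw_events = [
    ("P1", datetime(2021, 10, 1, 9, 1), True),
    ("P2", datetime(2021, 10, 1, 9, 17), False),
    ("P1", datetime(2021, 10, 1, 10, 33), False),
    ("P3", datetime(2021, 10, 1, 11, 5), True),
]

step = timedelta(minutes=30)
start, end = datetime(2021, 10, 1, 9, 0), datetime(2021, 10, 1, 12, 0)
grid = [start + i * step for i in range(int((end - start) / step))]

# Group the events per lot, keeping (event_time, is_occupied) pairs.
by_lot = {}
for lot, ts, occupied in raw_events:
    by_lot.setdefault(lot, []).append((ts, occupied))

# Gapfill only: the value at each grid point is the last event at or before it
# (FILL_PREVIOUS_VALUE), or false (the BOOLEAN default) if nothing precedes it.
for lot, events in sorted(by_lot.items()):
    times = [ts for ts, _ in events]
    for point in grid:
        idx = bisect_right(times, point) - 1
        value = events[idx][1] if idx >= 0 else False
        print(lot, point.strftime("%H:%M"), value)
```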
In the third scenario, the gapfill happens before the aggregation: the raw data is first transformed to the requested time granularity, the transformed data is then gapfilled, and the final aggregation produces the total number of occupied slots per time bucket.
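Finally, a Python sketch of this gapfill-then-aggregate scenario with the same hypothetical events: there is no pre-aggregation; each lot's status at a bucket boundary is taken from its last event at or before that boundary, and only then are the occupied lots counted per bucket. Dropping the final count gives back the previous scenario.

```python
from bisect import bisect_right
from datetime import datetime, timedelta

# Hypothetical raw events (lotId, event_time, is_occupied), sorted by event_time.
raw_events = [
    ("P1", datetime(2021, 10, 1, 9, 1), True),
    ("P2", datetime(2021, 10, 1, 9, 17), False),
    ("P1", datetime(2021, 10, 1, 10, 33), False),
    ("P3", datetime(2021, 10, 1, 11, 5), True),
]

step = timedelta(minutes=30)
start, end = datetime(2021, 10, 1, 9, 0), datetime(2021, 10, 1, 12, 0)
grid = [start + i * step for i in range(int((end - start) / step))]

by_lot = {}
for lot, ts, occupied in raw_events:
    by_lot.setdefault(lot, []).append((ts, occupied))

def status_at(events, point):
    """Gapfill: last known status at or before the grid point, false if none."""
    times = [ts for ts, _ in events]
    idx = bisect_right(times, point) - 1
    return events[idx][1] if idx >= 0 else False

# Aggregation after the gapfill: count the occupied lots at each grid point.
for point in grid:
    total = sum(status_at(events, point) for events in by_lot.values())
    print(point.strftime("%H:%M"), "->", total, "occupied lots")
```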