How to track and report KPI targets on a multi-series dataset
Working with cross-organizational business teams on strategic measure improvements, we have noticed one common analytical requirement across all areas: monitoring and reporting KPI targets.
If we focus on monitoring, the main task is tracking the targets against the actual values. That said, the targets first need to be defined.
However, defining the targets on a multi-series dataset is not always easy for business users. In other words, the complexity rises with each new series in the dataset.
Let's share an example. Say we want to track the revenue KPI for 40+ shops. To do so, we need to define the upper and lower limits of the expected revenue per shop, i.e. 80+ targets. And if we want to track the revenue per shop and country level, we can calculate the number of needed targets with the formula: number_of_shops x number_of_countries x 2 (upper and lower target). For example, with 40 shops selling in 5 countries, that is already 40 x 5 x 2 = 400 targets.
To overcome the complexity of defining the KPI targets on the business side, our Data and Analytics team has developed two analytical approaches:
- BigQuery ML approach on shop level: used for setting the targets of continuous measures like revenue or marketing costs.
- The hard-coded approach in Looker on shop-group level: used for setting the targets of ratio measures like cost revenue ratio or return on ad spend.
Let's start with an explanation of the development stages. 🙂
Three main stages define the development framework:
- Stage 1: Defining the KPI targets
- Stage 2: Data modelling
- Stage 3: Insights presentation
In addition, each stage is divided into two steps, as presented in the image below.
In the development part, we will use two different datasets to show the implementation of the above-presented steps:
- (1) the marketing cost dataset, and
- (2) the shop order dataset.
From these two datasets, we will calculate the following KPIs:
- marketing costs — the costs of all traffic channels (e.g. Google, Facebook, etc.) per shop,
- cost revenue ratio (CRR) — the ratio of marketing costs to revenue per shop,
- return on ad spend (ROAS) — the inverse of CRR, i.e. the ratio of revenue to marketing costs per shop.
As mentioned in the introduction, the aim is to show how to monitor and report the strategic KPIs with two distinct approaches.
So, let's start with the hands-on part in Looker. 🙂
Step 1/2: BigQuery ML approach: setting targets for the marketing costs
We will use the BigQuery ML (BQML) time series model ARIMA_PLUS to find the marketing costs targets.
Generally speaking, to train the BQML multiple time-series model, we need to define the following [1]:
- model_type = the type of the time-series model, i.e. ARIMA_PLUS
- time_series_data_col = the target (prediction) variable = marketing costs
- time_series_timestamp_col = the timestamp variable = the log date of the costs
- time_series_id_col = the series variable = shop
- horizon = the prediction horizon, i.e. how far into the future we want to predict
- data_frequency = the timeframe frequency, i.e. daily, weekly, monthly, etc.
After explaining the model architecture, we can place the BQML model in a single LookML view:
The results of the model are saved to your Looker project's predefined database (check the Admin section → Database → Connections).
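For illustration, a minimal sketch of such a view could look like the following. This is a hedged example, not the actual project code: the datagroup, dataset, table, and column names (marketing_cost_dataset.costs, cost_log_date, marketing_costs, shop) are assumptions.

```lookml
# Hedged sketch — dataset, table, and column names are assumptions.
view: marketing_costs_forecast {
  derived_table: {
    datagroup_trigger: daily_datagroup
    create_process: {
      # Step 1: train one ARIMA_PLUS model across all shop time series
      sql_step:
        CREATE OR REPLACE MODEL looker_scratch.marketing_costs_arima
        OPTIONS (
          model_type                = 'ARIMA_PLUS',
          time_series_data_col      = 'marketing_costs',
          time_series_timestamp_col = 'cost_log_date',
          time_series_id_col        = 'shop',
          horizon                   = 30,
          data_frequency            = 'DAILY'
        ) AS
        SELECT cost_log_date, shop, marketing_costs
        FROM marketing_cost_dataset.costs ;;
      # Step 2: materialize the per-shop forecast (targets) as the view's table
      sql_step:
        CREATE OR REPLACE TABLE ${SQL_TABLE_NAME} AS
        SELECT shop,
               forecast_timestamp,
               forecast_value,
               prediction_interval_lower_bound,
               prediction_interval_upper_bound
        FROM ML.FORECAST(MODEL looker_scratch.marketing_costs_arima,
                         STRUCT(30 AS horizon, 0.95 AS confidence_level)) ;;
    }
  }
  dimension: shop { sql: ${TABLE}.shop ;; }
  dimension: pm_costs_predicted { type: number sql: ${TABLE}.forecast_value ;; }
}
```

The prediction interval bounds returned by ML.FORECAST serve directly as the upper and lower targets per shop.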
— — — — — — — — — — — — — — — — — — — — — — — — — — — —
Step 2/2: The hard-coded approach in Looker: setting the targets for the cost revenue ratio (CRR) and return on ad spend (ROAS)
This step is simple, as the task is to:
- group the shops into shop groups, i.e. Category/Brand/Performance shop groups,
- set the hard-coded targets for the CRR and ROAS KPIs per shop group.
For this, we will use the manifest file in the LookML layer and create constants for each shop group, plus constants for the CRR and ROAS target values per shop group.
An example of creating the constants is shown in the image below, and the complete manifest code can be found HERE.
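To give a rough idea, a manifest sketch could look like this. The project name, shop names, and target values below are made up for illustration:

```lookml
# Hedged sketch of manifest.lkml — shop groups and target values are assumptions.
project_name: "kpi_targets"

# Shops belonging to one shop group (assumed names)
constant: category_shop_group {
  value: "'shop_alpha', 'shop_beta'"
}

# Hard-coded KPI targets for that shop group
constant: category_crr_target {
  value: "12.5"   # CRR target in %
}
constant: category_roas_target {
  value: "8.0"    # ROAS target
}
```

The constants can then be referenced anywhere in the LookML layer with the `@{constant_name}` syntax.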
Great. Now we can switch to Stage 2 and create a data model to merge the results from the steps developed in this stage.
Now comes the neat part, i.e. the data modelling part.
Step 1/2: LookML modelling: Combining the results in one view
This step includes the development of one derived table that is the product of a UNION of three queries:
- a query that retrieves the target values of marketing costs per shop (BQML model results),
- a query that retrieves the actual values of marketing costs per shop (calculated from the marketing cost dataset), and
- a query that retrieves the actual values of revenue per shop (calculated from the shop order dataset).
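A hedged sketch of such a derived table is shown below. All dataset, table, and column names are assumptions; the real view would reference the actual forecast results table and source datasets:

```lookml
# Hedged sketch — dataset, table, and column names are assumptions.
view: kpi_actuals_and_targets {
  derived_table: {
    sql:
      -- (1) target values of marketing costs per shop (BQML forecast results)
      SELECT shop,
             DATE(forecast_timestamp)        AS date,
             prediction_interval_lower_bound AS pm_costs_lower_bound,
             prediction_interval_upper_bound AS pm_costs_upper_bound,
             CAST(NULL AS FLOAT64)           AS pm_costs_actual,
             CAST(NULL AS FLOAT64)           AS revenue_actual
      FROM my_dataset.marketing_costs_forecast
      UNION ALL
      -- (2) actual marketing costs per shop (marketing cost dataset)
      SELECT shop, cost_log_date, NULL, NULL, SUM(costs), NULL
      FROM marketing_cost_dataset.costs
      GROUP BY shop, cost_log_date
      UNION ALL
      -- (3) actual revenue per shop (shop order dataset)
      SELECT shop, order_date, NULL, NULL, NULL, SUM(revenue)
      FROM shop_order_dataset.orders
      GROUP BY shop, order_date ;;
  }
  measure: total_costs   { type: sum sql: ${TABLE}.pm_costs_actual ;; }
  measure: total_revenue { type: sum sql: ${TABLE}.revenue_actual ;; }
  # CRR follows from costs and revenue; ROAS is its inverse
  measure: crr_actual {
    type: number
    sql: 100.0 * ${total_costs} / NULLIF(${total_revenue}, 0) ;;
  }
}
```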
As you can see from the above view, by calculating the marketing costs and revenue per shop, we were able to get the actual CRR and ROAS values too.
In addition, we compared the actual values to the target values for these two KPIs.
The last part left in this stage is creating a data model on top of this view.
— — — — — — — — — — — — — — — — — — — — — — — — — — — —
Step 2/2: LookML modelling: Creating a data model on top of the view created in Step 1
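A minimal model file for this step could look roughly like the following; the connection name, include path, and view name are assumptions for illustration:

```lookml
# Hedged sketch of a model file — connection, include path, and names assumed.
connection: "bigquery_connection"
include: "/views/kpi_actuals_and_targets.view.lkml"

explore: kpi_actuals_and_targets {
  label: "KPI Targets vs. Actuals"
  description: "Actual and target values for marketing costs, CRR, and ROAS per shop"
}
```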
Voila! We successfully concluded the data modelling part and can now move to the last stage.
We can finally present the data insights, i.e. our three KPIs, in visual form.
Step 1/2: Looker: Create a control chart to track marketing costs
Tutorial:
- Select filters: Marketing Shop Types, Order Date, Shop Brand.
- Select dimensions and measures: Order date, PM Costs Predicted [€], PM Costs Lower Bound [€], PM Costs Upper Bound [€], Above/Below Predicted cost value [%].
- Select the Line chart type to visualise the results.
Functionality:
- With the above-described setup, we can track the marketing costs daily for each shop in the dataset.
— — — — — — — — — — — — — — — — — — — — — — — — — — — —
Step 2/2: Looker: Create a control table to track the CRR and ROAS KPIs
Tutorial:
- Select filters: Marketing Shop Types, Order Date, Shop Brand.
- Select dimensions and measures: Order date, Marketing Shop Types OR Shop Brand, CRR Actual [%], CRR Target [%], ROAS Actual, ROAS Target, Above/Below predicted CRR value [%], Above/Below predicted ROAS value [%].
- Select the Table (Report) visualization type to present the results.
Functionality:
- With the control table, we can track the CRR and ROAS KPIs daily per defined shop group.
A short recap of the implementation stages presented in this article:
- First, we presented two distinct approaches for defining the KPI targets on multi-series datasets: BigQuery ML and the hard-coded approach in Looker.
- Second, we showed how to combine the actual and target KPI values in the data modelling stage.
- Finally, we created a Looker tutorial for building a control chart/control table and presenting the data insights visually.
With these summary notes, we are completing our implementation journey for monitoring and reporting the KPI targets. Happy coding! 🙂