Discovery and Definition of Requirements
We operate on a constant cycle of Build, Measure, Learn and use it to inform future iterations:
- Build: design and develop the prototype
- Measure: use data to see what results we have achieved
- Learn: analyse the data to generate ideas that improve the next iteration
It’s therefore important to define the proper KPIs early on. Once these are defined, we can break them down into specific metrics so that we can track the success of the project objectively. For an example of this, check out this post. Any other metrics we consider useful should also be tracked for reporting purposes.
The next step is less formalized, but you need to make sure that each of these metrics is trackable on a technical level - i.e. how will we record this data? Is it a regular event, does it require the e-commerce plugin, or will we need to push it as a custom event to the dataLayer?
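To make the last case concrete, a custom event push to the dataLayer might look like the sketch below. The event name and parameters are hypothetical, and the global `window.dataLayer` array that GTM actually reads is modelled here as a local array so the example is self-contained.

```typescript
// GTM reads from a global `dataLayer` array; in the browser this would be
// `window.dataLayer = window.dataLayer || [];`. Modelled locally here.
type DataLayerEvent = { event: string; [key: string]: unknown };

const dataLayer: DataLayerEvent[] = [];

// Hypothetical custom event: a user adds an item to their wishlist,
// something neither a regular pageview nor the e-commerce plugin captures.
dataLayer.push({
  event: "add_to_wishlist",
  itemId: "SKU-1234",
  itemPrice: 59.0,
});
```

A GTM Custom Event trigger listening for `add_to_wishlist` would then fire the corresponding tag.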
At this stage we should also clarify the reporting needs of the client. Answering each of the bullet points below will help us to define exactly what metrics are needed, and to ensure that they are accounted for in our proposal.
- How many dashboards are needed?
- What purpose does each dashboard serve?
- Who is the audience of each dashboard?
- Are there other BI tools used (e.g. Adobe Analytics, Tableau) that will require a connector or separate SDK?
Other considerations you will need to clarify with the client are:
- Whether we or the client will provide the proxy server. The default is for us to provide it, since coordinating internally is easier than dealing with the client’s IT department.
- Whether there are any data storage concerns. This is relevant only if a client’s data policy requires them to keep a copy of the data. Our recommendation is to store the data on GA’s servers.
This document sets out our assumptions and definitions of success in an easy-to-read format, giving us accountability and a reference sheet that everyone can agree on. It’s a way for us to make sure that we are tracking the data we need so that we can produce reports later on. It covers:
- An overview of the product or feature
- Benefits to users
- Expected change in behaviour
- Metrics to measure changes
- Expected business outcomes
- Metrics to measure outcomes
This is where we document exactly what needs to be done in order to track the data we need. The structure varies with the type of implementation required, but a basic outline follows below.
We start with a screenshot and a short description of the screen or page to be tracked. Any specific interactions to be tracked are marked on the screenshot. For an app, we also specify the name that should be assigned to that screen.
If there are UI events to be tracked on the frontend, we break down the labelling conventions in a table like the one below, detailing the Category/Action/Label and the associated trigger actions. This way it’s easy to hand over the setup for someone to implement in Google Tag Manager (GTM), or to a frontend developer if needed.
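For illustration, one row of such a table might read Category “Navigation” / Action “Click” / Label “Header logo”, triggered on a click of the site logo. The dataLayer push that GTM would be configured to pick up could look like the following sketch; the `uiEvent` event key and field names are illustrative, not a fixed convention:

```typescript
// Local stand-in for the global GTM dataLayer array.
const dataLayer: Record<string, string>[] = [];

// One UI event following a Category/Action/Label convention.
// In GTM, a Custom Event trigger on `event: "uiEvent"` would read the
// three fields into a GA event tag via Data Layer Variables.
function trackUiEvent(category: string, action: string, label: string): void {
  dataLayer.push({
    event: "uiEvent",
    eventCategory: category,
    eventAction: action,
    eventLabel: label,
  });
}

// Fired from the logo's click handler.
trackUiEvent("Navigation", "Click", "Header logo");
```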
If there are values that should be pushed to the dataLayer on a particular page/screen, we also detail these in a table so that the handover to a developer for implementation is straightforward.
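A handover table of this kind typically maps one-to-one onto a dataLayer declaration the developer places on the page before the GTM snippet loads. A minimal sketch, with illustrative variable names for a hypothetical product detail page:

```typescript
// Page-level values exposed to GTM before the container loads.
// In the browser this would be `window.dataLayer = window.dataLayer || [];`
// followed by the push; modelled as a local array here.
const dataLayer: Record<string, unknown>[] = [];

// Hypothetical values from a handover table for a product detail page.
dataLayer.push({
  pageType: "product_detail",
  productId: "SKU-1234",
  productCategory: "footwear",
  userLoggedIn: false,
});
```

GTM then reads each key through a Data Layer Variable, so no further dev work is needed when tags change.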
We document all of this in a wiki, so that it’s easily accessible to both the client and everyone on the WCL team. It’s a living document, so we record any changes we make in a log and note the version number.
We work with the dev team to put our technical proposal into action: making sure they understand the task and have everything they need to accomplish it, then agreeing an ETA and following up to make sure we stick to the timeline.
For web, we try to minimize dev involvement by using GTM where possible to implement tracking. It’s more flexible, since we can change tags and triggers ourselves and do not need a separate deployment to see changes.
For mobile, using GTM isn’t possible in most cases, so we require frontend resources to implement data layers and set up the wrapper-mapper. The wrapper-mapper structure allows us to send platform-agnostic data, which is combined with platform-specific SDKs so that data can be sent simultaneously to several platforms (e.g. Google Analytics, WeChat Analytics, and Adobe Analytics) without having to implement specific tags for each.
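The wrapper-mapper idea can be sketched roughly as follows. The interface and mapper names are illustrative, not the actual implementation; real mappers would call the respective vendor SDKs where the comments indicate.

```typescript
// A platform-agnostic event emitted once by the app.
interface TrackedEvent {
  name: string;
  params: Record<string, unknown>;
}

// Each mapper translates the agnostic event into one vendor SDK's format.
interface Mapper {
  send(event: TrackedEvent): void;
}

// Illustrative mappers; real ones would call the GA / WeChat / Adobe SDKs.
class GoogleAnalyticsMapper implements Mapper {
  sent: TrackedEvent[] = [];
  send(event: TrackedEvent): void {
    // e.g. firebaseAnalytics.logEvent(event.name, event.params)
    this.sent.push(event);
  }
}

class WeChatMapper implements Mapper {
  sent: TrackedEvent[] = [];
  send(event: TrackedEvent): void {
    this.sent.push(event);
  }
}

// The wrapper fans a single tracking call out to every registered mapper,
// so app code never references a platform-specific SDK directly.
class TrackingWrapper {
  constructor(private mappers: Mapper[]) {}
  track(name: string, params: Record<string, unknown> = {}): void {
    const event: TrackedEvent = { name, params };
    this.mappers.forEach((m) => m.send(event));
  }
}

const ga = new GoogleAnalyticsMapper();
const wechat = new WeChatMapper();
const tracker = new TrackingWrapper([ga, wechat]);

tracker.track("screen_view", { screenName: "Home" });
```

Adding a new analytics platform then means writing one new mapper, with no change to app code.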
Once implementation is complete, we test each part to make sure it has been implemented exactly as per our Technical Proposal - labelling names match, triggers work correctly, and nothing has been missed. We currently test this manually using two different methods:
- GA Real-Time reports - activate screens and events and check for the corresponding entries in the reports*
- GTM preview pane - activate events or trigger dataLayer pushes, and check for the associated event or expected values
*If needed, you can filter by App Version to ensure you are viewing results from the staging version. To do this, click on any of the app versions listed in the Real-Time Screens report, then change the version number at the end of the URL to the one you need to view. For example, in the image below, we would change the 5.3.1 at the end of the URL to 6.0-STAGING.
During this time, we also set up funnels and goals for the key business objectives that require a conversion rate. If required, we can also set up real-time data dashboards so that clients can view the most relevant data as soon as we launch.
It’s also best practice to set up alerts in GA that trigger according to pre-defined rules. These let us know if something has gone wrong with our implementation before we lose too much data.
Use the wiki to ensure that you have tested everything you briefed in. Before we release to production, we should be able to confidently say that everything is working.
Now the fun part - data will start flowing through to our platforms. The depth and requirements of the reports will depend on the client, but at a minimum we provide a simple dashboard with basic user and business metrics.
We usually use Data Studio for these reports (example here) and keep them aligned with the brand’s design. Data should be easy to read - if unspecified, assume the target audience is executive level, so keep it simple.
Once we start producing recommendations, the cycle continues and we return to the first phase of discovery for the new feature, continually improving and refining.
Rather than immediately implementing our recommendations, a good way to ease clients into the build-measure-learn cycle is to suggest A/B testing. This lets us verify our assumptions before rolling changes out to all users.
To understand which features to focus on, we can also use a prioritization scorecard. By scoring each feature on a number of factors (visibility, ease of implementation, etc.), we can see which will have a high impact-to-effort ratio. An example can be seen here.
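The scorecard logic boils down to simple arithmetic; below is a sketch under assumed factor names and weights (the factors and 1–5 scales here are hypothetical, not our actual scorecard):

```typescript
// Each feature is scored on impact-side factors and on ease of implementation;
// a higher ratio means more impact per unit of effort.
interface FeatureScore {
  name: string;
  visibility: number; // 1-5, impact factor
  businessValue: number; // 1-5, impact factor
  easeOfImplementation: number; // 1-5, higher = less effort
}

function impactToEffortRatio(f: FeatureScore): number {
  const impact = f.visibility + f.businessValue;
  // Invert ease so the denominator represents effort (6 - ease on a 1-5 scale).
  const effort = 6 - f.easeOfImplementation;
  return impact / effort;
}

// Hypothetical features for illustration.
const features: FeatureScore[] = [
  { name: "Wishlist", visibility: 4, businessValue: 3, easeOfImplementation: 5 },
  { name: "AR preview", visibility: 5, businessValue: 4, easeOfImplementation: 1 },
];

// Rank features by ratio, highest first: Wishlist scores 7/1 = 7.0,
// AR preview 9/5 = 1.8, so the quick win rises to the top.
const ranked = [...features].sort(
  (a, b) => impactToEffortRatio(b) - impactToEffortRatio(a)
);
```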