This article about the measurement window is part of a special series of blog posts written by our own data wizards, offering a glimpse into the engine room of Mediasynced. In these posts, we shed light on the complexity of real-time TV performance measurement and our robust statistical solutions.
Download the PDF (https://mediasynced.com/wp-content/uploads/2017/12/Blogpost-serie-Measurement-window.pdf) or read the full article below.
When the measurement windows of two spots overlap, the measured uplift of the overlapping part needs to be divided between them.
How long after seeing a commercial does an interested consumer take action? Do they start Googling before the commercial has even ended, or does it take a few minutes of processing before they act? And do they respond differently when a commercial carries a tag-on? This is not only an interesting question from a psychological point of view; it is also highly relevant for accurately measuring the performance of each spot. The question we are interested in is: after how many minutes can we conclude that the immediate effect of a commercial has ended? Or, in more technical terms, after how many minutes should we stop attributing the measured uplift to a commercial? If we measure too few minutes, we won't capture the full effect: minutes that still contributed to the commercial's uplift will be missing from the model. Capturing too many minutes causes similar inaccuracies: the model starts picking up regular variations in traffic, or noise, instead of campaign effects. So we need to find the optimal measurement window for each commercial.
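To make the trade-off concrete, here is a minimal sketch (our own illustration, not Mediasynced's actual model) of attributing uplift with a fixed window: the baseline is estimated from the minutes before the spot, and uplift is the traffic above baseline summed over the window. The traffic numbers are invented.

```python
# Minimal illustration of a fixed measurement window (invented data,
# not the Mediasynced model). Traffic is per-minute site visits.

def uplift_in_window(traffic, spot_minute, window, baseline_minutes=5):
    """Uplift attributed to a spot airing at `spot_minute`.

    Baseline = average traffic over the `baseline_minutes` minutes
    before the spot; uplift = traffic above baseline, summed over
    the `window` minutes starting at the spot.
    """
    pre = traffic[spot_minute - baseline_minutes:spot_minute]
    baseline = sum(pre) / len(pre)
    post = traffic[spot_minute:spot_minute + window]
    return sum(max(v - baseline, 0) for v in post)

# Flat baseline of 100 visits/minute, spike after the spot at minute 5.
traffic = [100, 100, 100, 100, 100, 160, 140, 120, 105, 100, 100, 100]

short = uplift_in_window(traffic, spot_minute=5, window=2)  # cuts off the tail
full = uplift_in_window(traffic, spot_minute=5, window=5)   # captures it
```

With the two-minute window the last minutes of the response tail are missed, so the spot's uplift is underestimated; a window longer than five minutes would only add baseline noise.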
There are multiple ways of tackling this problem. The simplest is to look at your data manually, visualise it, and set your response window to the value that corresponds best with what you see, then use that value for all your campaigns and commercials. A potential problem with this method is that consumers react differently to different commercials and products. The response curve for a beverage commercial is likely to differ significantly from that of a car, and the response curve of a brand campaign differs from that of a response campaign. Even the device used (e.g. mobile or laptop) affects the response curve. For this reason we constructed a dynamic response window. By looking at the correlation between the different minutes (1st, 2nd, …, 10th) we can detect when the curve comes to an end, and we stop measuring when the minutes become more dominated by noise than by the effect of the commercial. This way we ensure that we always set a balanced, dynamic measurement window for each campaign.
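The idea of stopping once noise dominates can be sketched as follows. This is a deliberately simplified stand-in for the correlation-based method described above (the threshold rule, the `2 * noise` cut-off, and the data are our own assumptions): the window is extended minute by minute until a minute's uplift is no longer clearly distinguishable from normal traffic variation.

```python
# Simplified sketch of a dynamic response window (our illustration, not
# the exact Mediasynced method): extend the window minute by minute until
# the per-minute uplift drops into the range of baseline noise.
import statistics

def dynamic_window(traffic, spot_minute, max_window=10, baseline_minutes=5):
    pre = traffic[spot_minute - baseline_minutes:spot_minute]
    baseline = statistics.mean(pre)
    noise = statistics.pstdev(pre)  # spread of normal pre-spot traffic
    window = 0
    for minute in range(spot_minute, min(spot_minute + max_window, len(traffic))):
        # Stop once this minute is dominated by noise rather than the spot
        # (assumed cut-off: uplift within two standard deviations of baseline).
        if traffic[minute] - baseline <= 2 * noise:
            break
        window += 1
    return window

# Noisy baseline around 100 visits/minute, spot airs at minute 5.
traffic = [98, 102, 100, 99, 101, 160, 140, 120, 104, 100, 100]
window = dynamic_window(traffic, spot_minute=5)
```

Because the threshold is derived from each campaign's own baseline variation, a volatile site gets a more conservative window than a quiet one, which is the balancing act the dynamic window is meant to achieve.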