Digital attribution modelling is the process whereby credit for an online user action (usually a purchase) is shared among the various online and mobile advertising channels that may have been used to promote that action to the user. These channels could include (among others):
- Social Media
- Search (Organic and Paid)
- YouTube (and similar video sites)
- Affiliate marketing
- Email marketing
- Web pages
User interactions (touchpoints) are usually captured with cookies on users’ machines, although it is becoming increasingly common to log activity by monitoring logged-in users. Apple, Google and Microsoft, as software providers as well as advertisers, also allocate unique user IDs to their users in order to track their interactions across multiple devices. Each interaction for each user is logged and stored for analysis. The list of touchpoints for a particular user is sometimes referred to as “the engagement stack”. The engagement stack records each interaction with the channels above and the order in which the user encountered them, including, for example, the exact sequence of website pages a user browsed before making a purchase.
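As a minimal sketch, an engagement stack is just an ordered list of logged touchpoints. The field names, channel labels and timestamps below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Touchpoint:
    channel: str          # e.g. "social", "paid_search", "email"
    timestamp: datetime   # when the interaction was logged
    detail: str = ""      # e.g. the specific page or creative seen

# One user's engagement stack, in the order the interactions occurred
engagement_stack = [
    Touchpoint("social", datetime(2016, 3, 1, 9, 15)),
    Touchpoint("paid_search", datetime(2016, 3, 2, 18, 40)),
    Touchpoint("email", datetime(2016, 3, 4, 8, 5)),
    Touchpoint("web_page", datetime(2016, 3, 4, 8, 10), "product_page"),
]

# The ordered channel sequence is the raw material for attribution
journey = [t.channel for t in engagement_stack]
print(journey)  # ['social', 'paid_search', 'email', 'web_page']
```

In practice such records are keyed by a user or device ID and joined across data sources, but the essential structure is this ordered sequence per user.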
The holy grail of attribution is to accurately describe the path taken by a user in terms of how each touchpoint contributed to the eventual action. From this information a media agency could tweak specific characteristics of each piece of content and attempt to direct more users down the same ineluctable route to purchase. Unfortunately, it is likely that this goal will forever remain out of reach. The number of steps in the path, the number of options, the differing habits of users and so on have rendered the problem almost intractable. There have been attempts to use path analysis to determine a common route through the digital environment, but it has been found that even the most common path is taken by only around 1% of users.
The most common approach is not to try to work out a causal sequence but to give credit to interactions based on where in the journey they occurred. The cost of each interaction (if it is part of a media campaign) can be assigned and a cost-effectiveness metric calculated.
Originally, this type of digital attribution was predominantly “last click” whereby all credit for a purchase (or registration or other action) was simply given to the last interaction that the user made. The greatest benefit of this approach is its simplicity but it was quickly realised that, by ignoring the role of other interactions in the user journey, the analysis was misleading and resulted in poor recommendations for allocating budget in the future. While still commonplace, this type of attribution is seldom used by experienced practitioners. More sophisticated approaches have been used with the credit shared across the multiple touchpoints; for example, time decay attribution shares credit across all touchpoints but weighted to assign greater credit to more recent interactions.
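The two rules described above can be sketched in a few lines. This is an illustration of the general idea, not any vendor’s implementation; the half-life and the example journey are invented for the purpose:

```python
def last_click(touchpoints):
    """All credit goes to the final touchpoint before conversion."""
    credit = [0.0] * len(touchpoints)
    credit[-1] = 1.0
    return credit

def time_decay(days_before_conversion, half_life_days=7.0):
    """Credit weighted so a touchpoint's share halves per half-life of age."""
    weights = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(weights)
    return [w / total for w in weights]

journey = ["social", "paid_search", "email"]
days_before = [7, 3, 0]  # days between each touchpoint and the purchase

print(dict(zip(journey, last_click(journey))))
print(dict(zip(journey, time_decay(days_before))))
```

Under last click the email receives all the credit; under time decay every touchpoint receives a share, with the most recent weighted most heavily.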
It may be that there is actually little difference between the performance of many of the different campaign activities, but there will always be spurious effects resulting from random activity which can distort the analysis. These effects may be less prominent if the data analysed covers a longer period, but variation in the planned activity will then affect the results, and the larger volume of data will make analysis costlier and more time-consuming.
A weakness of the approach is that it usually neglects the impact of offline media, e.g. TV, radio and outdoor. For example, if a person searches for a product in response to a TV advert and clicks on a sponsored link, the search will usually be given the credit for any resultant purchase. This is partly because online and offline data are usually stored in different systems and are hard to merge, but also because the uncertainty over whether an individual saw a piece of offline advertising is at odds with the personally identifiable nature of digital media.
Within channels, in particular digital display, there is often an optimisation system to attempt to maximise the effectiveness of the media channel. Because adverts are targeted at particular types of people (by demographic or other identifiable features from cookies / registration data), the demand-side platform constantly monitors the success rates of displayed adverts and can automatically adjust its bidding strategy.
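A deliberately simplified sketch of that feedback loop: nudge the bid for an audience segment up or down in proportion to how far its observed conversion rate sits from a target. Real demand-side platforms are far more sophisticated; the numbers and the learning rate here are invented for illustration:

```python
def adjusted_bid(base_bid, conversions, impressions,
                 target_rate, learning_rate=0.5):
    """Raise the bid when a segment converts above target, lower it below."""
    observed = conversions / impressions if impressions else 0.0
    return base_bid * (1 + learning_rate * (observed - target_rate) / target_rate)

# A segment converting at 1.5% against a 1% target earns a higher bid
print(adjusted_bid(2.00, conversions=150, impressions=10_000, target_rate=0.01))
```

The key point for the argument that follows is that this adjustment reacts to whatever the recent data shows, whether the underlying cause is a genuine shift in behaviour or a random fluctuation.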
Digital attribution is usually used to adjust future campaigns by identifying which properties of the media appeared to be most effective in terms of conversion. These properties typically include:
- Frequency / Recency (How often an identified user should be targeted)
- Inventory (Which sites / publishers / networks are used to deliver the content)
- Targeted users (User profile, user behaviour, re-targeting individuals)
- Geo-targeting (Show the ad / alter the content based on user location)
- Creatives (What creative is most effective for a particular audience)
- Formats (Size / dimensions / location of advert on particular page)
- Bids / budget allocation (What it is worth paying to target a particular user)
- Ad-serving times (What times of day are more effective)
The data is analysed across the full population in order to improve the rules used for future campaign delivery. It is common for monthly reports to be generated and used to tune the campaign. However, each monthly report may be quite different from the others.
There are two possible explanations for these differences:
- User behaviour is extremely fluid and changes on a frequent basis.
- The analysis is over-sensitive to random fluctuations in the data implying changing behaviour when it is not actually present.
Let’s consider the first option and assume that there are significant changes in how people behave on the internet from day to day and over longer periods. There clearly are long-term movements, such as the move from portals like Yahoo to search engines like Google, or the emergence and consolidation of social media sites. In the short term, a particular piece of viral content can cause traffic to spike on a particular site, but this is usually concentrated over a relatively brief period.
The long-term trends are of little importance to frequent attribution analyses. Almost by definition, nothing noticeably changes from one month to the next among these factors. On the other hand, short-term spikes are guaranteed to be noticed. But these sudden fads are unpredictable and hard to repeat which means that they have a disproportionate impact on any analysis. The question that should be asked is not “What sites had high traffic / conversion last month?” but “What sites will have high traffic / conversion next month?”
If the answer is that there is no way to tell, then why assume there is, and that an algorithm will pick it up? In fact, these assumptions may be detrimental to the overall goal of optimising the budget. If a particular site has a random spike in traffic, then it will be noticed not only by your analytics but by other companies’ analytics. This could drive up the cost of advertising on that site in future, without any guarantee of receiving the same number of views. In stock-market terms, it would be like buying at the top of the market without any guarantee of future performance. Meanwhile, another site that had a random drop in traffic may become cheaper but is, crucially, just as likely to experience a random positive fluctuation as any other site. It can therefore be argued that, even if the first explanation is correct, it provides little information that can be used effectively.
Even more unlikely is the notion that users’ preferences for the format of a display ad might vary significantly over time. Or that the type of person that is likely to buy your product changes radically from one month to the next. So why should investment decisions hinge on changes in behaviours that likely aren’t changing? It is much more likely that the same people act in the same way as they always have, with the same general preferences for sites, formats, etc.
The Sandtable approach
Sandtable builds models based on theories of human behaviour. Through deep analysis, we seek to identify the motivations and influences on consumers. This approach rests on the fundamental principle that most aspects of human behaviour are slow to change and that most people do not radically change how they make purchase decisions from month to month. Rather than constantly updating targeting decisions in a short-term tactical manner, it would be better to put greater effort into understanding consumers and making long-term strategic decisions.
It is, of course, likely that gaining this understanding will rely on many of the same test-and-learn strategies already used for optimising digital media. Do consumers respond better to message A, B, or C? Do consumers who come direct spend more on a website than those who arrive via a display advert? What parts of a website are most relevant to users when deciding what to buy? The key difference is that these experiments will be used to support or refute an underlying theory of behaviour, rather than simply being accepted as this month’s “truth” and used as a basis to radically re-allocate budget this month, only to re-allocate it back again next month. When results are unclear, that indicates a need for more data or new types of test, rather than basing decisions on fractions of percentage points. Rather than short-term tweaks which may temporarily increase performance, deeper insight can lead to sustained and continual improvement, as well as supporting decisions for other media channels and in the wider business.
Building up understanding of behaviour allows new opportunities to be identified and acted upon rationally rather than being the slave of multiple competing algorithms.