
Editorial adjacency: one year on

11/07/2017



Digital Business Lawyer
June 2017
Tom Guida, Partner and Head of Media and Entertainment
Robert Grannells, Trainee

In an article written last year for this publication, entitled 'Protecting against detrimental placement of content online,' Tom Guida, Partner and Head of Media and Entertainment at Fieldfisher, highlighted the issues that can arise from the at best discordant, and at worst harmful, placement of content adjacent to advertising on a particular website. Tom also pointed out that although standard form insertion orders tend to favour advertisers over content owners when these conflicts arise, it behoved both parties to be much more forceful in insisting on legal protection. A year on, following concerns arising out of reports of online advertising allegedly being placed next to extremist content, and the consequent withdrawal of advertising by a number of large brands from certain online platforms, Tom and Robert Grannells, also of Fieldfisher, provide a follow-up article on the issues arising out of programmatic online advertising and what media agencies need to do to restore trust in online ad placement.

In early April 2017, large advertisers, largely in response to press reports that Google/YouTube was placing advertising next to ISIS and other terrorist content, began pulling hundreds of millions of dollars in online advertising from the publishers and ad networks that had served their online advertising for years. Some estimates put the cost of this crisis to Google at close to $750 million in revenue in 2017 alone, with similar percentage impacts on other networks.


It would not be fair to criticise advertisers for their apparently slow response to what had in fact been an issue for several years. The issue has recently become acute because of a significant increase in inappropriate advertising/editorial adjacency, driven by three factors:
1. Content is now much more widely syndicated than might have been expected, and is treated much more like advertising: placed on the basis of algorithms and programmatic platforms rather than curated and posted to a particular website;
2. User data and its analysis are much more robust, and website algorithms have grown sufficiently complex that there is less time to check an advertiser's or content owner's editorial adjacency blacklist (see the sketch after this list) and override a user's inferred preference to see a particular piece of content next to a particular advertisement; and
3. Enhanced advertising formats make it much more difficult to screen and index advertising and content so that adjacency filters can prevent improper associations between the two.
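To make the blacklist mechanism in point 2 concrete, the following is a minimal sketch in Python of a pre-bid adjacency check. The category labels and the Placement structure are hypothetical illustrations, not any network's actual API; the point is simply that the check must run within the bid window, which is the time pressure described above.

```python
from dataclasses import dataclass

# Hypothetical content categories an exchange might attach to a page or
# video. Real taxonomies (e.g. the IAB content categories) are far larger.
@dataclass
class Placement:
    url: str
    categories: set  # labels attached to the surrounding content

# An advertiser's editorial adjacency blacklist: categories its ads
# must never appear next to.
ADJACENCY_BLACKLIST = {"extremism", "violence", "drugs", "adult"}

def is_brand_safe(placement: Placement, blacklist: set) -> bool:
    """Return True only if none of the content's categories are blacklisted.

    In a real programmatic stack this check must complete within the bid
    window, which is why it is skipped or approximated when category data
    is missing or arrives late.
    """
    return placement.categories.isdisjoint(blacklist)

# A placement tagged as news passes; one tagged as extremism is rejected.
print(is_brand_safe(Placement("https://example.com/news", {"news"}),
                    ADJACENCY_BLACKLIST))   # True
print(is_brand_safe(Placement("https://example.com/clip", {"extremism"}),
                    ADJACENCY_BLACKLIST))   # False
```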

None of this is surprising: programmatic advertising has gone from zero to almost 80% of the £3.3 billion spent on the online display segment of the market. There has also been a significant uptick in advertising spend across the entire market. In the UK in 2016, ad spend grew by 3.7% to reach £21.4 billion, the seventh consecutive year of market growth. Digital formats continue to dominate, with internet ad spend up 13.4% to £10.3 billion for the year. Previously, advertisers could specifically choose the website, video or online platform on which their ads appeared - for example, a news website whose readership matched the target demographic for their ads. With programmatic buying now the norm, however, the information used to target ads has shifted: instead of a specific website or piece of content that the advertiser chose and had profiled, targeting relies on profiling an audience via cookies and other elements of a digital breadcrumb trail, pieced together to infer an individual's interests.
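As a rough illustration of that shift, the sketch below assembles a toy interest profile from a browsing trail. The page-to-interest mapping and the trail are invented for illustration; in practice the signals come from cookies, tracking pixels and exchange data.

```python
from collections import Counter

# Hypothetical mapping from visited pages to inferred interest signals.
PAGE_INTERESTS = {
    "/cars/reviews/suv": ["motoring"],
    "/sport/football": ["sport"],
    "/travel/city-breaks": ["travel"],
}

def build_profile(visited_pages: list) -> Counter:
    """Aggregate interest signals from a user's browsing trail."""
    profile = Counter()
    for page in visited_pages:
        profile.update(PAGE_INTERESTS.get(page, []))
    return profile

# The ad is then targeted at this profile wherever the user next appears,
# rather than at a specific, pre-vetted website.
trail = ["/cars/reviews/suv", "/cars/reviews/suv", "/sport/football"]
print(build_profile(trail).most_common(1))  # [('motoring', 2)]
```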

Additionally, as highlighted in our previous article, there has been an insufficient uptake of contractual protections such as those in the 4A’s/IAB Standard Terms and Conditions Version for Internet Advertising for Media Buys One Year or Less Version 3.0 (the ‘Standard Terms’) which are used around the world to govern the placement of online advertising. The Standard Terms are advertiser friendly and do put in place powers for advertisers to insist on placement which is not next to potentially objectionable content, but this current crisis shows that they are still not good enough to use as an effective risk mitigant in situations like these.

This model goes wrong when a violent, controversial or hateful video or piece of content is flagged as providing invaluable data on the audience when it provides nothing of the sort, and the advertiser is kept completely unaware of where its advertising is being placed. Such videos or content can amass large numbers of views or page impressions, but usually from a small audience with similar interests. This skews the programmatic systems into treating such content as prime real estate for certain types of advertising, based on the interests of those watching or reading the potentially objectionable content, and the platforms naturally began to place such advertising and to offer it at higher prices.
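A toy illustration of that skew: if a placement is scored on raw engagement alone, a niche video watched repeatedly by a small, homogeneous audience can outrank mainstream content. The numbers and scoring formulas below are invented for illustration only.

```python
def naive_placement_score(views: int, interest_match: float) -> float:
    """Score a placement on raw views weighted by audience interest match."""
    return views * interest_match

def audience_adjusted_score(views: int, interest_match: float,
                            unique_viewers: int) -> float:
    """Discount repeat views from a narrow audience."""
    breadth = min(unique_viewers / views, 1.0)  # 1.0 = every view is distinct
    return views * interest_match * breadth

# A niche video: 100,000 views but only 4,000 distinct viewers sharing a
# strong interest signal.
print(naive_placement_score(100_000, 0.9))           # 90000.0 - looks prime
print(audience_adjusted_score(100_000, 0.9, 4_000))  # 3600.0 - far less attractive
```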

There was an instant reaction from the ad networks and the key players in the programmatic advertising industry. Deploying third party verification of ad placement and better content matching systems was critical, and both were heavily developed following the industry backlash. Advertisers felt that the best way to make the advertising networks listen to their concerns was to 'vote with their pocketbooks' by withdrawing their advertising spend. The advertising networks continued to develop their systems and recognised a key flaw that had previously not been a problem. Notably, search and related written content were largely unaffected by these issues, as they can be easily categorised and processed programmatically.

Naturally, the clients of advertising agencies, and advertisers themselves, generally do not wish to be associated with these sorts of issues and content, and would not ordinarily choose to place their advertising adjacent to, or before, such content.
An additional issue is that content guidelines have proved less than effective at encouraging compliant content. Advertisers had believed that content guidelines would help protect them: long lists of content that could be considered inappropriate, including (but not limited to) sexually suggestive content; violence; inappropriate language; promotion of drugs; and 'controversial or sensitive subjects and events,' including 'war, political conflicts, natural disasters and tragedies.' However, these guidelines were evaded, and not adhered to, because advertising was selected programmatically. Companies therefore reacted immediately to make sure their brands were not associated with such content, but, as another consequence of the programmatic model, selectively preventing this content from being activated for advertising was difficult, as it was often user generated and hard to categorise and recognise programmatically before advertising was placed.

This left advertisers with little option but to pull their advertising from platforms where this was seen to be occurring. The British Government, Mercedes-Benz, Johnson & Johnson, Verizon, AT&T and Enterprise, among others, pulled their advertising from major platforms, fearful of a backlash from potential customers seeing their advertising placed next to questionable content.

The rise of new content types, in particular video, has caused significant difficulties for content rating systems and the policing of content, as video is much harder to classify and process automatically. It is therefore much harder to police effectively without manual processing, or without relying on tagged information or metadata set by the content creator, which can obviously be spoofed or simply incorrect.

While there are policy-level enforcement processes in place to try to prevent content creators from uploading or producing content on platforms that can be monetised via advertising, it is possible to circumvent them. Previously, nearly all video processing was done automatically, but this was limited to scanning the video title, metadata and basic imagery to get a sense of how appropriate the video was. A crowdsourced approach was also used, relying on users to 'flag up' inappropriate content for review by human reviewers, which could then result in advertising being pulled from the content, or the content itself being removed. Naturally, not every video can be checked for controversial content; to do so would be a mammoth task: 300 hours of video are uploaded to YouTube every minute, which would require more than 50,000 full-time staff doing nothing but watching videos for eight hours a day. Yet computer-based processing remains insufficient to tackle difficult content that human reviewers would be able to assess. Such content often has a narrow audience: a piece of Britain First propaganda with fewer than 20,000 views, for example, may never reach users who would consider it controversial, so it would not be flagged for review, yet it would still be classed as popular and potentially ripe for advertising - limiting the usefulness of the crowdsourced approach.
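The staffing figure quoted above can be verified with simple arithmetic, assuming real-time viewing and eight-hour working days:

```python
# Back-of-the-envelope check of the review workload quoted above.
hours_uploaded_per_minute = 300
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24  # 432,000 hours/day
review_hours_per_staff_day = 8                                # one full-time reviewer

staff_needed = hours_uploaded_per_day / review_hours_per_staff_day
print(f"{staff_needed:,.0f} reviewers")  # 54,000 reviewers - 'more than 50,000'
```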

Here, unfortunately, the platforms have eroded some of the trust that advertisers placed in them: they did not look after advertisers' interests carefully enough to tackle this issue before it became a crisis. There is a difficult balance to strike between content creators' freedom to create potentially difficult content, and the creation of content that is advertising-friendly and safe for advertising to sit next to. There is also the argument that, without a relatively permissive platform that allows advertising to be placed against potentially difficult content, the next viral video to attract millions of views overnight could miss out on valuable monetisation and advertising opportunities. Currently, programmatic systems are forcing filmmakers to appeal in order to reinstate their revenue streams, and while these systems do work to prevent the most offensive material from having inappropriate advertising placed next to it, a careful balance must be struck between strict controls and scaring off either advertisers or content creators with large, dedicated fan bases and considerable influence over their followers.

How to bring advertisers back
In our prior article we described contractual protections that allowed liability to be allocated more fairly and gave content owners equal footing with advertisers. But those legal protections were, by necessity, applicable only after an inappropriate editorial adjacency had occurred. We think the next phase of thinking in this area needs to provide more affirmative control mechanisms for everyone involved in the ecosystem.

Media agencies that buy for their advertiser clients must push for more accountability, and for greater powers over their advertising networks to take down advertising placed against specific types of content. The current options are 'all or nothing' - a binary choice of termination rights - but greater flexibility as to platforms and the types of content activated will be critical if trust is to be regained in the industry. Additional protections can be built into contracts to create these rights, and also to provide greater termination options for poor placement or where placement becomes a serious issue. Alternatives to outright termination include options for reimbursement or, borrowing an idea from the technology sector, a service credit regime (sketched below): a mechanism familiar in the software industry and a good incentive, short of termination, to increase service levels and prevent issues such as these from recurring.
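As a sketch of how such a service credit regime might be expressed, the tiers, thresholds and percentages below are invented for illustration; the actual figures would be a matter of negotiation between the parties.

```python
# Hypothetical service credit tiers: the worse the misplacement rate in a
# billing period, the larger the credit against that period's fees.
CREDIT_TIERS = [
    (0.001, 0.00),   # up to 0.1% of impressions misplaced: no credit
    (0.005, 0.05),   # up to 0.5%: 5% credit
    (0.010, 0.15),   # up to 1.0%: 15% credit
]
MAX_CREDIT = 0.30    # above the last tier: 30% credit, plus termination rights

def service_credit(misplaced: int, total_impressions: int, fees: float) -> float:
    """Credit owed for a billing period, given audited misplacement counts."""
    rate = misplaced / total_impressions
    for threshold, credit in CREDIT_TIERS:
        if rate <= threshold:
            return fees * credit
    return fees * MAX_CREDIT

# 40,000 misplaced impressions out of 10 million, on £200,000 of fees:
# a 0.4% misplacement rate falls in the 5% tier, so a £10,000 credit.
print(service_credit(40_000, 10_000_000, 200_000.0))  # 10000.0
```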

Better content guidelines, such as those in the 4A's/IAB Standard Terms, should be encouraged and should be given the full force of contractual enforceability, with remedies for failures to ensure that content adheres to them. Advertisers should also push for greater use of 'advertiser-friendly content guidelines' to help educate those looking to activate their content through advertising; such lists currently exist but are not standardised, widely publicised or carefully adhered to. The industry is responding to placement concerns by introducing more extensive human review of content matching, and by deploying AI and neural networks to learn to better categorise and flag content that is inappropriate for advertising activation.

The deployment of such technology will take time and will never be 100% accurate, but significant strides have already been made in classifying videos, flagging objectionable content and disabling ads from being placed against it. As a sign of how much importance has been attached to tackling this problem, Google has put in place software that is significantly more efficient and effective at recognising such content, and has already classified about five times as many YouTube videos as unsafe for advertising as it had previously over the same period. This machine learning processing of content that is usually much harder to process automatically introduces interesting liability issues, as it shifts the classification of content away from the content creator and back to the platform employing the AI for its advertising classification and tagging procedures. Questions also arise as to what happens when a video is incorrectly tagged by an AI as potentially offensive, and what impact this has on monetisation opportunities for content creators.
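The mis-tagging and liability questions can be made concrete with a sketch of the decision logic such a system might apply. The thresholds, labels and routing below are hypothetical, not any platform's actual rules.

```python
def ad_safety_decision(unsafe_confidence: float) -> str:
    """Route a video based on a model's confidence that it is unsafe.

    A high threshold demonetises aggressively, producing more false
    positives for creators to appeal; a low one lets unsafe content
    through. Where the model, not the creator, sets the tag, the
    platform owns the error either way.
    """
    if unsafe_confidence >= 0.90:
        return "demonetise"      # ads disabled automatically
    if unsafe_confidence >= 0.50:
        return "manual_review"   # borderline: a human reviewer decides
    return "monetise"            # eligible for advertising

for score in (0.95, 0.60, 0.10):
    print(score, "->", ad_safety_decision(score))
```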

Programmatic advertising is here to stay, but advertising networks must work hard, as they are trying to do, to recover confidence in advertising placement accuracy and to prevent problems from recurring. For the moment, manual review and random checking of placements may be the only way to guarantee accuracy to a high enough standard for the industry to regain confidence in programmatic systems. But given the ubiquity of such advertising, and its position as the primary method of advertising placement, it is unlikely that advertisers will be able to move away from it when it is, most of the time, highly accurate and effective. Indeed, this situation is in one sense a demonstration of how effective content analysis and placement engines had become, however insensitive the results: they were correctly targeting popular content at specific types of users and parts of society.

The vast majority of placements are correct and accurate, with only a few outliers causing issues. Employing a more cautious approach to programmatic placement, and giving advertisers greater control over placement and greater options to restrict where their advertising appears, will be key. The giants such as Google and Facebook must continue to build out content identification systems and fight 'fake news' in order to make advertisers comfortable again with placing their brands and reputations back in the hands of automated decisions.

Overall, therefore, one year on, advertisers must insist on greater contractual protections for their advertising to prevent this from recurring. They must insist that content creators and ad platforms are more aware of the actual content next to which their advertising could be placed. Enhancing the structures already available is a good way of insisting that industry standards be adopted, and that higher levels of protection for advertisers and content owners be accepted by the industry.
