Several high-profile companies, as well as the UK government, recently pulled their business from Google’s advertising network. The move was in response to their ads appearing alongside extremist content, including videos promoting anti-Semitism, white nationalism, and terrorism.
Top advertising agencies, such as Havas, have also suspended their relationships with Google following those revelations. Numerous other companies chimed in, demanding answers from Google on how their advertisements could become attached to such content.
In response, Google published a blog post in an attempt to remind people that its mission “has always been to make information universally accessible and useful”. The company also stated that it removed almost 2 billion “bad ads” from its systems last year, along with over 100,000 publishers.
The company, however, serves millions of websites and some 2 million publishers on AdSense. Automated controls generally work well, but there are cases where they obviously fail. Google already provides some control over ad placement, including topic and site-category exclusions for advertisers.
However, the company recognizes that there is a need for even greater control. As a result, Google will be reviewing its policies and making changes in the coming weeks to give companies and agencies more control over where their ads appear.
The fact that ads from various companies have appeared alongside extremist content should not really come as a surprise to anyone. As mentioned before, Google does have some automated systems in place that aim to serve appropriate ads to everyone.
Google, however, does not distinguish between the types of content served in its network. A few months ago, the company updated its ad policies to penalize content found to be misrepresentative. Many understood this to mean that Google was addressing the issue of “fake news”, which has been a major point of discussion in recent months, particularly around the US elections.
That, however, was not the company’s intention. Though Google did initially talk about “fake news”, it quickly distanced itself from the subject. Instead, it only wishes to police news that is misrepresentative, or news from sources that misrepresent themselves. It does not wish to police news, whether legitimate or fake, from sources that accurately represent what they are.
For instance, an anti-Semitic website spreading Holocaust-denying news is, by this standard, not misrepresentative, because there is no case of mistaken identity. A website that purposefully spews out fake news while presenting itself as a legitimate source of information, however, would come under Google’s scrutiny.
Whether companies like Google and Facebook should regulate content on their platforms has also been a major point of controversy. They usually attempt to present themselves as nothing more than tech platforms, though that is no longer the case.
What’s obvious here, however, is that Google’s ad platform does need to give individual brands more control. Even if the company itself does not want to take part in politically charged discussions, it should certainly allow advertisers to make their own decisions and accept the ramifications that come with them.