Google's Alphabet group is responding to criticism of offensive ads by tightening its ad policies. Last week, AT&T, Johnson & Johnson, and other companies, concerned about "brand safety," pulled ads from YouTube because they were appearing next to hate speech.
Bloomberg reports harsh words from industry leaders about Google and Facebook:
In a speech last week, Robert Thomson, Chief Executive Officer of News Corp., a frequent Google critic, said the two digital companies "have prospered mightily by peddling a flat earth philosophy that doesn't wish to distinguish between the fake and real because they make copious amounts of money from both."
Bloomberg also reports that Google has sophisticated artificial intelligence (AI) technology to fight "dangerous and derogatory" ads, but "[a]utomatically classifying entire videos, then flagging and filtering content is a more difficult, expensive research endeavor -- one that Google hasn't focused on much, until now." However, in the past two weeks, the company has flagged or disabled five times as many videos as it had previously. Google Chief Business Officer Philipp Schindler is minimizing the issue, but he does admit it's a problem:
But it's five [times] on the smallest denominator you can imagine. Although it has historically been a very small, small problem. We can make it an even smaller, smaller, smaller problem.
These latest changes seem to be better received than the ones introduced two weeks prior. At least since this announcement, no additional advertisers have pulled their ads, so far.
- Schindler's quote reminds me of former BP CEO Tony Hayward's reference to the oil spill in the Gulf of Mexico as "tiny." Is this a fair comparison? What differences do you see in the two situations?
- Does Google's AI push give you more confidence in its ability to prevent offensive ads? Why or why not?
- Why didn't the company focus on this earlier, and why is it so actively working on this issue now?