Learning the news: Is Facebook to blame or is it the believers?

Russia targeted individuals with politically charged and divisive content on Facebook. But is Facebook responsible for those who believe “junk” news? asks Roger Plothow.

For about two years beginning in mid-2015, Facebook accepted about $100,000 worth of ads from “inauthentic” accounts – likely Russian.

Perhaps particularly alarming to Idahoans is a new report from The Hill that Russians used Facebook to gin up an anti-Muslim protest in Twin Falls last August.

Should we care? Facebook says it does.

“We believe in protecting the integrity of civic discourse, and require advertisers on our platform to follow both our policies and all applicable laws,” wrote Alex Stamos, Facebook’s Chief Security Officer. “We also care deeply about the authenticity of the connections people make on our platform.”

He reports that Facebook found another $50,000 worth of “potentially politically related” advertising that “may have originated in Russia.” The advertising was mostly just fallacious and only occasionally political, but it’s probably safe to assume the Russians were up to no good.

His blog post goes on to offer a number of proposed solutions to the problem, including adopting a new policy in which “we will no longer allow pages that repeatedly share false news to advertise on Facebook.”

This, of course, suggests that the idea of not allowing pages that repeatedly share false news to advertise on Facebook hadn’t occurred to anyone at the company until recently. It also suggests that the company knew it was happening and, at least for a time, did nothing other than take the advertising dollars.

Does anyone else wonder why, at some point during some meeting at FBHQ (that’s Facebook Headquarters), no one raised a hand and said, “Hey, there’s an election going on and we’re getting some weird stuff out of Russia. Anybody want to look into that?”

No, it took a complaint to the Federal Election Commission filed by the non-profit Common Cause to shake all of this loose.

No one expects every media platform to fully verify all of the information in all of its paid advertising. But if you know that some of your ads or pages “repeatedly share false news,” wouldn’t you just make them go away? This decision does not require particularly deep thinking.

There are a number of takeaways from Stamos’ blog post, but here’s one at the top of the list: Don’t trust social media platforms to clean up the content that appears on them. Ultimately, that responsibility lies with the consumer, not the platform provider. Maybe we should take the same level of responsibility for the information we consume as we do for the food we eat.

OK, that’s probably not the best example, but you get the idea. At the very least we understand that a diet of fried candy bars and soda by the barrel is eventually likely to do some harm to us physically. Consuming the information equivalent of junk food isn’t good for us, either. Facebook, Twitter, Snapchat and Instagram can’t be counted on to serve only what’s good for us.


Roger Plothow is editor and publisher of the Post Register. This is part of a weekly year-long series on media literacy.

