MOST of the modern information media promised months ago that their safeguards had been upgraded to block misleading statements masquerading as news.
However, social media giants Facebook, Google, YouTube and Twitter were slow to address problems with fictitious or inflammatory posts when the Las Vegas shooting tragedy burst into the news 10 days ago.
Once again, users of the tech platforms were sharing untruths and treating speculation as fact. As BuzzFeed's deputy global news director noted on Twitter, Google's "top stories" results at one point featured posts from the notorious 4chan forum speculating inaccurately about the identity of the Mandalay Bay shooter.
Moreover, a reporter for the New York Times documented how Facebook's trending stories highlighted news from Sputnik, a Russian propaganda site, and featured a false post asserting the FBI blamed the slaughter on Muslim terrorists.
At the same time, a "Las Vegas Shooting/Massacre" Facebook group sprang up and quickly grew to more than 5,000 members after the killings. It was run by Jonathan Lee Riches, identified by the Atlantic as a serial harasser with a criminal background and a history of farcical lawsuits.
All of this raises some doubt about whether Google and Facebook -- among the richest and most successful companies in global history -- can create foolproof algorithms that instantly evaluate what content is worth promoting and what content is best ignored in a time of crisis.
It also raises some questions about whether the two companies, which have spent vast sums on artificial intelligence research, can develop reliable ways to protect the public from being manipulated and misled.
Facebook said last week that it would hire an extra 1,000 people to help vet ads after it found a Russian agency bought ads meant to influence last year's election. It's also subjecting potentially sensitive ads, including political messages, to "human review."
Even after those new reviewers are hired and on the job, consumers of online news will need to look closely at what is news and what is not.
Everyone needs to develop the tools for evaluating what content is credible, what is junk, and what absolutely needs confirmation before being shared. Otherwise, good and honest people might be guilty of spreading false or misleading news.