Paper Title: Looking Ahead to the Blurred Line of Truth and Fiction: Is Blockchain the Answer to Fake News?
Date of Paper Submission: April 23, 2018
I wrote this paper in law school as part of a course designed to question how the world would change in the next five years and how policy can be developed to help foster a safe and innovative future.
This paper has not been updated since its submission, so please keep in mind that there is now much more information available on the Cambridge Analytica/Facebook scandal and how it impacted democracy around the world. Also, it appears that the Decentralized News Network is no longer operational, so I would instead ask that you consider the concept of using blockchain to track information rather than the specific example provided in this paper. As a more recent example of how blockchain can be used to help combat misinformation, a company is now using blockchain as a way to track evidence of war crimes committed in the Russia-Ukraine war and store that evidence in a way that cannot be deleted.
I hope that this paper can give you an introduction to the dangers of fake news and introduce you to a potential solution: blockchain. I didn’t know it at the time, but I was effectively researching what became known as Web3. Web3 takes the current internet (Web2) and adds a blockchain layer to provide a system that is transparent and immutable (unchangeable). The technology that I was describing as a method of tracking written and visual information online became what is known as NFTs (non-fungible tokens).
NFTs and blockchain are technologies that have developed massively over the last four years. If you are looking for more information on how NFTs and blockchain work, please see my intro article here. If you are looking for more information on Web3, please see my FAQ.
Introduction
Can you believe that, on April 17th, President Obama gave an address saying, “President Trump is a total and complete dipsh*t”?[1] Well, you shouldn’t. This video was part of a campaign to raise awareness about fake news, demonstrating how modern technology can make it look as though anyone is saying anything.[2] While this video was made in jest, imagine what could be done with malicious intent. Further, imagine if this fake news could be targeted at individuals to manipulate the population at large. This is the world we live in today.
How can we stop the rampant spreading of fake news when it is so difficult to tell what is true and what is false? Blockchain has been suggested as a technology that can immutably record and decentralize authorship to prevent fraud and falsities from being spread throughout the world. But blockchain on its own is likely insufficient to stop the spread of fake news. Initiatives such as reducing fake news sponsorship by advertising companies, educating people about the dangers of fake news, and pairing social media with reputable, blockchain-based news sources may finally make a dent against fake news.
Big Data and Targeted News
Consumers’ use of smartphones, laptops, wearables, and other data-collecting devices generates untold amounts of data every day. In 2017, more personal data was generated than ever before.[3] With this “Big Data”, it is possible for social media to provide targeted news based on consumers’ interests and ideologies. Placing this data in the hands of any one party gives that party the power to influence world events on a scale not seen before. Fake news is an example of such data manipulation. Fake news contains information that is itself fabricated, with little to no verifiable fact.[4]
Facebook generates data based on its users’ interests and actions. Recently, it was discovered that Cambridge Analytica obtained the data of at least 87 million Facebook users.[5] The subsequent public and governmental outrage brought Mark Zuckerberg before the House of Representatives and the Senate to account for Facebook’s possible misuse of data. This particular instance brought about seemingly unprecedented outrage; many consumers sign away the rights to most of their data without much thought, so what was different about this mistake? @ZackBornstein tweeted the answer while Zuckerberg testified in the House: “you wanted a faster way to rank girls by looks and ended up installing a fascist government in the most powerful country on earth.”[6] Consumers and governments are finally starting to realize the implications Big Data may have on their lives, such as possibly swaying the 2016 US presidential election.
Users did not feel as though their privacy was being violated when they agreed to companies’ terms and conditions. Analytics companies like Cambridge Analytica are showing consumers the power of Big Data. While Cambridge Analytica denies it, there have been allegations that the data gleaned from Facebook was used to generate targeted advertising and news based on psychographic profiles.[7] A psychographic profile describes a consumer based on psychological attributes like personality, interests, and lifestyle.[8] Targeted news could suggest stories, often containing false information, catered to Facebook users’ particular interests, thereby encouraging them to vote in elections a certain way.
Progression of Fake News to a Modern Society
Fake news is not a new phenomenon: in the past, States would send messengers carrying false information to rival cities to sow discord.[9] Yellow journalism was used to catch readers’ eyes in newspapers, even though the headlines did not contain true information.[10] Campaigns have often analyzed people’s responses to different strategies and changed their plans accordingly. Why is modern data manipulation and fake news such a problem?
With today’s social media, the dissemination of news is almost instantaneous. Facebook and Twitter users can share stories posted on their newsfeed with hundreds or thousands of people, who then spread the stories even further. With the ability to disseminate news so easily, there is not enough time to fact-check the information before it spreads throughout the country or the world. Even if Cambridge Analytica did not use its Facebook-based psychographic profiles to sway the 2016 election, it may still manipulate people in the future.
Social media has become a source of news for many people. In 2015, a study determined that 61% of Millennials looked to Facebook for news.[11] When a user likes a post, Facebook notes the user’s interest and will suggest similar posts in the future. This concept is known as a filter bubble.[12] When filter bubbles are applied to fake news, users see similar content over and over, creating a possible confirmation bias that the news they are receiving is true (e.g., “I’ve seen this story eight times now; it must be real”). Distinguishing between real and fake news becomes very difficult if there is no time to fact-check information.
Professor Hany Farid from Dartmouth suggests: “We’re decades away from having forensic technology that … [could] conclusively tell a real from a fake.”[13] If we are truly decades away from being able to distinguish real news from fake, how can we trust the credibility of anything we see online? The problem arises not when fake news is generated for amusement, but rather when someone has malicious intent. For example, imagine a manipulated video depicting President Trump declaring nuclear war on North Korea. How would North Korea act in response? If nuclear missiles were indeed launched, North Korea would only have minutes to decide on a course of action. The credibility of news in today’s instantaneous media environment needs to be airtight. Not only must there be a method to determine the credibility of news, but that method must work almost instantaneously, given how rapidly information spreads.
Current Measures for Combating Fake News
Considering the allegations made against Facebook in the past few years, Facebook has taken measures to improve the truthfulness of content shared on its platform. Facebook users can flag content they think is false. Flagged content is outsourced to third-party fact-checking organizations. If the third party finds the content to be fake or misleading, it will inform Facebook, and a warning label will be posted along with the article whenever it is shared.[14] If a content provider is known for providing false information, similar warnings will be posted on Facebook.[15] However, to circumvent third-party fact-checking, fake news companies buy legitimate Facebook accounts from people and use them to propagate fake news.[16] By the time Facebook realizes a user’s account has been compromised, fake news may have spread to hundreds of thousands of people. There needs to be a better system to combat fake news.
Blockchain as a Possible Solution
Blockchain has been hailed as the answer to the fake news problem.[17] Blockchain is a technology that may be overlaid onto many different types of software. Blockchain provides an immutable record of transactions.[18] Verification of a transaction adds the transaction “block” to the “chain” of previous transactions. Each blockchain infrastructure has its own method of verifying transactions depending on its desired use. Reputation-based blockchains typically use a proof-of-stake concept. Proof-of-stake allows a limited number of random users to bid for a chance to verify the transaction.[19] Depending on the application, one or more of these random users will confirm the validity of the block, and it will be added to the chain. The validators earn money/tokens/points according to their contribution and their initial stake.[20]
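As a rough illustration of the proof-of-stake selection just described, the Python sketch below draws validators at random with probability proportional to their stake. This is a toy model under stated assumptions, not any particular blockchain’s implementation; the validator names and stake values are hypothetical.

```python
import random

def select_validators(stakes, k, seed=None):
    """Choose up to k distinct validators, weighting the draw by stake.

    `stakes` maps a validator id to the tokens it has staked; a larger
    stake means a proportionally better chance of being selected.
    """
    rng = random.Random(seed)
    pool = dict(stakes)
    chosen = []
    for _ in range(min(k, len(pool))):
        ids = list(pool)
        weights = [pool[v] for v in ids]
        pick = rng.choices(ids, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # a validator is selected at most once per block
    return chosen

# alice, with five times bob's stake, is far more likely to be picked
print(select_validators({"alice": 50, "bob": 10, "carol": 40}, k=2, seed=7))
```

Real proof-of-stake systems add slashing penalties, epochs, and cryptographic randomness; the stake-weighted draw is only the core idea.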
There have been two types of reputation-based blockchain software proposed: the first uses hive-mind approval of news, while the second uses a finite subset of reviewers. The hive-mind reputation system is based on the premise that the collective wisdom of the crowd will discern true content more easily than a small number of experts verifying the content’s validity.[21] Effectively, users vote on whether they think an article is true. Voting with the majority increases a user’s reputation, while approved articles increase the reputation of their authors. Blockchain is then used to track the users, the authors, and their respective reputations to ensure there is no identity fraud.
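The hive-mind premise can be sketched as a simple reputation update: users who vote with the majority gain reputation, and an approved article also raises its author’s reputation. The Python below is a toy model of that scheme; the `delta` reward and the data shapes are my own assumptions, not part of any deployed system.

```python
def hive_mind_update(votes, reputations, author, delta=1):
    """Apply one hive-mind vote on an article to a reputation ledger.

    `votes` maps user -> True ("article is true") or False. Users on the
    majority side gain `delta` reputation; if the article is approved,
    its author gains `delta` as well. Returns (approved, reputations).
    """
    approved = sum(votes.values()) > len(votes) / 2
    for user, vote in votes.items():
        if vote == approved:  # this user sided with the majority
            reputations[user] = reputations.get(user, 0) + delta
    if approved:
        reputations[author] = reputations.get(author, 0) + delta
    return approved, reputations

approved, reps = hive_mind_update(
    {"ann": True, "ben": True, "cal": False}, {}, author="writer1")
print(approved, reps)  # True {'ann': 1, 'ben': 1, 'writer1': 1}
```

Identity fraud is exactly what this toy model cannot prevent on its own; the blockchain layer’s role is to make the vote and reputation records tamper-proof.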
The second reputation system works like proof-of-stake verification, where a finite number of reviewers are chosen to validate content. Decentralized News Network (“DNN”) is an example of blockchain-based software that uses the subset-reviewer method. DNN makes use of three categories: readers, writers, and reviewers.[22] Writers submit content to DNN to be fact-checked by the reviewers before being given to the readers. A token-based system provides a “currency” incentive for the different groups to remain active in DNN.
Reviewers stake DNN tokens for the chance to review a writer’s submission. Bids are capped to broaden the pool of highest bidders. From among the top bids, seven reviewers are chosen at random to fact-check the content and vote on whether the submission is worthy of being posted.[23] If there is a 4-3 split, the submission is sent to another reviewing panel. Super-reviewers can see reviewers’ past voting behaviour.[24] Reviewing past behaviour is important for identifying bots and preventing them from being used to reduce the credibility of DNN. Reviewers are paid out according to their reputation level. Writers’ reputations are based on the number of times their articles are accepted or rejected, whereas reviewers’ reputations are based on the number of times they voted with the majority and their bid return.
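A minimal sketch of one DNN-style review round, assuming only the mechanics summarized above (seven reviewers drawn at random from the top bidders, a majority vote, and a 4-3 split triggering a fresh panel). The function names, and the `votes_fn` stand-in for a reviewer’s actual fact-check, are hypothetical.

```python
import random
from collections import Counter

def review_round(top_bidders, votes_fn, panel_size=7, rng=None):
    """Run one review round; return "accept", "reject", or "resubmit".

    Seven reviewers are sampled from the highest bidders; each returns
    "accept" or "reject" via votes_fn(reviewer). A narrow 4-3 split
    escalates the submission to a fresh panel ("resubmit").
    """
    rng = rng or random.Random()
    panel = rng.sample(top_bidders, panel_size)
    tally = Counter(votes_fn(reviewer) for reviewer in panel)
    accepts, rejects = tally["accept"], tally["reject"]
    if {accepts, rejects} == {4, 3}:
        return "resubmit"  # too close to call: a new panel re-reviews
    return "accept" if accepts > rejects else "reject"

bidders = [f"reviewer{i}" for i in range(20)]
print(review_round(bidders, votes_fn=lambda r: "accept"))  # unanimous -> accept
```

The random sampling is what makes bot collusion expensive: an attacker cannot know in advance which seven of the top bidders will sit on a given panel.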
The DNN Content Policy is made up of three rules: verifiability, no unsourced content, and faithfulness to sources.[25] The first two rules indicate that all definitive statements must be sourced and must match their source. Faithfulness to sources means the writing does not have editorial bias and the content is represented accurately. If an author would like to editorialize, they must include their thoughts in a separate, labelled section.[26]
Analysis of the Blockchain Solution
While blockchain’s immutable record-keeping prevents malicious users from hijacking other users’ reputations, blockchain infrastructure still contains flaws that could undermine the system. There is nothing stopping reviewers from voting randomly or from encouraging fake news. Thomas Schelling, a Nobel Prize-winning economist, suggested that “a person will tend to choose an option, in the complete absence of any line of communication with a collaborator, that has some significant value or appears more natural or logical than the other option.”[27] Thus, when the parties have something at stake (reputation and/or bids), they will be more likely to vote for the most logical, truthful answer. DNN’s encouragement of diligent review is predicated on the following breakdown:
- Reviewers stake a bid to try and get a review;
- The more at stake, the greater likelihood they will be selected to review a work;
- DNN’s guidelines provide a system of logic for the reviewers to rely on for determining truth;
- Not voting with the majority (the logical approach) = loss of stake and reputation;
- Therefore, it would be expensive to vote disinterestedly or incorrectly.
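The breakdown above amounts to a back-of-the-envelope expected-value argument, which can be made concrete. The reward and penalty rates below are purely illustrative assumptions; the point is only that a reviewer’s expected return turns negative as their chance of matching the majority falls toward coin-flipping.

```python
def expected_net_payoff(stake, p_majority, reward_rate=0.2):
    """Net expected return of staking tokens on one review.

    Matching the majority earns reward_rate * stake; voting against the
    majority forfeits the stake. p_majority is the reviewer's estimated
    chance of siding with the majority. All rates are illustrative.
    """
    return p_majority * reward_rate * stake - (1 - p_majority) * stake

# A diligent fact-checker (say 90% likely to match the majority) profits;
# a random or disinterested voter (50/50) expects a steep loss.
print(expected_net_payoff(100, 0.9))  # positive (about +8 tokens)
print(expected_net_payoff(100, 0.5))  # negative (about -40 tokens)
```

Under these assumptions, diligence is the profitable strategy and random voting is expensive, which is the Schelling-style incentive the bullet points describe.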
Another caveat is that users may collude with one another to achieve their desired outcome.[28] If enough users in the hive-mind system vote a particular way, they could propagate fake news. The users need not be human; Artificial Intelligence (“AI”) bots can be used to spam stories to ensure the majority always votes one way. Even if the bots reach such a low reputation that they are removed based on their past behaviour, new accounts could be continuously created to replace them, and by then it would be too late to stop news that had already spread. Unless there is a more powerful AI system that can pre-emptively remove bots before they vote, I think this problem is unsolvable for the hive-mind method. However, DNN’s method may be sufficient to prevent bots. Reviewer randomness reduces the likelihood of having multiple bots in a single transaction. Provided there are enough honest users (a big assumption in the early stages of the program), each transaction would likely be safe from bot interference. Further, the effort required to review an article makes it difficult for bots to determine what the voting majority will be. Also, users must put their tokens at stake to review, so it would be expensive for large numbers of bots to try to sway the system.
The next problem deals with the nature of news itself: news is not necessarily objective. DNN reviewers may be easily confused about the truth of articles because the content remains in a grey zone comprising mixed truth and lies, or mixed fact and opinion. A solution may be to have an alternative category for articles involving ambiguous or unproven truth; such a category could be labelled as ‘uncertain’ to ensure readers do their due diligence before accepting the truth of the article. Similarly, a satirical category could be created to prevent valid content from being removed. Satire tells truth through falsehood, a concept that may be difficult to ascertain without sufficient context.
Political bias may also present difficulties for DNN reviewers. DNN’s whitepaper suggests authors must separate their own opinions from the rest of their work, but this separation does not account for inherently biased written works. For example, newspapers are typically known for leaning politically right or left. Readers often take this bias into account when determining the truth of the articles. Readers also enjoy articles with differing political views to better understand the political field as a whole. DNN could include a filter for sorting authors into their political stances. Similar to Facebook’s warning of questionable content, DNN could provide warnings that certain authors appear politically biased.
The next flaw deals with the issue of time. The time DNN takes for review is not negligible. The first news source to publish a breaking story will likely receive the most attention, and therefore have the highest likelihood of earning money from consumer consumption (buying hard copies, clickable ad revenue, etc.). Consumers want their news content as soon as possible, which is why newspapers have fallen by the wayside. Newspapers’ next-day news is too slow for today’s instantaneous society. DNN may also be too slow due to the time taken for review. Consumers will be able to receive their news faster from less reputable sources than through DNN because those sources do not go through a validation process. By the time DNN is ready to submit an article to its readers, a fake story may have been published through a different infrastructure and spread around the world on social media. Therefore, I think that while the idea behind a reputation blockchain system such as DNN is admirable, it cannot be the panacea to fake news some believe it to be. More needs to be done, perhaps in conjunction with DNN, to combat fake news.
Additional Steps in the Fight Against Fake News
DNN may provide infrastructure for users to access news that is most likely truthful, but as stated earlier, most people in younger generations receive their news through social media such as Facebook and Twitter. DNN does not solve the problem of social media as its own news source; it merely provides an alternative to social media news.
DNN, or software like DNN, could partner with Facebook to create a dedicated news section for reputable content. However, Facebook users often share news they know to be false, for amusement rather than because they find it credible.[29] Further, 59% of links are shared without anyone clicking on the article.[30] The problem therefore arises when people read news initially shared for amusement but believe the information it contains. The spreading of sensational news regardless of its content may stymie DNN’s attempt to provide a platform of reputable content because many people do not care about the truth of the news. Therefore, the culture of news on social media itself may be the problem.
Social media regulation is an ongoing issue in the United States.[31] Care must be taken to ensure that any regulation compelling Facebook to censor news does not violate freedom of speech. Technology is not currently at a stage where it can effectively prevent fake news, and thus the only complete solution would be full censorship of social media by removing the ability to share news. This solution is not acceptable due to freedom of speech concerns, though Sri Lanka recently took this route, banning Facebook and Instagram to stop hate speech.[32]
Short of outright censorship, a solution may be to disincentivize the sponsorship of fake news production by removing advertising revenue. For example, in Macedonia there are hundreds of people generating fake news in rumour mills for the advertising revenue because it is more profitable than traditional employment.[33] To cut off that revenue, advertisers could be penalized for their association with fake news websites. States could fine advertisers sponsoring fake news. Facebook could refuse to run advertisements for companies sponsoring fake news and remove their content from the platform. While this solution would not deter actors with a particular aim in mind or who revel in sowing chaos (e.g., the Russian government), it would discourage those who generate fake news purely for the revenue.
Locating fake news rumour mills might not be enough to stop the flow of fake news. Social media knows no physical boundaries and is thus an international problem. Facebook could partner with local governments to track fake news posted to Facebook. Governments could limit the advertising available to these rumour mills. However, why should Macedonia care that fake news is sowing discord in the US, if they are receiving advertising revenue? Perhaps an international coalition could be formed to provide regulation for penalizing States knowingly harbouring rumour mills. Even if such a coalition were to reduce fake-news-for-profit, there would undoubtedly be some fake news sources that remain. The line between truth and fiction may be blurry, and penalizing advertisers based on their association with fake news may be disproportionate considering some fake news sources may merely be the result of bad journalism. Care must be taken not to encroach on people’s right to free speech.
The measures listed above attempt to stop fake news at its source, but what about stopping fake news at its end? Fake news is only a problem because readers allow it to be a problem. Consumers read stories that they believe to be true, sometimes despite signs that the information is false. The best solution to this problem is education. Facebook instituted an “educational tool” that gave users information for determining if an article was fake, but this tool was removed after only a few days.[34] Regulation could require social media platforms to offer and promote such educational tools. Further, States could fund media literacy classes to teach students how to tell if an article or video is fake. Checking whether an article is real may be as simple as checking the URL of the webpage or clicking on a single source in the article.[35] Within five days, the fake President Obama video had been viewed 3.7 million times, showing how easily viral videos can reach millions of people. Campaigns like this video can be broadened to show the possible disastrous consequences of spreading information without corroborating its validity.
Conclusion
In conclusion, blockchain may be a useful tool in the fight against fake news, but it is merely a tool. Users must be educated about fake news to wield this tool effectively. Short of ubiquitous education, blockchain can be used in conjunction with other initiatives to combat fake news. Tracking and penalizing advertisers who sponsor fake news rumour mills may help reduce fake news at its source. However, completely stopping fake news at its source is likely impossible, as is determining whether all news is true or false. An educational program should be initiated to instruct social media users from a young age on the dangers of fake information. This education program could be an international venture encouraging governments to work together to reduce the impact of fake news. The best tool against the effects of fake news is the well-informed and educated reader.
[1] YouTube, “You Won’t Believe What Obama Says In This Video! 😉” (April 17, 2018), online: < https://youtu.be/cQ54GDm1eL0 >.
[2] Ibid.
[3] “More data will be created in 2017 than the previous 5,000 years of humanity” (December 23, 2016), online: App Developer Magazine <https://appdevelopermagazine.com/4773/2016/12/23/more-data-will-be-created-in-2017-than-the-previous-5,000-years-of-humanity-/>.
[4]“’Fake News,’ Lies and Propaganda: How to Sort Fact from Fiction,” (March 30, 2018) online: University of Michigan Library Research Guides, <http://guides.lib.umich.edu/fakenews>.
[5] Caroline Kelly, “Cambridge Analytica whistleblower: Data could have come from more than 87 million users, be stored in Russia” (April 8, 2018), online: CNN Politics <https://www.cnn.com/2018/04/08/politics/cambridge-analytica-data-millions/index.html>.
[6] @ZackBornstein, “when you wanted a faster way to rank girls by looks and ended up installing a fascist government in the most powerful country on earth.” (April 10, 2018 at 12:18 pm) online: Twitter <https://twitter.com/ZackBornstein>.
[7] Adam Lusher, “Cambridge Analytica: Who are they, and did they really help Trump win the White House?” (March 21, 2018), online: Independent <https://www.independent.co.uk/news/uk/home-news/cambridge-analytica-alexander-nix-christopher-wylie-trump-brexit-election-who-data-white-house-a8267591.html>.
[8] William D. Wells, “Psychographics: A Critical Review” (1975) Journal of Marketing Research at 197, online: <https://www.jstor.org/stable/3150443?seq=2#page_scan_tab_contents>.
[9] Nadja Bester, “Can Blockchain Save Us From Fake News?” (March 26, 2018), online: Invest in Blockchain <https://www.investinblockchain.com/fake-news-blockchain-solution/>.
[10] “Yellow Journalism” online: Britannica <https://www.britannica.com/topic/yellow-journalism>.
[11] Amy Mitchell, Jeffrey Gotfried, Katerina Eva Matsa, “Facebook Top Source for Political News Among Millennials” (June 1, 2015), online: Pew Research Center <http://www.journalism.org/2015/06/01/facebook-top-source-for-political-news-among-millennials/>.
[12] “Filter Bubble” online: Techopedia <https://www.techopedia.com/definition/28556/filter-bubble>.
[13] Robert Chesney and Danielle Citron, “Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?” (February 21, 2018), online: Lawfare Blog < https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy>.
[14] Seth Fiegerman, “Facebook’s global fight against fake news” (May 9, 2017), online: CNN Tech <http://money.cnn.com/2017/05/09/technology/facebook-fake-news/index.html>.
[15] Adam Mosseri, “Working to Stop Misinformation and False News” (April 6, 2017), online: Facebook newsroom <https://newsroom.fb.com/news/2017/04/working-to-stop-misinformation-and-false-news/>.
[16] Florence Davey-Attlee and Isa Soares, “The Macedonia Story” (2017), online: CNN Money <http://money.cnn.com/interactive/media/the-macedonia-story/>.
[17] Bester, supra note 9.
[18] Toshendra Kumar Sharma, “How Data Immutability Works in Blockchain?” (September 5, 2017), online: Blockchain Council <https://www.blockchain-council.org/blockchain/data-immutability-works-blockchain/>.
[19] Ameer Rosic, “Proof of Work vs Proof of Stake: Basic Mining Guide” (2017) online: Blockgeeks <https://blockgeeks.com/guides/proof-of-work-vs-proof-of-stake/>.
[20] Robert Greenfield IV, “Reputation on the Blockchain” (November 20, 2017), online: Medium <https://medium.com/@robertgreenfieldiv/reputation-on-the-blockchain-624947b36897>.
[21] Steven Buchko, “The Rise of Fake News” (March 26, 2018), online: Coin Central <https://coincentral.com/blockchains-fight-against-fake-news/>.
[22] Damit Singh and Dondrey Taylor, “Decentralized News Network” (January 18), online: DNN <https://dnn.media/storage/DecentralizedNewsNetworkWhitePaperDraftv1.5.3.pdf> page 8.
[23] Ibid.
[24] Ibid at 28.
[25] Ibid at 37.
[26] Ibid at 40.
[27] Scott Alexander, “Nash Equilibria and Schelling Points” (June 28, 2012), online: Less Wrong <http://lesswrong.com/lw/dc7/nash_equilibria_and_schelling_points>.
[28] Greenfield IV, supra note 20.
[29] Scott Kleinberg, “When you share fake stuff, you mess up Facebook – so stop it!” (October 6, 2015), online: Chicago Tribune <http://www.chicagotribune.com/lifestyles/ct-social-media-fake-shares-20151006-column.html>.
[30] Caitlin Dewey, “6 in 10 of you will share this link without reading it, a new, depressing study says” (June 16, 2016), online: The Washington Post <https://www.washingtonpost.com/news/the-intersect/wp/2016/06/16/six-in-10-of-you-will-share-this-link-without-reading-it-according-to-a-new-and-depressing-study/?noredirect=on&utm_term=.af092aa155db>.
[31] Emily Stewart, “What the government could actually do about Facebook” (April 10, 2018), online: Vox <https://www.vox.com/policy-and-politics/2018/4/10/17208322/facebook-mark-zuckerberg-congress-testimony-regulation>.
[32] Brian Barrett, “What Would Regulating Facebook Look Like?” (March 21, 2018), online: Wired <https://www.wired.com/story/what-would-regulating-facebook-look-like/>.
[33] Davey-Attlee, supra note 16.
[34] Adam Mosseri, “A New Educational Tool Against Misinformation” (April 6, 2017), online: Facebook newsroom <https://newsroom.fb.com/news/2017/04/a-new-educational-tool-against-misinformation/>.
[35] Ibid.