
Social media and news: The dilemma of our time

“Our mission is to give people the power to build community and bring the world closer together,” says Facebook on its Community Standards page. Founded as a social network, Facebook and its executives claim to have no interest in influencing national politics. Yet their algorithm, designed for “seeing the world through the eyes of others,” could be problematic for journalism, for presidential elections, and even for democracy.

Though we are more connected to the world, are we better informed? While metrics determine the news that we see, do those stories deserve the attention they receive? And why does fake news seem unavoidable?

Fake news is not new. In 1835, the New York Sun published a hoax claiming that unicorns and two-legged beavers were found on the moon. The motivation was similar to what we see in the digital age: the newspapers of that era sought circulation and advertising revenue; the digital platforms value clicks, likes, and retweets. But many studies show that the nature of social media may amplify the impact of fake news.

Catalyzing fake news and manufacturing consensus

According to a Pew Research Center analysis, in 2017 two-thirds of U.S. adults got news from social media. With more than 2 billion monthly active users, Facebook leads every other social media site as a source of news. Although Twitter is not nearly as large, its 330 million users represent a 15 percent increase over the previous year, perhaps reflecting interest in President Trump’s use of the platform. The large user bases of both platforms generate all kinds of personal information that can be used for targeted advertising. Dr. Ella McPherson and Dr. Anne Alexander, media scholars at the University of Cambridge, note that the social media profiles and online behavior of individual users can reveal their gender, social status, nationality and personal tastes. Tech companies routinely gather these data and sell them to advertisers.

Yet a recent study suggested that it is the precision with which purveyors target an audience that is the most important catalyst for fake news. By running computer simulations that modeled the way fake news travels online, three network theorists — Christoph Aymanns, Jakob Foerster, and Co-Pierre Georg — found that the key was to seed an initial cluster of believers who would share or comment on the item, recommending it to others through Twitter or Facebook. This helps explain how propaganda can be aimed at specific groups, and thereby influence the outcome of an election.
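The dynamics are easy to see in a toy model. What follows is only a minimal sketch, not the researchers’ actual simulation: a threshold (“complex contagion”) model in Python, in which a user starts sharing a story once enough neighbors are sharing it. The ring-shaped network, the adoption threshold of 2, and the seed counts are all illustrative assumptions, but the sketch reproduces the qualitative finding: the same number of seed accounts reaches far more people when they form a tight cluster than when they are scattered at random.

```python
import random

def spread(neighbors, seeds, threshold=2, max_rounds=1000):
    """Threshold ("complex contagion") diffusion: a user starts sharing
    the story once at least `threshold` of their neighbors share it."""
    believers = set(seeds)
    for _ in range(max_rounds):
        newly = {
            node for node in neighbors
            if node not in believers
            and sum(nb in believers for nb in neighbors[node]) >= threshold
        }
        if not newly:  # the cascade has died out
            break
        believers |= newly
    return believers

# Toy network: 1,000 users on a ring, each tied to their 4 nearest
# neighbors (a crude stand-in for tightly knit follower communities).
n = 1000
neighbors = {
    i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
    for i in range(n)
}

random.seed(42)
clustered = set(range(10))                    # 10 seeds forming one cluster
scattered = set(random.sample(range(n), 10))  # 10 seeds chosen at random

print("clustered seeding reaches:", len(spread(neighbors, clustered)))
print("scattered seeding reaches:", len(spread(neighbors, scattered)))
```

In this toy setting the scattered seeds rarely persuade anyone, since no user sees two believing neighbors, while the clustered seeds reinforce one another and cascade through the whole network: a crude illustration of why precision targeting, rather than raw reach, acts as the catalyst.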

While Facebook and Twitter have both been infected with manipulated, targeted content, they have encountered slightly different problems. Facebook is much bigger and reaches far more ordinary people. Algorithms control the newsfeed and make automated decisions about what users want to see and what they don’t care about. With the emergence of Russian political propaganda, ads promoting hate groups, and fake news, Facebook’s overwhelming power has come under increased scrutiny.

Twitter is less reliant on algorithms, and though its audience is smaller than Facebook’s, it is diverse and influential. “It is where journalists pick up stories, meet sources, promote their work, criticize competitors’ work and workshop takes,” wrote Farhad Manjoo, the technology columnist for The New York Times. “In a more subtle way, Twitter has become a place where many journalists unconsciously build and gut-check a world view — where they develop a sense of what’s important and merits coverage, and what doesn’t.” Manjoo noted that Twitter’s Trending Topics list is often used as an assignment sheet for the rest of the internet.

Unlike Facebook, Twitter offers users anonymity, which opens up the platform to false identities. Manjoo pointed out that cheap, easy-to-use online tools let people quickly create thousands of Twitter bots — “accounts that look real, but are controlled by a puppet master.” Bots speed up the process of discovery and dissemination of particular stories, turning an unknown hashtag into the next big thing. He argued that a trending hashtag creates a trap for journalists who cover the internet: Even if they cover a conspiracy theory only to debunk it, they’re most likely playing into what the propagandists want.

Manjoo also cited Samuel Woolley, the director for research at Oxford University’s Computational Propaganda Project, who says that bots are engaged in what he calls “manufacturing consensus”: building the illusion of popularity for a candidate or a particular idea.

Given the sheer volume of information, finding the most up-to-date, newsworthy story is challenging for users, especially journalists. Social media platforms offer metrics as a guide, prioritizing popularity and consensus so that the most popular stories receive still more attention. In the print era, by contrast, front-page stories were chosen by editors drawing on experience and intuition. Their criteria weighed not only what would attract audience attention but also which stories deserved it. The journalist, at one time, was also the educator.

The irony is that though the digital age has given us diversity and infinite variety, metrics narrow the range of what we see. Shouldn’t the audience ask for something more?

Tech companies or media platforms?

Fake news has also gotten a boost from the decline of traditional news organizations. “A mechanism that held fake news in check for nearly two centuries — the bundle of stories from an organization with a reputation to protect — no longer works,” said Tom Standage, the deputy editor of The Economist. When algorithms replace humans and social media supplants the judgment of editors, the question becomes: Can facts be verified without traditional journalistic methods?


Tech companies are almost certainly more willing than the news media to skip the verification process. After all, their business model prioritizes profits over journalistic integrity. Social media platforms leave it to technology to select and deliver the most engaging content for their users, arguing in effect that technology is equivalent to objectivity, and that objectivity is the best possible option. What this ignores is that technology itself is subject to manipulation. Flawed algorithms lead, in turn, to flawed journalism — or, rather, to flaws in the automated decisions that control what news users see and don’t see. We may be discovering that human expertise, the research and interviews conducted by journalists, is crucial to society, as are shared norms and values.

McPherson and Alexander pointed out that the fragmented nature of social media invites omission and falsification, because audiences have difficulty establishing the source, place, and time of a post’s production. (This is very different, for instance, from being able to see that a particular news article was published by The Washington Post on a certain date.) Audiences lack cues and context when absorbing online information. According to the Society of Professional Journalists’ Code of Ethics, journalists should take responsibility for the accuracy of their work, “verify information before releasing it,” and “never deliberately distort facts or context.” Algorithms can’t do that.

“This is why the machines, no matter how smart, are never going to be as sophisticated as a human,” said Tom Lin, a law professor at Temple University, in commenting on the explosion of fake news. “The bots cannot discern humor or nuance. They have no real context. They are just going to execute it on whatever they see.”

Government regulation, always a questionable proposition when dealing with speech protected by the First Amendment, is not a likely solution to the fake news problem. The Federal Communications Commission regulates online media differently from broadcast. While television and radio stations are required to disclose to the FCC the sources of campaign ads that run on their airwaves, the same rules don’t apply to digital ads. In addition, under the 1996 Communications Decency Act, online services aren’t responsible for content posted by their users, even if it’s illegal: If a Facebook user posts something defamatory, the injured person can sue the user but not Facebook. According to Matt Oczkowski, who ran Trump’s data operation through the vendor Cambridge Analytica, there are many such loopholes in the system because the law hasn’t caught up to digital platforms.

Even though Mark Zuckerberg has said that he regretted dismissing concerns about Facebook’s role in influencing the 2016 presidential election, he and other company executives have continued to emphasize that Facebook is a neutral platform. Added Sheryl Sandberg, Facebook’s chief operating officer: “We definitely don’t want to be the arbiter of the truth. We don’t think that’s appropriate for us.” Similarly, Colin Crowell, vice president of policy at Twitter, emphasized that Twitter users themselves —“journalists, experts, and engaged citizens” — correct public discourse every day in real time.

“Can facts be verified without traditional journalistic methods?”

Yet the technology platforms that distribute so much of our journalism have become part of the media landscape whether they like it or not. Natasha Lamb, director of equity research at Arjuna Capital, said that Facebook and Google both prefer to see themselves as neutral, but “they have been transformed into media platforms.” She is concerned that fake news and hate speech may cause a loss in user trust and ultimately present a real risk for the companies. False content is also having a negative impact on “democracy and having an informed electorate,” said Lamb. In addition, social media has become the weapon of choice for extremist groups. Because of the large amount of personal information online, ISIS recruiters and other extremists now personalize their message for individuals, raising all kinds of security concerns.

The consequence of trusting the magic of the platform itself is that fake news undermines the power of thoughtful reporting from reliable sources. It could also be used as an excuse to attack major news organizations, as President Trump has with CNN, the “failing” New York Times, and other media outlets. In such an environment, facts become overwhelmed by the sheer volume of information, making it difficult for the public to sort out what’s true and what’s false. “It is no longer speech itself that is scarce, but the attention of its listeners,” wrote Tim Wu, a professor at Columbia Law School.

In his essay, Wu re-examines the First Amendment in the context of the 21st century and how the logic of censorship has changed in the internet era. He argues that the intention of the First Amendment was to prevent state suppression of dissidents. The jurisprudence of the early 20th century, he writes, “presupposes an information-poor world, and it focuses exclusively on the protection of speakers from government.” But today the supply of speakers is endless, thanks to the massive decline in barriers to publishing and the low cost of speaking. Cheap speech, Wu writes, may be used to “attack, harass, and silence as much as it is used to illuminate or debate.” Therefore, he argues, “the use of speech as a tool to suppress speech is, by its nature, something very challenging for the First Amendment.”

“The irony is that though the digital age has given us diversity and infinite variety, metrics narrow the range of what we see. Shouldn’t the audience ask for something more?”

Moreover, while tech companies often cite the First Amendment as a guarantee of free speech, that guarantee doesn’t apply to their global audience. The United States has relatively relaxed standards on hate speech, but Europe regulates it more strictly. In 2016, European officials pushed Facebook, Twitter, YouTube, and other platforms to remove content they deemed to be hate speech, speech that is constitutionally protected in the United States. The initiative was led by the European Commission, joined by nine member states, including Germany, France and the United Kingdom.


How can we fix it?

Joseph Pulitzer once said, “The more successful a newspaper is commercially, the better for its moral side. The more prosperous it is, the more independent it can afford to be.” In the late 19th century, Pulitzer’s New York World used sensationalism to bring about reforms in the political system, in education, and in public health. Yet the same tools can also be used for ill, and it is the responsibility of media platforms to mitigate the negative influences of business and technology.

One solution for fake news would be for tech companies to take responsibility for the content distributed on their platforms, just as the major news organizations do. In the long term, fake news and hate speech will erode users’ trust and hurt the reputation of the platforms, particularly in a more polarized and partisan society. According to research on the political environment on social media, more than one-third of social media users say they are worn out by the volume of political content they encounter, and more than half describe their online interactions with those they disagree with politically as stressful and frustrating. Social media was envisioned as a way to bring the world closer, but it often leads to tension, misunderstanding, and alienation.

But censoring inappropriate content is very challenging. There is “a fine line between abuse and free speech, and between false and sensational content,” said Elizabeth Dwoskin, a correspondent for The Washington Post.

Major tech companies have introduced measures to tackle misinformation. Google and Facebook both removed some fake news sites from their advertising networks, cutting off the income that motivated some purveyors of fake news. Facebook has also partnered with fact-checkers such as Snopes and PolitiFact: users can flag any story they suspect is fake, and if the fact-checkers rule it false, it is tagged as “disputed” and its ranking in the newsfeed is lowered.
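For readers who want that flow spelled out, here is a minimal, purely hypothetical sketch of the flag-review-demote pipeline in Python. The names, the review threshold, and the demotion factor are invented for the example; Facebook has not published its actual parameters.

```python
from dataclasses import dataclass

# Illustrative assumptions only, not Facebook's real parameters.
FLAGS_BEFORE_REVIEW = 100  # assumed number of user reports that triggers review
DEMOTION_FACTOR = 0.2      # assumed ranking penalty for a disputed story

@dataclass
class Story:
    url: str
    rank_weight: float = 1.0  # baseline newsfeed ranking weight
    flags: int = 0            # "I think this is fake" reports from users
    disputed: bool = False    # set once fact-checkers rule the story false

def flag(story: Story) -> bool:
    """Record one user report; return True once the story has enough
    reports to be sent to third-party fact-checkers."""
    story.flags += 1
    return story.flags >= FLAGS_BEFORE_REVIEW

def apply_verdict(story: Story, ruled_false: bool) -> None:
    """Apply a fact-checker ruling: false stories are tagged as
    'disputed' and demoted in the newsfeed ranking."""
    if ruled_false:
        story.disputed = True
        story.rank_weight *= DEMOTION_FACTOR

# Usage: 100 users flag a story, fact-checkers rule it false.
story = Story(url="https://example.com/suspect-story")
needs_review = False
for _ in range(FLAGS_BEFORE_REVIEW):
    needs_review = flag(story)
if needs_review:
    apply_verdict(story, ruled_false=True)
print(story.disputed, story.rank_weight)  # True 0.2
```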

In addition, tech companies have started to recognize the limitations of algorithms. Google has employed 10,000 evaluators to flag “offensive or upsetting” content. In May 2017, Facebook said it planned to hire 3,000 more moderators to take down inappropriate and extreme content. Twitter set up a spam-detection team that watches for bot-based manipulation and is improving its tools to spot and shut down bots. Facebook also launched its Journalism Project in January 2017, working with news publishers and making it easier for them to charge for subscriptions. Snapchat will separate pictures and messages posted by friends from professionally produced content from publishers, celebrities, and other “influencers.” The redesign is considered a new effort to challenge Facebook and its algorithmic feed.

Facebook is also considering a newsfeed in which publishers would be deprioritized. The platform has been running tests in six countries, routing posts from publishers into a separate feed called Explore in an effort to differentiate private and public content. Although Facebook says it has no plans to “roll out the test further,” journalism organizations are nervous about investing time and money in the platform.

Yet these measures seem insufficient to stop the spread of fake news, hate speech, and propaganda. The New York Times interviewed a number of experts for recommendations for Facebook. Kevin Kelly, co-founder of Wired magazine, suggested that Facebook offer an optional filter that would keep any post from an unverified account out of the newsfeed. Eli Pariser, chief executive of Upworthy, recommended that Facebook focus on the value of content instead of measuring clicks and likes; the company could survey users on which content provides the most or the least value. Tim Wu advanced an unlikely but attractive notion: “Facebook should become a public benefit corporation. These companies must aim to do something that would aid the public.”

One thing is certain: It’s time for tech companies to loosen their dependence on technology and algorithms and to reflect on human interaction and communication. As engaged citizens of a global community, the executives of these companies should strive to build a better world — something that, in the long run, will benefit them as well as society.

 

Cover image: +Simple via Unsplash.

Huimin Li
