Facebook’s Vindication
New research suggests that social media isn’t causing polarization the way many social media doomers thought.
This is Brain Candy, a non-partisan newsletter about politics and elections. If you’re new here, sign up for free below or click here to learn more.
If you’re already subscribed, thanks for coming back!
For years, people have been claiming that Facebook (now Meta) is to blame for the recent rise in extremism and polarization in America. After Trump’s victory in 2016, progressives said that Facebook made it happen by allowing misinformation and extremism to flood the platform. Then people claimed that Facebook was responsible for the January 6 attack on the Capitol. And in 2021, President Joe Biden said that Facebook was “killing people” by allowing Covid misinformation to spread.
None of this was based on any actual research or evidence. Most of the people claiming that Meta was polarizing the country simply thought it was self-evident. How could locking people into echo chambers and bombarding them with political content not make the country more extreme and polarized?
Well, last week, four gold-standard studies attempting to answer this question were published in two of the country’s most prestigious academic journals, Science and Nature. To conduct the studies, Meta gave the researchers access to its data so they could assess user behavior before and after the 2020 election. The goal was to understand how much Facebook’s platform and algorithm affect people’s political beliefs and whether they are the cause of rising extremism and polarization. The resounding conclusion was that, despite all the noise about Meta’s toxic influence on American politics, the company likely has a much smaller impact on political polarization and division than many had thought.
In one study, researchers tweaked the Facebook algorithm to reduce users’ exposure to like-minded sources. They found that this intervention “had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims.”
In another, the researchers found that reducing the virality of political content “does not significantly affect political polarization or any measure of individual-level political attitudes.”
In a third, the researchers concluded that changing users' feeds from algorithmic to chronological “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes.”
And in the last study, which was observational rather than experimental, the researchers found that “ideological segregation is high and that conservatives are typically more insulated and that they see more misinformation than liberals.” (One wonders: who decided what constitutes “misinformation”? And to what extent did the researchers’ own bias inform this conclusion?)
The ultimate takeaway from all this is that Facebook does not seem to be causing people to become more extreme or polarized. Instead, it’s more likely that people simply use Facebook to seek out content they already agree with and would have sought out anyway.
To me, this makes perfect sense. The idea that serving people political content would be enough to turn them into partisan zombies always seemed a bit far-fetched. People are not empty vessels whose brains are just waiting to be filled with whatever is presented to them. Such a view of human nature is wildly simplistic and immensely condescending. That’s why when people talk about Facebook radicalizing people, it’s always other people who are getting radicalized. Nobody ever thinks that Facebook’s algorithm has made them personally more extreme or irrational. It’s always some other dummy whose brain is getting hijacked.
It follows that if these studies undermine many of the doomers’ claims about social media, they also undermine many of the doomers’ proposed solutions for fixing it. On the left, for example, many have suggested that the government could force Meta to abandon its recommendation algorithm and instead rely solely on a chronological newsfeed. That was the proposal put forward by the “whistleblower” Frances Haugen on the grounds that the algorithm hurts society by promoting divisive and polarizing content. But if the recent research is right in suggesting that getting rid of the recommendation algorithm doesn’t actually make much of a difference, the rationale for this proposal goes out the window.
Other proposals for reining in the negative effects of social media platforms suffer from the same flaw. Populists on both the left and right have suggested removing the legal shield that protects websites from liability for user-generated content. Those on the left want to remove the protection so that platforms will need to be much more heavy-handed about their content moderation, while those on the right want to make the protections contingent on treating all political content equally so as to eliminate what they see as political bias. But if the content that people see on social media has little to no bearing on their political attitudes, then there is no basis for such a dramatic government intervention.
The response from the studies’ critics has been split between some good-faith criticism and lots of bad-faith attempts to wave away or ignore the new research. The honest critics note that while the studies are indeed high-quality and instructive, they are also relatively narrow in scope. Because they only looked at the months surrounding the 2020 election, their findings cannot necessarily be taken to apply to every period of time. Moreover, this is just one set of studies, and more scholarly inquiry is needed to confirm their findings before they can be taken as definitive.
The bad-faith critics, on the other hand, are doing whatever they can to dismiss the new papers. Their response has mostly taken two forms: ignoring the evidence outright or launching ad hominem attacks on the researchers. In its article on the new research, The Washington Post quoted Nora Benavides, an activist who encourages companies to do more to police misinformation on their platforms; she promptly dismissed the new research as one of Meta’s “schemes to dodge accountability” for election misinformation.
The comment section of that article features a number of readers similarly refusing to contend with the new research, choosing instead to lob unwarranted attacks on the integrity of the researchers. A small selection of the top comments:
The most compelling of these comments are the ones that compare the recent studies to research conducted on behalf of tobacco and oil companies in the mid-20th century. But if you take a single second to compare those decades-old studies to the ones released last week, the comparison breaks down. In those older studies, the researchers were paid by giant tobacco and oil companies to conduct research with a conclusion already in mind. In this case, Meta did not pay the researchers, the researchers were allowed to come to their own independent conclusions, and the research was high-quality and rigorous enough to make it into two of the nation’s most prestigious science journals. Moreover, some of the researchers, including lead researcher Brendan Nyhan, are vocal progressives and the very last people you would expect to fudge numbers on Meta’s behalf.
What’s interesting about this absolute refusal to acknowledge scientific evidence is that it’s the kind of behavior typically associated with the MAGA crowd. This time, however, the people burying their heads in the sand are the ones who likely have signs in their front yards proclaiming that they “believe the science.” It appears that their willingness to believe the science is contingent on the science reaching conclusions they already agree with.
In any case, there are ultimately three high-level takeaways from this recent batch of studies and the response. First, social media is likely not the cause of America’s polarization problem. Second, we should not let the most hysterical voices guide our policymaking. Third, we are all vulnerable to confirmation bias and logical inconsistency. If we could collectively learn these lessons, our country would be in a much better place both in terms of our tech policy and our political landscape more generally.
If you enjoyed this post, please share it with a friend and click the ❤️ button so more people can discover this newsletter. And if you’re not already subscribed, sign up for free by entering your email address below and hitting the “Subscribe” button.