
Scaling Bias

May 12, 2016

Facebook got in trouble this week…sort of. If you were already sufficiently awed by the power Facebook has amassed, it may not have surprised you to learn that the process Facebook uses to decide what rises to the top of your newsfeed, and what drops out of it, could be considered “biased,” perhaps even biased in a way that reflects the left-leaning humans behind the curtain.

Because Facebook is a platform that displays and curates news, with an audience that dwarfs that of any self-identified news outlet, we project our standards for journalistic integrity onto an entity that never gave us a good reason to hold it to those standards in the first place. The best reason I can come up with is “Facebook’s grip on discourse is so powerful that I can’t face the possibility that it would distort that discourse in any way.” Again, the outcry seems to come from an innocence we projected onto Facebook, not anything the company said about itself.

If bias is a problem, Facebook is certainly biased in more important domains than American politics, and our focus on this narrow, concrete example shows the limits of our imagination and skepticism. Facebook’s ultimate bias, expressed throughout its fabric, is toward growth: increasing its total active users and increasing the number of hours that existing users spend on Facebook. If the obvious behavior of Facebook’s algorithms in service of that goal doesn’t alarm us as much as a liberal bias in its surfacing of political news (something we’ve seen time and time again), it’s only because we’ve so internalized the values of late capitalism that we lack any vocabulary for criticizing Facebook’s shameless harvesting of our attention.

Another of Facebook’s biases, for example, is its philosophical and practical preference for the subjective over the objective, which again serves the goal of user growth: Mark Zuckerberg would rather show you your own personal feed tailored to your past behavior (hewing cautiously to the familiar and unadventurous) than expose you to a more objective or expansive version of reality. I’m not making a statement here about which is better—that’s a longer essay—but it’s certainly a bias.

The most valuable lesson of this Facebook controversy has been the discussion of what an algorithm is, exactly, given the involvement of a human editorial team at Facebook. Like those human editors, algorithms are never objective. They’re the opposite, in fact: human tools for achieving desired results more consistently. An algorithm is an engine of bias more than an antidote, the codification and repetition at scale of an outcome that an individual or group wants to achieve. Consistently and reliably, algorithms achieve the results their creators intended and little else. If algorithms were less consistent and reliable, they would be more random and therefore more objective, less tied to one group’s intentions. The manual involvement of an editorial staff in Facebook’s news ranking effort is no contradiction of that dynamic: Once algorithms can do the tasks those editors are doing now, they will.
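To make that codification concrete, here is a purely hypothetical sketch, not Facebook’s actual system or anything like it, of what “an algorithm is its creators’ goal written down” looks like in practice: every weight below is a human judgment about what should rise to the top, made once and then applied at scale to every post, for every user.

```python
# Hypothetical feed-ranking sketch (illustrative only, not Facebook's code).
# The "algorithm" is nothing more than its creators' priorities, frozen into
# weights and then applied consistently to every post, for every user.

from dataclasses import dataclass


@dataclass
class Post:
    predicted_clicks: float      # how likely this user is to engage
    predicted_dwell_time: float  # how long the post will hold their attention
    source_reputation: float     # an editorial judgment about the publisher


# These weights are the bias: chosen by people, then repeated at scale.
WEIGHTS = {
    "predicted_clicks": 0.6,
    "predicted_dwell_time": 0.3,
    "source_reputation": 0.1,
}


def score(post: Post) -> float:
    """Score a post by how well it serves the goal the weights encode."""
    return (WEIGHTS["predicted_clicks"] * post.predicted_clicks
            + WEIGHTS["predicted_dwell_time"] * post.predicted_dwell_time
            + WEIGHTS["source_reputation"] * post.source_reputation)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Consistently, reliably, and only ever toward the chosen goal.
    return sorted(posts, key=score, reverse=True)
```

Change the weights and the world inside the feed changes with them; the consistency is the point, not a safeguard against it.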

And reality, to the extent that it’s even separate from the internet anymore, does not always perform better. Benedict Evans points out, “If Google or Facebook have arbitrary and inscrutable algorithms, so do people’s impulses and memories, and their decisions as to how to spend their time.” Facebook’s black box performs no worse than its users, but because it’s a more controlled and regulated environment, it offers a lower probability of exposure to unexpected or counterbalancing forces, in politics or elsewhere—less possibility of correcting the algorithms we already embody. For any individual, Facebook is a monoculture that is always refining itself to be more like what it already is, and refining us to be more like what we already are, or what Facebook wants us to be.

The issue of bias, then, pales in comparison to two broader problems, one Facebook’s and the other ours. Facebook’s problem is that it wants to comprise as much of our experienced reality and waking life as possible, and it’s actually been adept at increasing its share of that reality. That’s a more ambitious goal than programming our political preferences, although it encompasses that, too.

Our problem is that we willingly accept the terms Facebook offers us, and have become increasingly engaged with the platform as a society, but not critical enough of what it algorithmically feeds back to us. At best, Facebook will keep giving us what it thinks is best for us; at worst, it will give us what is best for Facebook. Most likely, we’ll keep getting a bland mixture of the two. The total hours we all spend immersed in Facebook’s mirror universe will continue to be a direct measure of Facebook’s success and of its grip on our collective minds.

Think that sounds like a bad deal? Log off.