Digital Wastelands, Societies of Control

“The story of software eating the world is also the story of networks eating geography,” writes Venkatesh Rao in Breaking Smart, extending Marc Andreessen’s well-known pronouncement. If I reduced the perennial themes of my own blog to one thesis statement, it could be a variant of that: infrastructural networks, often digital but not always, channel and intensify the traffic of human activity that would have to settle for less efficiency anywhere else (though it would probably gain in humanity what it lost in speed).

I constantly compare the pervasive Social Network—Facebook, Twitter, and everything else—to physical urban space because certain dynamics operate so similarly in both environments. It should be obvious by now that the agora, the marketplace, and so many of the city’s classical functions have migrated partially or entirely to an information-saturated ether for which language still hasn’t evolved nearly enough words.

The interplay between the meatspace world we’re seemingly (but not really) always trying to leave behind, and the cyberspace where we’re ending up, is far from a one-to-one mapping. Deleuze explained this distinction in “Postscript on the Societies of Control” well before he could have observed its full realization, describing what he recognized as our collective passage from disciplinary societies to societies of control. The former is characterized by the enclosure, the physical space that literally contains its subjects—schools, prisons, factories, or even offices. Anyone inside the enclosure follows its rules, feels its presence, and escapes from it by physically leaving it. The traditional city is a kind of enclosure, albeit a porous one.

After the enclosure, in the world already apparent to Deleuze in 1992, we find “ultrarapid forms of free-floating control that replaced the old disciplines operating in the time frame of a closed system.” The average reader may not find Facebook quite this oppressive, but the contours of these new digital landscapes are correctly anticipated in Deleuze’s essay, particularly his observation that “in the disciplinary societies one was always starting again (from school to the barracks, from the barracks to the factory) while in the societies of control one is never finished with anything.” You can hear a more optimistic phrasing of this dynamic from techno-optimist Peter Thiel when he says that Facebook has “solved the identity problem.” Either way, you can no longer reinvent yourself by leaving town. Your profile travels with you, as does your credit score and a variety of other attributes by which your weak ties might evaluate you.

When we observe how humans behave in digital social space, however, the differences between physical public space and its digital counterpart become clear. The limitations that the physical world imposes, and the social norms that only emerge through face-to-face contact, yield a specific range of behavioral modes that diversify and run wild online, once freed from those constraints. When we compare networks to wastelands or failed states, this is what we’re talking about: environments that fail to adequately punish certain antisocial actions or reward desirable ones, a tendency on its most extreme display in any online article’s comments section.

On the social networks that offer the freest range of expression—Facebook, Twitter, and dating apps again—three principal modes of behavior emerge: humans behaving like humans, humans behaving like machines (Hackers/Turks), and machines behaving like humans (Bots). A user’s impression of the quality of experience on a given network or platform usually derives from the mixture of these three behaviors that it encourages.

The first and third categories—humans being human, and machines being human—are the more straightforward. The former is the province of anyone who hasn’t fully submitted to the constraints of the digital milieu or consciously adapted to it—in other words, people who try to act online like they would offline. This is a reasonable approach, generally pursued by the majority of network users (normal people)—although there’s not exactly any structural reason why that should be the case.

The latter mode, of course, is the Bot—software that imitates human behavior with various degrees of success. I’ve discussed bots at greater length here. The nature of the digital, and the actions allowed or restricted, are what enable bots and humans to mimic each other with any success. Narrower restrictions on expression, like Twitter’s 140-character limit, create opportunities for bots to mimic humans more successfully, while looser restrictions, the ultimate example of which is real life in meatspace, foil bots or at least limit their spread.

The middle category is the most interesting: humans behaving like machines. The digital presents to us a simplified landscape that therefore rewards a simplified approach to its unambiguous contours. Built out of code—1s and 0s—a world like Facebook or OKCupid offers opportunities for hacking to achieve quantifiable goals, like maximum followers, maximum Likes, or maximum matches. Since each platform is optimized for certain things, these environments don’t present the difficult complexity that might overwhelm or confound similar strategies in the real world.

Of course, there’s a high-level and a low-level approach to behaving like a machine on the internet. We’ll call the high level that of the Hacker, the sophisticated machine imitator who exploits a quantified, rules-driven world to achieve goals that are harder to achieve in the messy physical world. Think of Tinder “hacks” like the guy who always opened his exchanges with the same line, or the Twitter user who follows known rules to get more followers.

The Turk is the low-level imitator of machines: unable to navigate a more complex environment, he finds the limited domain of the network easier to use. He operates rather than creates. The Turk and the Hacker pursue the same end but for different reasons: the former wants to live in a world where his limited skills will get him farther; the latter wants to achieve specific objectives with greater reliability in a constrained domain. Both are finding ways to achieve better results than they could in meatspace.

The extent to which social networks read as failed states or wastelands is the extent to which they are dominated by humans acting like machines and machines acting like humans. The rest of us, who come to places like Facebook because we want another place to be human, will only thrive in these networks to the extent that they limit or restrict the Bot, the Turk, and the Hacker from dominating that experience, and thus allow us to pretend that the digital is an extension of the real world rather than a simulation of it. The constant campaign that the network’s owners must wage to make this happen means that only the most carefully tended gardens will sustainably attract activity that most of us could call human, while the rest of those networks will, at best, merely attract “users.”
