
Cars, Clothes & Battlesuits

Cars belong near the top of a long list of reasons why America is the way it is, but one American quality I’ve never heard attributed to cars is our increasing casualness of dress, which seems to have mirrored our impulse to drive during the past century. There’s no obvious connection between the two phenomena, yet whenever I leave New York for a more American part of America I remember that the city where we drive the least is also our least casual and I wonder if cars are somehow the cause.

Sweatpants, flip flops, sneakers, t-shirts, and baseball hats have pervaded nearly every realm of life except for weddings and funerals, slowly conquering former bastions of formality like the workplace. Technology has been a factor: Watches are mere jewelry now thanks to the digital clocks that accompany us in our phones, but more broadly than that specific side effect, all technology, in a sense, clothes us, augmenting our natural faculties and our bodies. Clothing itself is a technology—Marshall McLuhan called it an extension of our skin that stores and channels energy, an increasingly tactile shell that (especially in America) overthrew the more visually-oriented attire of aristocracy.

For McLuhan, clothing and housing are two different layers of our technological exoskeletons. The city is yet another layer. If the human body moves through the world encased within a protective stack including these components, surely the car has a place in that stack as well, somewhere between the clothing and city layers. Furthermore, we should observe significant cultural differences where any of the layers is minimized or absent altogether, with adjacent layers intensifying to compensate. This is my theory of clothing’s amplified role in New York, where the car layer is anemic compared to elsewhere. When I visited my hometown of Indianapolis over Thanksgiving I didn’t bring a coat with me, knowing that I’d spend all my time in my house or in a car, save for short walks across parking lots. “My car is my coat” was a dumb joke I made. I found myself wondering why anyone who has a car would bring their coat on most errands, or even own an expensive one for daily life.


   Archigram’s “Walking City” (source)

The more layers that encase us, the less is demanded of our bodies themselves. To follow this dynamic to its logical conclusion is to end up with inert humans hibernating in the fluid-filled pods of The Matrix, naked and fully immersed in an advanced technological stack, wrapped in the multiple layers of protection it offers. Wearing sweatpants and spending whole days surfing the internet is not entirely different from that extreme scenario, while traditional urban fabric seems anachronistic by comparison: Walking outdoors on the streets of dense cities, we’re vulnerable and suboptimized, clad in boots and coats rather than temperature-controlled pods of the automotive or Matrix variety.

If cities are an outer protective layer in this ecosystem, on the other hand, maybe we’re not so vulnerable after all—see Matt Jones’ 2009 blog post “The City is a Battlesuit for Surviving the Future,” arguing that the outer urban shell is the most important layer of all. The post’s title refers to Jack Hawksmoor, the protagonist of Warren Ellis’s comic book The Authority, who wraps himself in the city of Tokyo to fight a sentient, time-traveling version of 73rd-century Cleveland. Jones observes that we increasingly wrap ourselves in the city as a defense against all the forces of nature that have assailed humanity throughout history, and in the networked present and future, this can become more true than ever. Jones writes, “The city of the future increases its role as an actor in our lives.” A stronger outer shell, the city, might then enable a more humane life within, while a compromised city layer shifts its functions onto the house and car layers, dividing its inhabitants into more atomized enclaves.

The networked city Jones describes is a different animal than its forebears, and is somehow more tactile than even McLuhan anticipated. We now inhabit the meatspace city, whose previous functions of information exchange have increasingly migrated to digital channels (which are, of course, embedded in its physical fabric). The features of traditional urbanism most likely to intensify under this new regime, to the extent that it continues spreading, are the most difficult to encode: eating, drinking, shopping for specialized merchandise, and the more precious types of human interaction. The meatspace city is a construction of affluence and is far from ubiquitous—it might never be—but even it presents a more comfortable and convivial interior than the car in the suburban wilderness. We’ve lost some of the optimism about networked urbanism that we enjoyed in 2009 when Jones wrote his battlesuit piece, but many of the reasons for that loss are pervasive and only most visible in cities, which are still better armor than most of the substitutes we’ve tried.

Narrative Flash Crashes

As a civilization, we entertain plenty of myths about the way we never were. One of the most attractive is that we were born into a world unmediated by technology, a pure state of nature, before we gradually enhanced or corrupted that world with our inventions over the millennia that followed. From the simplest tools onward, of course, we’ve never been without technology: A lightbulb, a lit match, and a painting on the wall of a cave are all “technology,” extending our bodies, changing our brains, and remaking the environments we inhabit.

If we never had a clear, pure image of the world around us, that image was at least relatively stable even recently, if still distorted, shaped by all of the technologies that Marshall McLuhan called media: books, newspapers, radio, television, music, religious ceremony, language itself. In our hypothetical state of nature we may have apprehended what mattered directly—an approaching storm or a source of food—but that perception has since been wrapped in increasingly complex layers of language, imagery, and context—narrative—most of which was created by other people and transmitted to us, usually with the goal of influencing us somehow. We are narrative-loving animals, and we’re without narratives less often than we’re without clothing.

As with clothing, we’re also much more comfortable with narratives than without them (at least outside the house). We are terrible at apprehending reality directly, so we constantly grasp for some simplifying context anywhere the possibility arises. Narratives are thus immensely powerful: Everyone needs them, and the people who create them program minds, to various degrees. Narratives are also extremely dangerous: Every destructive mass movement in history—Nazism, Stalinism, Maoism—has fueled itself with them.

Fortunately for all of us, narratives are difficult to wield individually. First, there’s a huge capital requirement: Every narrative requires a distribution channel. Do you own a media conglomerate or have a million Twitter followers? Second, and more importantly, our environments are full of information, much of which doesn’t fit a given narrative and some of which may even contradict it outright. If you read that all dogs are brown in the morning paper and see a white dog as soon as you leave the house, that’s a problem for that narrative. Good thing your environment doesn’t respond to your morning news, or even worse, collude with it.

But what if it did? What if the newspaper and the street you stepped out of your front door into were facets of the same underlying system, a system that generated its output in both domains using the same logic and rules? In this scenario, the propaganda you just finished reading in the paper populates your surroundings as soon as you finish. Now all dogs are in fact brown.

This depraved Matrix is our emergent reality and Facebook is its foundation. Facebook’s long-standing fake news problem has risen to urgent prominence since the presidential election for its apparent role in stoking pro-Trump sentiment. If Facebook were just a “news website,” a digital version of some mid-century media form, there would be no problem: It would have lost credibility as a legitimate source of truth and repelled the users who recognized the unreliable nature of its content. But Facebook is an entire environment, a platform where many people live surprisingly huge portions of their lives, a public space as well as a newspaper, radio station, message board, and plenty else, condensing the multiplicative power of all of those media into its firehose blast (and in this broad sense Facebook is correct to argue that it’s not a media company). Ben Thompson characterized Facebook’s power as follows: “The use of mobile devices occupies all of the available time around intent. It is only when we’re doing something specific that we aren’t using our phones, and the empty spaces of our lives are far greater than anyone imagined.” By bundling together as many things as possible that the internet can offer, Facebook has ensured that we’ll come for something we care about and stay for the incidental bullshit, for hours on end.


  A contemporary human habitat (source)

Again, Facebook is an environment in every sense but the physical, so its relationship to narrative truth and falsehood is more complex and powerful than anything fitting within our narrower definition of “media.” We immerse ourselves in Facebook to a fault, check in with it throughout the day, and even fall asleep with it. To compare it to an agora or neighborhood street is to underrate it, because it’s so much more of a public space than those physical places usually are today. Even worse, at least for the goal of curtailing fake news, Facebook responds to us, giving us tools to amplify the content we find emotionally resonant. Because we are narrative animals, this tends to be content that aligns with the existing narratives our brains are primed to accept. We like and share first and verify later, if ever. Rich narratives get richer and poor narratives get poorer, withering away unnoticed. That narrative snowball effect parallels the financial transformations of recent decades, also enabled by unrestrained digital technology, and the 2016 election was its trillion-dollar flash crash.

The fundamental problem with Facebook, then, is that narratives of all kinds get traction more quickly in digital terrain. The physical world is simply hard for individuals to control, especially where information and culture are concerned, but the digital is optimized for focusing and channeling and manipulating information—by definition. We’re not used to the power that can be exercised over narratives in these new spaces. The merging of media content and environment is another phase of what McLuhan called the retribalization of mankind, and we’re discovering that the negative connotations of that euphemism are as true as the positive.

Many have suggested solutions to Facebook’s fake news problem in the past week: internal changes to company policy or externally-imposed rules. It’s possible these solutions would enmesh us in additional layers of algorithmic opacity, with new imperfect algorithms deployed to fix the flaws wrought by today’s imperfect algorithms. Andy Warhol defined art as whatever you can get away with, and any platform enabling its users to create and share their own content will be rife with experiments in what they can get away with. Fake news is likely a feature of these platforms rather than a bug. McLuhan’s most famous dictum was that the medium is the message; perhaps the blurring of fact and fiction is inherent to Facebook, and we can best negotiate that by actively reducing the platform’s grip on our personal and public lives.

Bored by Randomness

The PlayStation game No Man’s Sky promised a revolution for its medium before its release two months ago, getting attention from gamers and non-gamers alike for its “procedurally-generated universe” in which a single 64-bit seed number could randomly create 18 quintillion distinct planets via algorithmic logic, each replete with its own weird flora and fauna. The space explorers playing the game would effectively create each planet upon discovery: Arriving somewhere undiscovered would spur the procedural generation of a random, new, and hopefully fascinating world. It was going to be a major step toward humans getting out of computers’ way in yet another domain, after giving the machines sufficient instructions to make 18 quintillion of something that other people would actually want.
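The mechanism behind that number is easy to sketch: in seed-based procedural generation, no planet is ever stored. Each world is recomputed deterministically from the global seed plus its coordinates, so every player who arrives at the same spot sees the same planet. Here is a minimal sketch in Python (the attribute names, thresholds, and hashing scheme are my own illustration, not the game's actual algorithm):

```python
import hashlib

TERRAINS = ["desert", "ocean", "jungle", "tundra", "volcanic"]

def planet(seed: int, x: int, y: int, z: int) -> dict:
    """Derive a planet deterministically from a seed and its coordinates.

    Nothing is stored anywhere: the same (seed, x, y, z) always hashes
    to the same world, which is how a 64-bit seed can 'contain'
    quintillions of planets without a database behind it.
    """
    digest = hashlib.sha256(f"{seed}:{x}:{y}:{z}".encode()).digest()
    n = int.from_bytes(digest[:8], "big")  # 64 bits of pseudo-randomness
    return {
        "terrain": TERRAINS[n % len(TERRAINS)],
        "gravity": round(0.5 + ((n >> 8) % 1000) / 1000, 3),  # 0.5 to 1.5 g
        "species": (n >> 24) % 12,  # count of native species
    }

# The same inputs always reproduce the same world:
assert planet(64, 10, 20, 30) == planet(64, 10, 20, 30)
```

Because the derivation is a pure function of seed and position, the universe costs nothing to store, and anything outside the player's view simply doesn't exist until it's computed: exactly the solipsism, and the shallow variety, that the reviews complained about.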

Not surprisingly, No Man’s Sky was boring. Its beautiful graphics couldn’t overcome the fact that, on one planet randomly selected from the near-infinite possibilities in the procedurally generated universe, nothing was happening. The variety among these planets was shallow and nominal, the 99.999% virgin territory untouched by any hand that could form it into something interesting. As one reviewer wrote, “There are no grand civilizations sequestered somewhere in this galaxy with Turing Test-passable aliens waiting to wow us with riveting conversation.” The procedural generation process, additionally, means the only parts of the universe that exist are the ones you see—a solipsistic vision of reality that is, again, boring.

Maybe some of the No Man’s Sky planets did end up with compelling advanced civilizations and weird creatures, but they’re too hard to find. Every baseball season has roughly 2,430 games, and if you watch a random handful of these you’ll find them boring as well (unless you go to the games and sit in the sun and eat hot dogs and do everything but watch). Each baseball game is somewhere on a bell curve of expected outcomes, so a single randomly chosen game probably won’t yield a no-hitter or two grand slams by one player in the same inning.

The randomness of baseball is more interesting than that of No Man’s Sky though, because it’s wrapped in the context of an existing culture and infused with meaning from that culture. There are also other ways to make baseball (slightly) more interesting for yourself: Become a fan of one team whose 162 games will excite you more than the other 2,268. The 70 to 90 games that your team wins will especially excite you, and the ones with lots of home runs even more so. Or you could adopt a different strategy and only watch the postseason, in which every game matters and is inherently nonrandom.


   What a procedurally generated planet looks like

What a random universe really needs is editing, in other words. 18 quintillion of something is great for advertising copy but terrible for experience. Some work by other people is still required to make a massive procedurally-generated universe interesting, to put some meaning into it—a map or search mechanism to guide players toward the best parts; a cultural context that imposes meaning on the existing randomness; or a few planets created by human hands. Most games that painstakingly create one world are better than this game with its quintillions of mass-produced worlds.

The 55 snapshots of imaginary cities in Italo Calvino’s 1972 novel Invisible Cities are the opposite of the ennui in No Man’s Sky. Each fantastical city that Calvino’s Marco Polo recalls from his world travels, while also invented (and generated by Calvino’s “rules”), is rich with the meaning and magic that No Man’s Sky promised but couldn’t produce. Calvino’s own work—the brilliant imagination that enabled him to craft each city’s description as well as the editing that removed the meaningless noise in between those vignettes—is why his 55 worlds will outlive the 18 quintillion in No Man’s Sky. 55, it turns out, is plenty.

Five Things

Here’s an assortment of the best things I’ve read on the internet lately, none of which were published in the past week:

Walking to the mall: Ten years ago Tom Chiarella published this essay in Esquire about, well, walking to the mall—in Indianapolis, where walking to the mall makes even less sense than elsewhere. He imbues the journey with an epic quality, stretching it out in time, without forgetting the absurdity of what he’s doing or the nondescriptness of shoulders and embankments meant to be seen at 40+ miles per hour, if at all. In short: You’re not supposed to walk to the mall. A perfect description of the suburban carscape and a nonfiction companion to Ballard’s Concrete Island.

Wardriving: Some family in rural Kansas is being terrorized by strangers because their farm occupies the default geographic coordinates for IP addresses with unknown locations (more of what I described as digital NIMBYism). Companies like MaxMind apparently compile computers’ locations in online databases for sale to advertisers. One of the techniques for collecting that data caught my attention: wardriving, or sending cars driving around to physically collect IP addresses from open wi-fi networks. I keep picturing armadas of vehicles roaming small towns in middle America like the darknet version of Google’s Street View cars. In case you were wondering, the term wardriving is a reference to the “wardialing” done by Matthew Broderick’s character in the movie WarGames.

The Nostalgic Comfort of Normcore Dining: In the search-don’t-sort augmented reality that Google/Yelp/Foursquare ushered in, being ordinary is the only way to hide. I didn’t understand what normcore was until I read this.


   Concrete island (source)

Our Brand Could be Your Crisis: One of the best pieces I’ve come across in a while. I still haven’t seen the Zac Efron movie, We Are Your Friends, that Ayesha Siddiqi reviews here (I’m going to!) but she accomplishes the impossible, writing a thoughtful—brilliant, really—essay about that dreaded topic, the millennial. Required reading for anyone wanting a better grip on the current zeitgeist, the one you and I are too old to understand, just like Snapchat. Siddiqi also sees in the film the cultural evidence of our slow, ongoing economic collapse, which manifests itself in such subtle ways (what Bret Easton Ellis calls post-empire). “We can invent an app, start a blog, sell things online” could become a mantra for all of us. I’ve been looking for a way to build a longer post off of ideas embedded in this essay, but until then I’m stashing it here. Read it. 

Walmart: Last month Bloomberg ran this darker companion to the above essay. This is a truly dystopian look at how much crime happens on Walmart’s properties and the problems that crime creates for the local police forces that have to deal with it all, not to mention the cities from which the megastores have carved out a big privatized chunk. Corporate commercial space is not public space. Enjoy your weekend!

Occupy Twitter

It’s increasingly obvious that Twitter as we know it will stop existing before long. Maybe Facebook will buy and dismantle it; maybe it will successfully turn itself into the profitable ad-friendly platform that all of its users dread (it won’t); or maybe it will just disappear, bleeding away its remaining users as it’s already been doing until there’s nothing left but bots and clueless self-promoters and hateful egg avatars with ten followers each.

Twitter has already embraced the algorithmic feed, which is as shitty as expected, and it will further relax the 140-character tweet limit next week. Having shed its two defining features, Twitter will become a worse Facebook timeline, recognizable only by its inability to curtail trolls and harassment.


  The traditional shrinking city (source)

I wrote in February that Twitter was a shrinking city but now it’s a city in full collapse. The parallels abound: a growing presence of unchecked dysfunction; an exodus of permanent citizens along with their economic contributions; the creeping presence of opportunists who hope to buy up its valuable parts and trash the rest; the sense that it was a better place back in the day.

One way or another, you (if you still use Twitter) and I will probably have to leave Twitter eventually. This is a true tragedy—many of us only talk to one another on Twitter and could never have formed certain communities without it. Like every collapsing community, Twitter is sure to further debase itself before finally forcing us all out, ensuring a messy exodus.

We should all keep in touch. Let’s decide now where we’re meeting up after Twitter dies. I suggest we meet in Zuccotti Park. If we’re lucky @dril will show up.

We should meet in Zuccotti Park because the internet isn’t the free outlet or the escape from physical constraints that it once was. Occupy Wall Street celebrates its fifth anniversary this week, and five years is a long time. In 2011, Twitter was cool—cool enough that it could function as a support system for a movement like Occupy. Now, Twitter is dying because it can’t survive in an ecosystem that requires it to grow profitably, and the internet is no longer an outlet from overprogrammed, corporate urban space but more and more a mirror of that space, which forces out the weird and the unmonetized.

Now, more than it did five years ago, a place like the Zuccotti Park of Occupy Wall Street feels like a haven from the internet’s panopticon, maybe still a place to make a noise, but not a noise that the internet would reliably amplify. If Twitter continues its decline, there will be few digital spaces left that do what it did in its prime, but maybe physical space can again.

Invisible Maps, Beautiful Numbers

“Where we’re going, we don’t need roads,” Doc Brown announces at the end of Back to the Future. Revisiting that line today, roads look like a sure thing for the foreseeable future—it’s more likely that where we’re going, roads won’t need us.

For maps more than roads, the future is uncertain. Maps are as important as ever but somehow vanishing from their familiar haunts and reappearing everywhere else. A little more than a decade ago, a map was something you found printed on paper that helped you fumble toward a new destination, an item you packed for a road trip. Now maps show up anywhere, used in daily life passively and actively, guiding us through the familiar as well as the unknown, appearing in NY Times blogs, on TV screens in the middle of transatlantic flights, and in almost every video game. Throughout the spectrum between pure entertainment and pure utility there are maps everywhere.

Maps got more important after iPhones ensured we’d have them at our fingertips all the time. Tools began emerging that used maps in more sophisticated ways—searching for what’s nearby (Yelp, Tinder), tagging locations (Facebook, Instagram), or augmenting geographical reality (Pokémon Go). Driving, of course, expanded the everyday need for maps decades before GPS devices could narrate directions in real time, if not eventually drive the cars themselves.

In most of the examples just listed, there’s no fundamental need to read a map or even see one. The map just works in the background, another invisible algorithm that frames reality. Navigational maps are going “under the interface,” as I’ve written before. Nicholas Carr observes, “It would seem to be the golden age of maps and map-reading. And yet, even as the map is becoming omnipresent, the map is fading in importance.”

I wouldn’t put it that way, but we should decide what we mean by “map.” To Carr, a map is visual—a geographical diagram that you look at as you work out how to get where you’re going. But if a map is defined more broadly, a kind of logic, a protocol for navigation, then maps are certainly not losing importance—they are just becoming invisible as they disseminate everywhere (and invisibility is the destiny of so much advanced technology in the digital age).

Carr describes how the look of Google Maps has evolved over time to show less detail and less text (“as a cartographic tool, Google Maps has gone to hell”) while becoming more aesthetically pleasing. This shift may be Google’s effort to optimize its maps for the smaller screens of smartphones, but that doesn’t quite explain it.

The real reason, Carr suggests, is that pictorial maps themselves don’t need to do much—when it’s time to actually navigate, the user enters a destination and gets an optimal route with turn-by-turn directions on a separate screen: “As a navigation aid, the map is becoming a vestigial organ. So why not get rid of the useful details and start to think of the map as merely a picture or an image, or a canvas for advertisements?”

Maps are being unbundled—split into their functional and aesthetic components. Is reading a map something humans are even meant to do?

Recently, after having to memorize a number for a reason I already forget, the following thought popped into my head: Whenever a person is dealing directly with a number, that’s a task that a computer will eventually do.

The modern world has been saturated with numbers long enough to make them feel organic, but humans and numbers mix like oil and water. We are always trying to turn numbers into narratives because we hate numbers and love narratives, and we have animal brains that deal better in generalities than precise calculations. Thus, we buy lottery tickets and feel that flying is more dangerous than driving. Even memorizing numbers is hard for us, but we used to carry countless seven- or ten-digit phone numbers around in our heads because we had no choice.


          Charles Demuth; the future of numbers (source)

By any rational standard, people cannot be trusted with numbers. We got much better at using numbers to our advantage once computers came around to do all the hard work for us. By now, we don’t have to remember many numbers (or many other things), much less do anything with those numbers. We’ve even outsourced remembering birthdays to Facebook. Increasingly we can embrace our natural narrative inclination. The world has become more data-driven because there’s more data being captured, but people are not becoming more data-driven; the tools we operate are.

As numbers go under the interface, like maps, they’ve disappeared from their usual places in the visual landscape. You see a phone number once—when you add it to your address book—and then it becomes a name, forever mapped to a person you know. Online banking eliminated the arithmetic of balancing one’s checkbook. ID numbers and passwords get saved in digital notes or email drafts and copy/pasted (a practice certain to be replaced by a more elegant solution). The list goes on and on. Many still deal with numbers in more sophisticated ways at their jobs, but such work stands at the frontier of what will be eaten by software eventually.

The reason for computers’ widespread adoption is not that they further immerse us all in the world of numbers, the machines’ native language—it’s that they help us escape from numbers and go back to what we do best, which is almost everything else.

In the 1960s, Marshall McLuhan wrote that “the horse has lost its role in transportation but has made a strong comeback in entertainment.” Numbers and maps are undergoing a similar transition now. Both have the same future in the human-readable landscape: aesthetic symbolism. The numbers that matter outside of software aren’t for memorization, addition, or multiplication, but cultural signification: infographics, athletes’ jersey numbers, famous addresses (1600 Pennsylvania Avenue), the numbers in social media username handles. Maps are better than ever as data visualization tools, but a map seen by a human is the last stop for that data, its sublimation into the realm of the irrational.

Steve Jobs understood this future of digital information and created its look. Apple’s sleek devices and operating systems became the dominant aesthetic of the digital age and did for numbers what Google and others would, in a different way, do for maps, hiding them beneath smooth aluminum surfaces, uncluttered interfaces, and rectangular icons with rounded corners. When we see maps or numbers now, we expect them to look good. Numbers still appear on iPhones where they absolutely must but these exceptions prove the rule: The red app notification badge icons communicate most of their information through color and shape, not the digit in the middle of each circle.

With fewer maps and fewer numbers to process ourselves, we glimpse a surprising future of algorithmic premodernism. McLuhan said that television, radio, and phones were “retribalizing” mankind by circumventing the role of the printed word and returning us to the mental and social patterns of primitive oral cultures. Apple and Google, in their own way, are completing that retribalization process, freeing us from a few more bulwarks of this rationally-biased era and synthesizing the machine age with ancient tendencies our brains still haven’t outgrown. If maps still look good to us after that synthesis, we’ll decorate the walls with them.

Blackouts & Balloons

“Despite the difficulties or disasters it may have inflicted on thousands of French people, the flood of January 1955 was actually more of a celebration than a catastrophe.”

-Roland Barthes

On Sunday night, JFK shut down after an imagined shooting in Terminal 8 that was really just loud celebration of Usain Bolt’s spectacular 100-meter dash. Shut down is an understatement—the airport exploded into hysterical chaos that only calmed down without consequence because nothing had actually happened. David Wallace-Wells, having just arrived at the airport from Denmark with his wife, paints a scene worthy of a Junkspace Hieronymus Bosch: multiple converging stampedes, TSA agents fleeing and sobbing, the utter breakdown of authority and process.

While it lasted it was a nightmare.

To experience the JFK shooting scare or hear about it later was to glimpse how fragile the megasystems we use daily have become and wonder how much worse a real emergency might have turned out. If you indulge your paranoid side like I frequently do and imagine the various ways that modern civilization’s many interlocking support systems might get hacked, what happened at JFK on Sunday will worry you. That one wasn’t even intentional.

Exactly 13 years before Sunday night, to the day—August 14, 2003—New York was the focal point of a better-known system failure, the Northeast blackout, a night that anyone who lived in the city at the time (I didn’t) will tell you great stories about. Yes, a few people died, more suffered, plenty of property and merchandise was damaged, but for everyone else, the night of the blackout was less terrifying and more thrilling, a welcome break from patterns that had grown stale, like a snow day in school.

While it lasted it was a party.

The closest thing I’ve experienced was another partial blackout, after Hurricane Sandy in 2012, that lacked the same euphoria because the extreme weather kept the same people indoors who were driven outdoors in 2003. Nevertheless, after Sandy I wrote that the hurricane and blackout had deprogrammed parts of Manhattan that had become boring and controlled, restoring a wildness to the urban environment that was fun to see, if only for a few days.


Riding the bus during the 2003 blackout (source)

Why was the recent JFK incident, in which nothing happened, so strange and menacing, while the 2003 blackout, which did real damage and even killed a few people, was such a joyous moment for so many more New Yorkers?

The difference is the environment where each happened. As witnessed over and over again, airports are highly controlled, optimized and engineered, anxious places. When an airport stops working, there’s nothing to do except worry, or even panic. The same is true of enclosed institutions of all kinds, from hospitals to schools, where nobody wants to be any longer than necessary (and which Foucault observed were born of the same control techniques as modern prisons). These environments amplify the worst aspects of human behavior when they don’t work, and suppress them when they do work.

Other environments—the kind where people actually want to be—amplify the best aspects of humanity. Much of New York City, or any functional urban environment, falls into this category. The lesson of the blackouts, in fact, was that (for many) New York became a more humane and less atomized place with the power turned off, the very opposite of what happens inside airports. An electricity-free city is not something anyone wants, but as Barthes observed about the 1955 floods in Paris, it engenders “a world that is more accessible, controllable with the same kind of delectation a child takes in arranging his toys, experimenting with them, enjoying them anew.”

If there’s a simple lesson here for cities or institutions in “peacetime,” it’s that we should design them not to fail as spectacularly as JFK did on Sunday night. The streets of New York City, whatever their problems, exhibit remarkable antifragility during natural disasters, a quality that should be nurtured and extended elsewhere.

For a more complicated conclusion we can look to Donald Barthelme’s short story “The Balloon” (which you should go ahead and read): the narrator constructs a giant balloon between 14th Street and Central Park in Manhattan. For 22 days it looms over everyday life, inspiring diverse reactions, including this:

“There was a certain amount of initial argumentation about the ‘meaning’ of the balloon; this subsided, because we have learned not to insist on meanings, and they are rarely even looked for now, except in cases involving the simplest, safest phenomena. It was agreed that since the meaning of the balloon could never be known absolutely, extended discussion was pointless, or at least less purposeful than the activities of those who, for example, hung green and blue paper lanterns from the warm gray underside, in certain streets, or seized the occasion to write messages on the surface, announcing their availability for the performance of unnatural acts, or the availability of acquaintances.”

Breaks in the mundane, then, are simply when we revert to our most human, whatever that is.

Pokémon Go

Late to the party, I have some thoughts about Pokémon Go:

1. The statement I’ve heard most often about Pokémon Go is that it just showed the world what augmented reality is. For those who already knew what AR is, Pokémon Go shows us what it actually looks like when it becomes relevant—not a sci-fi transformation to a world where we all stagger around with headsets (although we might still get that) but a world in which we all stare at our phones a little bit more than we did before. Too many of our predictions about the technological future assume visually stunning outcomes driven by advanced hardware (an industrial-age bias), but those changes are almost always outpaced by the faster progress of software, running on devices we already have. The near future rarely looks different from the present.

Acknowledging that smartphones are how AR first goes mainstream—now a fact, not a prediction—it isn’t a stretch to also acknowledge that reality has already been augmented for a long time via smartphones, just spatially rather than visually. Pokémon Go, because it’s a game, offers a more immersive and totalizing example, a more literal example of AR. Every smartphone app that takes your location as input or interacts with your environment—Instagram, Yelp, Google Maps, Shazam, to name a few on my home screen—already reads the physical world we inhabit and writes to its digital representation of that world (not to mention Snapchat, whose filters are AR by any definition). Looking at your phone, gathering metadata about your surroundings from Instagram or Yelp, and then looking back up with a new perspective: That sequence is only one level of abstraction away from what Pokémon Go does.
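That read/write loop can be made concrete. Here is a minimal sketch (all names and data hypothetical, not any real app’s API) of what a location-aware app does under the hood: one user writes an annotation into the platform’s digital model of a place, and another user nearby reads it back.

```python
# Hypothetical sketch of spatial "augmented reality": apps read the
# physical world as coordinates and write annotations back into their
# digital representation of that world.
import math

world = {}  # the platform's digital model: (lat, lon) -> list of annotations

def write_annotation(lat, lon, text):
    """An app 'writes' to its representation of the physical world."""
    world.setdefault((lat, lon), []).append(text)

def read_nearby(lat, lon, radius_km=1.0):
    """An app 'reads' the world: gather metadata around the user's location."""
    found = []
    for (alat, alon), notes in world.items():
        # rough equirectangular distance; fine at city scale
        dx = (alon - lon) * 111.0 * math.cos(math.radians(lat))
        dy = (alat - lat) * 111.0
        if math.hypot(dx, dy) <= radius_km:
            found.extend(notes)
    return found

# One user tags a place; another, standing a block away, reads the tag.
write_annotation(40.7484, -73.9857, "great coffee downstairs")
print(read_nearby(40.7490, -73.9860))  # → ['great coffee downstairs']
```

Everything from a Yelp review to a geotagged Instagram post is a variation on these two functions; Pokémon Go just renders the "annotations" as monsters.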

I always imagined AR as the technology that would finally get us past the iPhone, enabling us to look straight ahead instead of staring at screens constantly. Eventually we might get the killer app for some Google Glass successor that gets us all wearing headsets, but until then we have to spend more time staring down at our phones in even more awkward ways. That bent posture—not shiny new hardware—is the visual manifestation of AR today. At least Pokémon Go is getting people out of their houses.


2. There’s another sense in which Pokémon Go is only a change in degree from the recent past, not a change in kind. As pointed out on a recent a16z podcast, Pokémon Go is an “appified game” instead of the more familiar “gamified app.” Everything from Foursquare to Snapchat to any social media platform that scores a user’s content by number of likes operates according to game logic and reinscribes that game on real life. Pokémon Go inverts this dynamic by making the fantasy the end instead of the means, and the reality the means instead of the end: Rather than looking for a bar to hang out at and incidentally becoming the mayor, we’re hunting for Vaporeon and incidentally hanging out in a park. Amazingly, the latter seems to motivate people more than the former.

Gamification has demonstrated its ability to simplify reality and compel behavior that would otherwise lack sufficient motivation. Pokémon Go suggests that games themselves might compel us even more than merely gamified things. Long before apps or gamification existed as ideas, society was full of more concrete but similarly playful (and powerful) game-like dynamics. Following this thread backward from the present, we can only conclude that it’s games all the way down.

Walled Gardens & Escape Routes

Slack and Snapchat are two of the platforms that best embody the current technological moment, the fastest recent gainers in Silicon Valley’s constant campaign to build apps that we put on our home screens, use constantly, and freely feed with our locations, identities, relationships, and precious attention. One of those products is for work and one is for play; both reflect values and aesthetics that, if not new, at least differ in clear ways from those of email, Facebook, and Twitter—the avatars of comparable moments in the recent past.

Recently I compared Twitter to a shrinking city—slowly bleeding users and struggling to produce revenue but a kind of home to many, infrastructure worth preserving, a commons. Now that Pokémon Go has mapped the digital universe onto meatspace more literally, I’ll follow suit and extend that same “city” metaphor to the rest of the internet.

I’m kidding about the Pokémon part (only not really), but the internet has nearly completed one major stage of its life, evolving from a mechanism for sharing webpages between computers into a series of variously porous platforms owned (or about to be owned) by massive companies that have divided up the available digital real estate and found, or failed to find, distinct revenue-generating schemes within each platform’s confines. The app is a manifestation of this maturing structure, each one a gateway to a walled garden and a point of contact with a single company’s business model—far from the messy chaos of the earlier web. So much urban space has been similarly carved up.


  Illegible space: the Bonaventure Hotel (source)

If Twitter is a shrinking city, then Slack and Snapchat are exploding fringe suburbs at the height of a housing bubble, laying miles of cul-de-sacs and water pipe in advance of the frantic growth that will soon fill in all the space. The problem with my spatial metaphor is that neither Slack nor Snapchat feels like a “city” in its structure, while Twitter and Facebook do by comparison. I never thought I’d say this, but Twitter and Instagram are legible (if decentralized): follower counts, likes, and retweets signal a loosely quantifiable importance, the linear feed is easy enough to follow, and everything is basically open by default (private accounts go against the grain of Twitter). Traditional social media has by now become a set of tools for attaining a global, if personally tailored, perspective on current events and culture.

Slack and Snapchat are quite different: streams of ephemeral and illegible content. Both intentionally restrict your perspective to the immediate here and now. We don’t navigate them so much as surf them. They’re less rationally organized, mapped cities than the postmodern spaces that fascinated Fredric Jameson and Reyner Banham: Bonaventure Hotels and freeway cloverleafs, with their own semantic systems—Deleuzian smooth space. Nobody knows their position within these universes, only the context their immediate environment affords. Facebook, by comparison, feels like a high-modernist panopticon where everyone sees and knows a bit too much.

Like cities, digital platforms have populations that ebb and flow. The history of urbanization is a story of slow, large-scale, irreversible migrations; it’s hard to relocate human settlements. The redistributions of the digital era happen more rapidly but are less absolute: If you have 16 waking hours of daily attention to give, you don’t need to shift it all from Facebook to Snapchat, and whatever you do shift can move instantly.

The forces that propel migrations from city to city, to suburb, and back to city have frequently been economic (if not political). Most apps and websites cost nothing to inhabit and yield little economic opportunity for their users. If large groups aren’t abandoning Twitter or Facebook for anything to do with money, what are they looking for?

To paraphrase Douglas Adams, people are the problem. As people, we introduce some fatal flaw to each technology we embrace, especially technologies that facilitate communication, and especially when they amplify some basic weakness in our nature. Almost always, the experience of using a technology can’t be regulated or moderated properly, some misuse of it becomes rampant, and that quality gradually or quickly drives its users to another platform that solves its particular problem. Then the cycle begins anew.

Slack is not the unbundling of another platform’s chat feature, then—it’s the reverse unbundling of email, an antivenom for email’s problems. The familiar version of unbundling is splitting off a feature from a product and building a more robust standalone product out of it. What I’m describing now is an equally powerful and prominent phenomenon in the evolution of technology.

Email, in work and in personal life, has strayed far from its origin as a joyous, playful technology that early adopters used to send one another jokes. It’s more essential than ever now, a supporting infrastructure for life in every sense, but it’s also something we feel the urge to hide from on vacation. We hate it. Email’s flaws are potent: Information lives forever; everyone has equal access to everyone else; spam marketers have optimized it as a tool for their nagging. Even the most powerful people in the world toil over email for an hour daily, while strategies like Inbox Zero have emerged to help us escape from under its burdensome weight.

Our uneasy dependence on email in professional and personal life created a massive opportunity for a tool that isolated its benefits and discarded its shortcomings. Slack embodies this opportunity. It offers freedom from the oppressive inbox, in which one owns everything that ends up there, and establishes a smooth space in which the most important information reaches its recipients indirectly but effectively. The streamlike work patterns enabled by Slack, which Venkat Rao calls Flow Laminar, “avoid the illusion of perfectibility of information flows implicit in notions like Inbox Zero altogether.”

Jenna Wortham, contemplating Snapchat in the NY Times, suggested that “maybe we didn’t hate talking—just the way older phone technologies forced us to talk.” Texting, she thinks, did for phone calls what Slack promises to do for email. She proceeds to praise Snapchat for its reverse unbundling of social media and even SMS: the escape from the coldness and flatness of text-based communication, the intimacy absent from Facebook and Twitter, the triumph of the stylistic over the literal. An essay by Ben Basche makes a similar point: “Perhaps the task of constantly manicuring a persistent online identity — of carefully considering what effect your digital exhaust will have on your ego — is beginning to weigh on people.” Traditional social media, it seems, has reached the point of maturity that email already attained: more rigid and less playful. We’re looking for escape routes and Snapchat is one.

If we’ve learned anything from recent technology, we can expect Slack and Snapchat to reveal their own serious flaws over time as users accumulate, behaviors solidify, and opportunists learn to exploit their structure. Right now most of the world is still trying to understand what they are. When the time comes—and hopefully we’ll recognize it early enough—we can break camp and go looking for our next temporary outpost.

The Human as Interface

Thoreau said in Walden that “we do not ride on the railroad; it rides upon us.” He was talking about the true cost of the ride—that while he embarked on a trip by foot and arrived at his destination that same day, you would have to first get a job and work for a day in order to earn enough money to make the same trip by train. “Instead of going to Fitchburg, you will be working here the greater part of the day. And so, if the railroad reached round the world, I think that I should keep ahead of you.”

Generalizing his assessment, Thoreau found that so much in the modernizing world bore a similar cost. The railroad and the industrialization it embodied were a powerful enough force to remake society in their image because of the demands placed upon us to build, maintain, and operate those systems, as well as our willingness to accept those demands in exchange for speed and efficiency.


People helping computers talk to people (source)

It makes sense that all of that heavy infrastructure weighs us down, that it’s such a chore, just as Thoreau’s image of a train running on tracks made of people makes vivid sense, or a household cluttered with stuff feels oppressive. But haven’t we escaped the era of stuff and ventured into the era of information, finally mastering the former? Software shouldn’t be able to “ride upon us” the way industrial machinery can. If anything, it should be loosening the physical world’s grip on us (and in many ways it is). Information storage is basically free and processing power increases exponentially, yet we don’t feel certain that we are any freer than we used to be, or any less burdened. Why does email now ride upon us as trains once did?

Commenting on artificial intelligence as it reaches a tipping point in its maturity, John Robb recently observed that “we can’t even design systems that work for human beings”—that is, we’re designing AI as a godlike force that works in mysterious ways, not a true agent of our own objectives, and ensuring that we will somehow bow to it, just like we did to the industrial behemoths that we built in a previous era.

Put another way: Every medium-sized company with a competent customer service operation automates a large chunk of that work. When you call an airline or a credit card company, you pass through a tree of often-frustrating multiple-choice menus before getting your issue resolved. You only get to speak to a live operator after exhausting the menus’ abilities. That process of escalation is the Human as Interface—a reversal of roles.
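That escalation process can be sketched in a few lines. The following is a toy model (all menu options and messages hypothetical, not any real airline’s system) of a phone-menu tree whose leaves either resolve an issue automatically or hand the caller off to a human once the menu’s abilities are exhausted.

```python
# Hypothetical sketch of an automated phone-menu tree: each node offers
# choices, resolves an issue automatically, or escalates to a live
# operator -- the Human as Interface -- when automation runs out.

class MenuNode:
    def __init__(self, prompt, choices=None, handler=None):
        self.prompt = prompt
        self.choices = choices or {}   # keypress -> MenuNode
        self.handler = handler         # automated resolution, if any

    def handle(self, keypresses):
        node = self
        for key in keypresses:
            if key not in node.choices:
                return "escalate: human operator"   # menu exhausted
            node = node.choices[key]
        if node.handler:
            return node.handler()                   # machine resolves it
        return "escalate: human operator"           # no automation here

# A toy airline menu: option 1 is automated, option 2 bottoms out at a human.
root = MenuNode("Main menu", {
    "1": MenuNode("Flight status", handler=lambda: "resolved: flight on time"),
    "2": MenuNode("Baggage claim"),   # no handler, so it escalates
})

print(root.handle(["1"]))   # → resolved: flight on time
print(root.handle(["2"]))   # → escalate: human operator
```

The point of the sketch is the shape of the tree: the humans sit at the leaves, reachable only after the machine has had first crack at every caller.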

The Human as Interface is the troubling but darkly funny outcome of our white-hot progress in the digital realm. An interface is traditionally a point of contact between people and computers (or between hardware and software, or two separate software systems) that eases their interaction and translates between two modes of communication.

Software is eating human work so fast that there’s less of a role for interfaces between humans and computers, since the latter can finish more and more of their work without humans stepping in partway through to guide them. At the same time, that software is doing more of the jobs that humans used to do and eliminating the need for those jobs. Finally, the various activities that computers perform are becoming so sophisticated that humans not only can’t understand them, we don’t even have a language for describing them. The gap between human and computer abilities is either closing or widening, depending on how highly you regard humans, and there’s a shortage of a different kind of interface or API: the kind that mediates between software and its human users in the transitional phase, before a computer can handle that step too.

Thus, machines need people to translate between themselves and their users—the Human as Interface. This is a form of turking, in the sense that it’s yet another role humans only fulfill until software learns how. This type of work is found at every ability level: Customer support reps who handle the overly complex issues that automated systems escalate. Convenience store employees who help customers get unstuck from the self-checkout machines that replaced all the other employees. Explainers who can communicate to a broader audience a concept like machine learning and why it matters. IT help desks.

It’s surely a sign of increasing economic polarization that a small percentage of specialized individuals build and run the advanced systems that turn everyone else into a user, in both their work and their free time. For this majority, at best their jobs await imminent automation; otherwise those jobs already function as an interface for machines (and everyone is a user of some kind in their free time). Whatever the reason for this condition, it’s hard to pretend that we don’t somehow work for computers, or that software doesn’t ride upon us as heavily as the railroads did.