New York City did the seemingly impossible this month, activating free WiFi and cellular service throughout its underground subway stations. In 2017 it didn’t seem like any filthy corner of everyday life still remained for the internet to invade and conquer, but that’s because we were so accustomed to not having service in the frantic pathways of the Union Square station as the morning-commute cannon fired us through them every day. Useless now is the metis by which each of us navigated our microwindows of access on subway rides, jacking in just long enough to refresh our email inboxes or fire off the message that we’re running ten minutes late. For residents of New York, airplanes are now the last bastions of disconnectivity, along with deep woods excursions and Faraday cages.
Activating WiFi service somewhere new is an obvious benefit for all involved. Even more obvious is that it’s inevitable. How did any urban space get this deep into the millennium without it? More than anything, though, it’s boring: I haven’t felt a personal shortage of connectedness in years and would jump on a plane just to hide from it, though I know many don’t share this attitude. Today, there’s no need to explain why New York subway stations need WiFi—it’s axiomatic that every inch of the world needs it.
The last mile before Link (source)
A more interesting project along these lines was LinkNYC, New York City’s effort to blanket the city with free public WiFi above ground and replace thousands of decaying pay phones with shiny kiosks for web browsing and device charging. The opportunities for increased surveillance are clear, but that’s OK because we should all feel like we have nothing to hide, except that those who should feel like they have something to hide tarnished the experiment by openly watching porn at the kiosks. In general, the demographic for whom the kiosks would be most useful, those less able to afford internet access in other domains, seemed to catch the most heat for actually using them, evoking Anatole France’s observation that “in its majestic equality, the law forbids rich and poor alike to sleep under bridges.” As for surveillance, if LinkNYC doesn’t monitor our movements, someone else will, so it’s hard to complain.
The obviousness and inevitability of the internet becoming more and more ubiquitous gets a fascinating treatment in Venkatesh Rao’s “Fortune at the Edge of Network” thread. He explores the concept of the “last mile”—the messy part of any network where efficiency dwindles as the end user actually makes contact with the network (for example, the UPS delivery guy parking the truck and running the package to your doorstep). The last mile is where a closed, streamlined system encounters the real world, and it’s the home of weird and archaic structures and behaviors that the network’s core would never tolerate.
As various infrastructural systems, but especially the internet, extend their reach, the last mile becomes the last hundred feet and then the last millimeter, Rao writes—the distance between the iPhone screen and the eye, or even less. As these systems colonize more space, they extend rationalized market forces into territory previously ruled by social norms. If this sounds sinister to you, he’s fairly optimistic, or at least fatalistic: “It isn’t between free individuals and an enslaving techno-capitalist cloud. You never were that free an inch from your face. You were merely the captive of non-economic forces.” Rao sees opportunity at this junction: the chance to inhabit and rule your own last mile, using technology to increase your own agency rather than resisting its inexorable creep or trying to hide from it. More so if the non-economic forces you traded away were particularly oppressive.
I haven’t yet embraced Rao’s advice: I entered the new year with the goal of reclaiming some scraps of my immediate space from digital/rational/market logic, purging half of the apps from my phone and trying not to hold it in my hand at all times. For example, I deleted Shazam. If I heard music in public, I thought, I’d try asking someone what they’d put on instead of antisocially querying the cloud. The opportunity to test this approach came last week at a coffee shop: I heard some jazz I liked and asked the barista what she was playing. “A Spotify jazz playlist,” came the monotone reply, accompanied by a bored look. Thanks for nothing. So it’s possible that the space recovered by pushing back the networks isn’t always so great. Or maybe we’ve already hollowed it out beyond repair.
“Dark Euphoria is what the twenty-teens feels like. Things are just falling apart, you can’t believe the possibilities, it’s like anything is possible, but you never realized you’re going to have to dread it so much.”
The short twentieth century began in 1914 but the spiritual twentieth century started six years earlier when Filippo Marinetti lost control of his Fiat and plunged it into a muddy ditch outside of Milan, forging the Futurist manifesto’s introductory myth and launching the movement of the same name. Futurism, a pivotal moment in design modernism, celebrated the raw power of the car that Marinetti crashed and the many other fruits of the industrialism then transforming Italy, its adherents eager to discard the past and all its limitations. As Reyner Banham described Futurism, “the poet, painter, intellectual, was no longer a passive recipient of technological experience, but could create it for himself,” and Boccioni later proclaimed their cohort “the primitives of a sensibility that has been completely overhauled.” A new type of person was being born amid the roaring engines and factories.
Futurism, for all its shortcomings, was an admirable moment of raw exuberance about the potential of technology to make the world more interesting. The futurists celebrated machinery for its romantic possibilities and “universal dynamism” and rightly saw that it could be beautiful, not merely functional. You can imagine the ways this attitude (and the technology it celebrated) can and did eventually go wrong, but the Futurists dared to demand more from machines than economic production or convenience and in that sense their movement was a human triumph and an example to future generations.
Boccioni, The City Rises (link)
A century later, the affluent, well-educated, city-dwelling beneficiaries of the information economy—now more easily grouped by the recent election’s fault lines—enjoy a more cautious, productive optimism about the technological progress of the internet age. Or did until recently. That broad optimism, so solid even a year ago, is one of the reliable constants that 2016 has upended. The list of reasons for growing wariness about today’s digital landscape includes almost nothing new: Twitter has become a snakepit of harassment and unchecked hate speech. Facebook blurs the line between biased truth and outright falsehood. Almost anything connected to the internet can and will be hacked. The platforms and apps we use the most have evolved into the most effective surveillance infrastructure imaginable, perfectly gift-wrapped instruments for a totalitarian regime. Most of our jobs will be automated out of existence just as America dismantles its remaining safety nets. And even the exponentially growing internet has an insatiable appetite for fossil fuels that could make everything else irrelevant by submerging us all underwater.
Somehow, the past year—the election in particular—revealed how each of those risks could potentially blow up. And some of them did blow up. The internet doesn’t feel as fun today as it did when we first applied an Instagram filter or read a @Horse_ebooks tweet. To those of us too old to get it, Snapchat looks like fiddling while Rome burns rather than another platform for unbridled 2011-style internet frivolity. The technological progress that’s still capturing popular enthusiasm is largely either the residue of mid-century speculation (space travel, self-driving cars) or the consumer-facing tips of foreboding icebergs like AI and automation (Alexa, Amazon Prime).
The digital foundation onto which civilization has migrated, we’re finally acknowledging, is more fragile than we thought, though it continues to bring us countless real benefits. Even Jane Jacobs predicted an imminent dark age and, surveying our recent cultural shift and its causes, it’s getting easier to see how formerly unthinkable dark ages set in. Again, the election solidified the narratives and camps: Some were already in revolt against the so-called cutting edge of technology and its eagerness to automate everything. Today more people are, for many of the above reasons. We thought we’d arrived at the end of history but now worry we’re just at another inflection point of its eternal cycle. In a year when something called a phishing attack is one of the major stories from a presidential election, it’s at least certain that things are getting weirder.
So we’ve trapped ourselves in a technological stack that unsettles us, but just as bad, we’ve built one that’s boring and prosaic and even cowardly, one that becomes most exciting when it fails. I came across a perfect statement of where we’ve ended up, by Geoff Manaugh, in the comments of a Kazys Varnelis blog post:
“(T)he people today most concerned with building flexible, just-in-time, climate-controlled interiors in which You Can Do Anything™ are less often swinging nightclub owners and far more likely Big Box retailers, with their clip-on ornaments and infinitely rearrangeable modular shelves and their themes of the week. There are already Christmas decorations up at Ikea. Similarly, the people building instant cities today aren’t the Balkan ravers of the 1990s (at least no more); it’s Camp Bondsteel and the logistics support teams of Bechtel. Or, for that matter, it’s the “megaslums.” Either way, it’s not a leisure class of hi-fi-owning Jimi Hendrix aficionados.”
If anything, the Silicon Valley version of technology and progress is too bland and too conservative, because it needs to scale, optimize, add value—buzzwords as dull as the results they produce. There’s a tech industry aphorism that companies ship their org chart (a version of Conway’s Law), a more concrete way of saying people recreate the milieu in which they live and work—which in this domain involves a lot of Excel spreadsheets and PowerPoints. We’re thus experts at extracting convenience and consumer value from technology but worse than we should be at using it to explore unknown territory, have an adventure, understand ourselves, or even throw a better party.
David Graeber famously complained that his generation was promised flying cars, as well as force fields, teleportation pads, Mars colonies, and every other exciting sci-fi trapping, but instead got bureaucratic tedium and screens to stare at. A popular counterargument is that iPhones and the internet are, in fact, more amazing than flying cars, but that rebuttal sidesteps Graeber’s broader point that poetic technologies, “the use of rational and technical means to bring wild fantasies to reality” (the Futurists’ bread and butter) are becoming increasingly difficult to pursue. Not because we don’t know how, but due to a failure of culture and will. The Steve Jobs dictum that customers don’t know what they want until you show them reflects a belief that individuals don’t achieve their full potential until they become customers. One of Apple’s most masterful accomplishments was concealing that bait-and-switch.
Our society continues to lose the thread: Not only are we not building flying cars, we’re instead building hardware and software that increase our fragility, anxiety, and dissolution, even if they do streamline shopping and get our laundry picked up faster. We could use a new Futurism, a recalibration of the ultimate purpose of all this work we’re doing, something better than convenience and efficiency. Give the Balkan ravers a shot. We’re more technically capable than ever and can build whatever we want, supposedly, so if we’re going to keep trying let’s build something that’s beautiful or weird or something that increases our collective freedom.
One example of such an alternate reality, already 50 years old, is New Babylon, the Situationist utopian city/megastructure. It’s the kind of model we should keep creating to remind ourselves of the futures we could be pursuing but aren’t. Acts of imagination like this, fantastical as they may be, represent a step toward realizing the poetic technology we’re currently missing, as McKenzie Wark writes:
“Owning property affords someone a house in which to be at home, at the price of being homeless in the world. Dispense with property, dispense with separation, and the feeling of being merely thrown into the world goes with them. Our species-being can give vent to its wanderlust, at home in a house-like world. Constant thought modernity was already accelerating a return to a nomadic existence. New Babylon is nomadic life fully realized. It is an architecture of duration, of thresholds, of collaborative place-making, writ large. Freed from the fixity and uniformity of property, space could again have its qualities. A short trip in New Babylon should offer more variety than the most interminable journeys through the concentrated city of spectacular society. “Life is an endless journey across a world that is changing so rapidly that it seems forever another.” The New Babylonians could wander over the whole surface of a world that was in flux. “New Babylon ends nowhere (the earth is round).”
Flying cars are probably a bad idea, and megacities cantilevered above the earth probably are too, but there are a thousand other desirable and already-possible technological outcomes that we’re failing to imagine or seek, beyond the pale of what we already have.
Cars belong near the top of a long list of reasons why America is the way it is, but one American quality I’ve never heard attributed to cars is our increasing casualness of dress, which seems to have mirrored our impulse to drive during the past century. There’s no obvious connection between the two phenomena, yet whenever I leave New York for a more American part of America, I remember that the city where we drive the least is also our least casual, and I wonder if cars are somehow the cause.
Sweatpants, flip flops, sneakers, t-shirts, and baseball hats have pervaded nearly every realm of life except for weddings and funerals, slowly conquering former bastions of formality like the workplace. Technology has been a factor: Watches are mere jewelry now thanks to the digital clocks that accompany us in our phones, but more broadly than that specific side effect, all technology, in a sense, clothes us, augmenting our natural faculties and our bodies. Clothing itself is a technology—Marshall McLuhan called it an extension of our skin that stores and channels energy, an increasingly tactile shell that (especially in America) overthrew the more visually-oriented attire of aristocracy.
For McLuhan, clothing and housing are two different layers of our technological exoskeletons. The city is yet another layer. If the human body moves through the world encased within a protective stack including these components, surely the car has a place in that stack as well, somewhere between the clothing and city layers. Furthermore, we should observe significant cultural differences where any of the layers is minimized or absent altogether, with adjacent layers intensifying to compensate. This is my theory of clothing’s amplified role in New York, where the car layer is anemic compared to elsewhere. When I visited my hometown of Indianapolis over Thanksgiving I didn’t bring a coat with me, knowing that I’d spend all my time in my house or in a car, save for short walks across parking lots. “My car is my coat” was a dumb joke I made. I found myself wondering why anyone who has a car would bring their coat on most errands, or even own an expensive one for daily life.
Archigram’s “Walking City” (source)
The more layers that encase us, the less is demanded of our bodies themselves. To follow this dynamic to its logical conclusion is to end up with inert humans hibernating in the fluid-filled pods of The Matrix, naked and fully immersed in an advanced technological stack, wrapped in the multiple layers of protection it offers. Wearing sweatpants and spending whole days surfing the internet is not entirely different from that extreme scenario, while traditional urban fabric seems anachronistic by comparison: Walking outdoors on the streets of dense cities, we’re vulnerable and suboptimized, clad in boots and coats rather than temperature-controlled pods of the automotive or Matrix variety.
If cities are an outer protective layer in this ecosystem, on the other hand, maybe we’re not so vulnerable after all—see Matt Jones’ 2009 blog post “The City is a Battlesuit for Surviving the Future,” arguing that the outer urban shell is the most important layer of all. The post’s title refers to Jack Hawksmoor, the protagonist of Warren Ellis’s comic book The Authority, who wraps himself in the city of Tokyo to fight a sentient, time-traveling version of 73rd-century Cleveland. Jones observes that we increasingly wrap ourselves in the city as a defense against all the forces of nature that have assailed humanity throughout history, and in the networked present and future, this can become more true than ever. Jones writes, “The city of the future increases its role as an actor in our lives.” A stronger outer shell, the city, might then enable a more humane life within, while a compromised city layer shifts its functions onto the house and car layers, dividing its inhabitants into more atomized enclaves.
The networked city Jones describes is a different animal than its forebears, and is somehow more tactile than even McLuhan anticipated. We now inhabit the meatspace city, whose previous functions of information exchange have increasingly migrated to digital channels (which are, of course, embedded in its physical fabric). The features of traditional urbanism most likely to intensify under this new regime, to the extent that it continues spreading, are the most difficult to encode: eating, drinking, shopping for specialized merchandise, and the more precious types of human interaction. The meatspace city is a construction of affluence and is far from ubiquitous—it might never be—but even it presents a more comfortable and convivial interior than the car in the suburban wilderness. We’ve lost some of the optimism about networked urbanism that we enjoyed in 2009 when Jones wrote his battlesuit piece, but many of the reasons for that loss are pervasive and only most visible in cities, which are still better armor than most of the substitutes we’ve tried.
As a civilization, we entertain plenty of myths about the way we never were. One of the most attractive is that we were born into a world unmediated by technology, a pure state of nature, before we gradually enhanced or corrupted that world with our inventions over the millennia that followed. From the simplest tools onward, of course, we’ve never been without technology: A lightbulb, a lit match, and a painting on the wall of a cave are all “technology,” extending our bodies, changing our brains, and remaking the environments we inhabit.
If we never had a clear, pure image of the world around us, that image was at least relatively stable even recently, if still distorted, shaped by all of the technologies that Marshall McLuhan called media: books, newspapers, radio, television, music, religious ceremony, language itself. In our hypothetical state of nature we may have apprehended what mattered directly—an approaching storm or a source of food—but that perception has since been wrapped in increasingly complex layers of language, imagery, and context—narrative—most of which was created by other people and transmitted to us, usually with the goal of influencing us somehow. We are narrative-loving animals, and we’re without narratives less often than we’re without clothing.
Like clothing, we’re also much more comfortable with narratives than without them (at least outside the house). We are terrible at apprehending reality directly so we constantly grasp for some simplifying context anywhere the possibility arises. Narratives are thus immensely powerful: Everyone needs them, and the people who create them program minds, to various degrees. Narratives are also extremely dangerous: Every destructive mass movement in history—Nazism, Stalinism, Maoism—has fueled itself with them.
Fortunately for all of us, narratives are difficult to wield individually. First, there’s a huge capital requirement: Every narrative requires a distribution channel. Do you own a media conglomerate or have a million Twitter followers? Second, and more importantly, our environments are full of information, much of which doesn’t fit a given narrative and some of which may even contradict it outright. If you read that all dogs are brown in the morning paper and see a white dog as soon as you leave the house, that’s a problem for that narrative. Good thing your environment doesn’t respond to your morning news, or even worse, collude with it.
But what if it did? What if the newspaper and the street you stepped out of your front door into were facets of the same underlying system, a system that generated its output in both domains using the same logic and rules? In this scenario, the propaganda you just finished reading in the paper populates your surroundings as soon as you finish. Now all dogs are in fact brown.
This depraved Matrix is our emergent reality and Facebook is its foundation. Facebook’s long-existent fake news problem has risen to urgent prominence since the presidential election for its apparent role in stoking pro-Trump sentiment. If Facebook were just a “news website,” a digital version of some mid-century media form, there would be no problem: It would have lost credibility as a legitimate source of truth and repelled the users who recognized the unreliable nature of its content. But Facebook is an entire environment, a platform where many people live surprisingly huge portions of their lives, a public space as well as a newspaper, radio station, message board, and plenty else, condensing the multiplicative power of all of those media into its firehose blast (and in this broad sense Facebook is correct to argue that it’s not a media company). Ben Thompson characterized Facebook’s power as follows: “The use of mobile devices occupies all of the available time around intent. It is only when we’re doing something specific that we aren’t using our phones, and the empty spaces of our lives are far greater than anyone imagined.” By bundling together as many things as possible that the internet can offer, Facebook has ensured that we’ll come for something we care about and stay for the incidental bullshit, for hours on end.
A contemporary human habitat (source)
Again, Facebook is an environment in every sense but the physical, so its relationship to narrative truth and falsehood is more complex and powerful than anything fitting within our narrower definition of “media.” We immerse ourselves in Facebook to a fault, check in with it throughout the day, and even fall asleep with it. To compare it to an agora or neighborhood street is to underrate it, because it’s so much more of a public space than those physical places usually are today. Even worse, at least for the goal of curtailing fake news, Facebook responds to us, giving us tools to amplify the content we find emotionally resonant. Because we are narrative animals, this tends to be content that aligns with the existing narratives our brains are primed to accept. We like and share first and verify later, if ever. Rich narratives get richer and poor narratives get poorer, withering away unnoticed. That narrative snowball effect parallels the financial transformations of recent decades, also enabled by unrestrained digital technology, and the 2016 election was its trillion-dollar flash crash.
The fundamental problem with Facebook, then, is that narratives of all kinds get traction more quickly in digital terrain. The physical world is simply hard for individuals to control, especially where information and culture are concerned, but the digital is optimized for focusing and channeling and manipulating information—by definition. We’re not used to the power that can be exercised over narratives in these new spaces. The merging of media content and environment is another phase of what McLuhan called the retribalization of mankind, and we’re discovering that the negative connotations of that euphemism are as true as the positive.
Many have suggested solutions to Facebook’s fake news problem in the past week: internal changes to company policy or externally-imposed rules. It’s possible these solutions would enmesh us in additional layers of algorithmic opacity, with new imperfect algorithms deployed to fix the flaws wrought by today’s imperfect algorithms. Andy Warhol defined art as whatever you can get away with, and any platform enabling its users to create and share their own content will be rife with experiments in what they can get away with. Fake news is likely a feature of these platforms rather than a bug. McLuhan’s most famous dictum was that the medium is the message; perhaps the blurring of fact and fiction is inherent to Facebook, and we can best negotiate that by actively reducing the platform’s grip on our personal and public lives.
The PlayStation game No Man’s Sky promised a revolution for its medium before its release two months ago, getting attention from gamers and non-gamers alike for its “procedurally-generated universe” in which a single 64-bit seed number could randomly create 18 quintillion distinct planets via algorithmic logic, each replete with its own weird flora and fauna. The space explorers playing the game would effectively create each planet upon discovery: Arriving somewhere undiscovered would spur the procedural generation of a random, new, and hopefully fascinating world. It was going to be a major step toward humans getting out of computers’ way in yet another domain, after giving the machines sufficient instructions to make 18 quintillion of something that other people would actually want.
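The mechanism behind that promise is easy to sketch. Here’s a minimal, hypothetical version (this is not Hello Games’ actual algorithm; the master seed, attribute names, and value ranges are invented for illustration): hash the galaxy’s single seed together with a planet’s coordinates to seed a local random stream, so that every planet is derived on demand the moment a player arrives, yet comes out identical for anyone who ever visits those coordinates.

```python
import hashlib
import random

GALAXY_SEED = 0xDEADBEEF  # hypothetical 64-bit master seed for the whole universe

def planet_at(x, y, z):
    """Deterministically derive a planet from the master seed and its
    coordinates, so a world only gets computed when someone looks at it."""
    # Mix the coordinates into the master seed with a hash so that
    # neighboring planets still get unrelated random streams.
    digest = hashlib.sha256(f"{GALAXY_SEED}:{x},{y},{z}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return {
        "radius_km": rng.randint(1_000, 12_000),
        "atmosphere": rng.choice(["none", "thin", "toxic", "lush"]),
        "fauna_species": rng.randint(0, 40),
    }

# The same coordinates always yield the same planet, for every player:
assert planet_at(3, 1, 4) == planet_at(3, 1, 4)
```

Nothing has to be stored: the universe is a function, and each planet exists only as long as someone is computing it—which is also why, as noted below, the only parts of the universe that exist are the ones you see.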
Not surprisingly, No Man’s Sky was boring. Its beautiful graphics couldn’t overcome the fact that, on one planet randomly selected from the infinite possibilities in the procedurally generated universe, nothing was happening. The variety among these planets was shallow and nominal, the 99.999% virgin territory untouched by any hand that could form it into something interesting. As one reviewer wrote, “There are no grand civilizations sequestered somewhere in this galaxy with Turing Test-passable aliens waiting to wow us with riveting conversation.” The procedural generation process, additionally, means the only parts of the universe that exist are the ones you see—a solipsistic vision of reality that is, again, boring.
Maybe some of the No Man’s Sky planets did end up with compelling advanced civilizations and weird creatures, but they’re too hard to find. Every baseball season has roughly 2,430 games and if you watch a random handful of these you’ll find them boring as well (unless you go to the games and sit in the sun and eat hot dogs and do everything but watch). Each baseball game is somewhere on a bell curve of expected outcomes so a single randomly chosen game probably won’t yield a no-hitter or two grand slams by one player in the same inning.
The randomness of baseball is more interesting than that of No Man’s Sky though, because it’s wrapped in the context of an existing culture and infused with meaning from that culture. There are also other ways to make baseball (slightly) more interesting for yourself: Become a fan of one team whose 162 games will excite you more than the other 2,268. The 70 to 90 games that your team wins will especially excite you, and the ones with lots of home runs even more so. Or you could adopt a different strategy and only watch the postseason, in which every game matters and is inherently nonrandom.
What a procedurally generated planet looks like
What a random universe really needs is editing, in other words. 18 quintillion of something is great for advertising copy but terrible for experience. Some work by other people is still required to make a massive procedurally-generated universe interesting, to put some meaning into it—a map or search mechanism to guide players toward the best parts; a cultural context that imposes meaning on the existing randomness; or a few planets created by human hands. Most games that painstakingly create one world are better than this game with its quintillions of mass-produced worlds.
The 55 snapshots of imaginary cities in Italo Calvino’s 1972 novel Invisible Cities are the opposite of the ennui in No Man’s Sky. Each fantastical city that Calvino’s Marco Polo recalls from his world travels, while also invented (and generated by Calvino’s “rules”), is rich with the meaning and magic that No Man’s Sky promised but couldn’t produce. Calvino’s own work—the brilliant imagination that enabled him to craft each city’s description as well as the editing that removed the meaningless noise in between those vignettes—is why his 55 worlds will outlive the 18 quintillion in No Man’s Sky. 55, it turns out, is plenty.
Here’s an assortment of the best things I’ve read on the internet lately, none of which were published in the past week:
Walking to the mall: Ten years ago Tom Chiarella published this essay in Esquire about, well, walking to the mall—in Indianapolis, where walking to the mall makes even less sense than elsewhere. He imbues the journey with an epic quality, stretching it out in time, without forgetting the absurdity of what he’s doing or the nondescriptness of shoulders and embankments meant to be seen at 40+ miles per hour, if at all. In short: You’re not supposed to walk to the mall. A perfect description of the suburban carscape and a nonfiction companion to Ballard’s Concrete Island.
Wardriving: Some family in rural Kansas is being terrorized by strangers because their farm occupies the default geographic coordinates for IP addresses with unknown locations (more of what I described as digital NIMBYism). Companies like MaxMind apparently compile computers’ locations in online databases for sale to advertisers. One of the techniques for collecting that data caught my attention: wardriving, or sending cars driving around to physically collect IP addresses from open WiFi networks. I keep picturing armadas of vehicles roaming small towns in middle America like the darknet version of Google’s Street View cars. In case you were wondering, the term wardriving is a reference to the “wardialing” done by Matthew Broderick’s character in the movie WarGames.
The Nostalgic Comfort of Normcore Dining: In the search-don’t-sort augmented reality that Google/Yelp/Foursquare ushered in, being ordinary is the only way to hide. I didn’t understand what normcore was until I read this.
Concrete island (source)
Our Brand Could be Your Crisis: One of the best pieces I’ve come across in a while. I still haven’t seen the Zac Efron movie, We Are Your Friends, that Ayesha Siddiqi reviews here (I’m going to!) but she accomplishes the impossible, writing a thoughtful—brilliant, really—essay about that dreaded topic, the millennial. Required reading for anyone wanting a better grip on the current zeitgeist, the one you and I are too old to understand, just like Snapchat. Siddiqi also sees in the film the cultural evidence of our slow, ongoing economic collapse, which manifests itself in such subtle ways (what Bret Easton Ellis calls post-empire). “We can invent an app, start a blog, sell things online” could become a mantra for all of us. I’ve been looking for a way to build a longer post off of ideas embedded in this essay, but until then I’m stashing it here. Read it.
Walmart: Last month Bloomberg ran this darker companion to the above essay. This is a truly dystopian look at how much crime happens on Walmart’s properties and the problems that crime creates for the local police forces that have to deal with it all, not to mention the cities from which the megastores have carved out a big privatized chunk. Corporate commercial space is not public space. Enjoy your weekend!
It’s increasingly obvious that Twitter as we know it will stop existing before long. Maybe Facebook will buy and dismantle it; maybe it will successfully turn itself into the profitable ad-friendly platform that all of its users dread (it won’t); or maybe it will just disappear, bleeding away its remaining users as it’s already been doing until there’s nothing left but bots and clueless self-promoters and hateful egg avatars with ten followers each.
Twitter has already embraced the algorithmic feed, which is as shitty as expected, and it will further relax the 140-character tweet limit next week. Having shed its two definitive features, Twitter will become a worse Facebook timeline, recognizable only by its inability to curtail trolls and harassment.
The traditional shrinking city (source)
I wrote in February that Twitter was a shrinking city but now it’s a city in full collapse. The parallels abound: a growing presence of unchecked dysfunction; an exodus of permanent citizens along with their economic contributions; the creeping presence of opportunists who hope to buy up its valuable parts and trash the rest; the sense that it was a better place back in the day.
One way or another, you (if you still use Twitter) and I will probably have to leave Twitter eventually. This is a true tragedy—many of us only talk to one another on Twitter and could never have formed certain communities without it. Like every collapsing community, Twitter is sure to further debase itself before finally forcing us all out, ensuring a messy exodus.
We should all keep in touch. Let’s decide now where we’re meeting up after Twitter dies. I suggest we meet in Zuccotti Park. If we’re lucky @dril will show up.
We should meet in Zuccotti Park because the internet isn’t the free outlet or the escape from physical constraints that it once was. Occupy Wall Street celebrates its fifth anniversary this week, and five years is a long time. In 2011, Twitter was cool—cool enough that it could function as a support system for a movement like Occupy. Now, Twitter is dying because it can’t survive in an ecosystem that requires it to grow profitably, and the internet is no longer a mainstream outlet for overprogrammed, corporate urban space, but more and more a mirror of that space, which forces out the weird and the unmonetized.
Now, more than five years ago, a place like the Zuccotti Park of Occupy Wall Street feels like a haven from the internet’s panopticon, maybe still a place to make a noise, but not a noise that the internet would reliably amplify. If Twitter continues its decline, there will be few digital spaces left that do what it did in its prime, but maybe physical space can again.