The World You Save Won’t Be Your Own

**WARNING: THIS POST SPOILS THE OUTER WILDS QUITE BADLY. DON’T READ THIS IF YOU INTEND TO PLAY THE OUTER WILDS**

The Outer Wilds, a relatively recent release developed by indie studio Mobius Digital and published by Annapurna Interactive, earned enough press attention and critical acclaim that I decided to spend some of my decidedly limited gaming budget (both in terms of time and money) on it. During the course of playing The Outer Wilds, there were times I found myself regretting this decision, as it could be a deeply frustrating game. Ultimately, however, I found The Outer Wilds one of the most rewarding games I’ve ever played. Reaching the end (which I will utterly spoil here, last warning) felt like the well-earned conclusion of a carefully crafted story.

That story begins with a simple, well-crafted hook. You are an astronaut from a cartoonish alien species living in a solar system of tiny, quirky planets. Your mission is to use a translator device to discover the secrets of an extinct alien race that inhabited your solar system long ago. Twenty minutes into your mission, the sun explodes, destroying everything. You, however, wake up twenty minutes in the past, starting all over again. The goal seems clear enough: solve the mystery of the time loop, stop the sun from exploding, save the world.

That simplicity is a bit of an illusion. Along the way is a fair amount of often frustrating puzzle solving. A big driver of this frustration was the game’s stubborn refusal to engage in the usual “leveling up” mechanics of most contemporary games. You never get better, more powerful equipment in The Outer Wilds; you never learn any new skills (save one: a fellow astronaut, the only other one aware of the time loop, teaches you how to “meditate until the next cycle,” effectively letting the player skip ahead to the next loop without actively committing in-game suicide); you never find any crafting recipes to create cool new stuff. You just solve puzzles, often via fairly difficult platformer-esque jumping, dodging and maneuvering.

The decision not to allow the player to auto-magically become more powerful over time was a clear rejection, on the part of The Outer Wilds, of the ideology of contemporary games. I respect that decision, even though it almost made me quit playing several times. The fantasy of endlessly acquiring power and gear and levels is a decidedly late-capitalist one, encouraging the player to value a logic of accumulation and domination. That said, the pleasures of this fantasy become all the more apparent when they are taken away. As I played The Outer Wilds and struggled to complete finicky platform puzzles with my decidedly middle-aged hand-eye coordination, I often found myself thinking: “I struggle all day to be a good teacher, a good husband, a good father, a good adult, and now I’m spending my down time struggling to be good at a video game? Bring on the mechanics where I line up my cursor on a lizard man and click repeatedly to become an expert swordsman!”

But then, it’s exactly this rejection of linear progress that The Outer Wilds enacts at every turn, and the way in which this rejection connects deeply to the thoughtful underlying plot makes all that frustration ultimately worth it. You set out at the beginning of the game trying to solve the mystery and save the world. Of course you do, that’s what one does in a video game. The game helpfully provides clues that seem to lead in just this direction. Here you find evidence that the ancient aliens were experimenting with time travel, and here you discover that they built a sun-exploding device to try to power their search for something called “the Eye of the Universe,” a mysterious signal older than the universe itself, which they apparently revere as a religious mystery.

Ultimately, though, these clues are all red herrings, at least in the sense that they do not empower you to save the world. The sun-exploding device never actually worked. Instead, the alien scheme to hunt for the Eye of the Universe sat dormant for millennia, until now, when the universe’s time is at hand and the stars are going out. The sun is exploding of old age, and its explosion has triggered the Eye-search time loop by cosmic mistake. The endgame has you discovering the means to travel to the Eye, where you experience a moment of stillness in a game that has otherwise felt frantic (from the get-go the game offers you the opportunity to sit around a campfire and just roast marshmallows, but during actual gameplay it felt absurd to take that opportunity; the world is going to end in twenty minutes, who has time? At the Eye, you get the opportunity again, and now, why not?) and then witness the death of your universe and the birth of a new one. The last moment has your alien astronaut floating alone in space, dying, watching some new thing explode into being.

It’s this subversion of the “save the world” trope that, for me, felt so satisfying and thought provoking. The notion of “saving” the world, setting things back just the way they were, is ultimately a conservative one. Moreover, it’s an impossible goal, at least for us mammals. Some sort of sentient, immortal bacterium might rightfully imagine the stasis of a saved world, but we can only ever accept that our world will end, and we will launch the next one on its way as best we can.

Stop trying to save this world, nurture the next one, and accept it won’t be ours. This seems a fitting message for 2019, and I was glad The Outer Wilds gave me a moment to reflect on it.

Warren, Sanders, and American Left Politics in the Age of “Authenticity”

In the wake of Elizabeth Warren’s release of a plan for a staged, gradual implementation of Medicare-for-All, I’ve been seeing increased concern about her candidacy in some corners of my Twitter feed and wanted to weigh in.

Actually, I mostly didn’t want to weigh in, given my sense that our current moment is sectarian enough that I may offend people whose opinion I respect by voicing my ideas in public, but I’ve decided to write this anyway.

One thing I notice is that when Warren backs away from or limits her commitment to the most aggressive version of left-leaning policy (as she now has with Medicare-for-All), critics seem to read her deviation through the lens of a particular narrative about Barack Obama. In this narrative, the reason the Obama administration was unsuccessful in achieving a progressive policy agenda is that Obama was insincere about his politics in 2008. Obama, in this narrative, may have run as a leftist but was really a centrist at heart. He failed to achieve left goals in office because he never really wanted to achieve left goals. He was really a Trojan horse for the status quo. When Warren seems less than fully committed to Medicare-for-All or the Green New Deal, these critics seem to suggest that she too is trying to fool leftist voters into supporting a centrist agenda of support for the status quo.

For me, this narrative about the Obama administration is not very convincing. For one thing, it seems obvious to me that, while the ideal set of policies Obama wanted to pursue may indeed have been a bit to the right of the policies you or I might prefer, the actual outcomes of his administration were set not by those ideals but by the massive GOP resistance he met after the Dems lost control of the House in 2010.

More importantly, my memory of the 2008 election is that what excited us about Barack Obama was that he was running not as a leftist but as an authentic outsider: someone untainted by the Democratic Party establishment and, in particular, by association with the disastrous war in Iraq. Someone who was more genuine and less scripted than traditional politicians. Someone who could deliver “change” (to cite half his slogan) because he was not beholden to the existing establishment.

Ultimately, this “outsider” frame seems to me more harmful to the Obama administration’s ability to achieve progressive goals than any centrist ideology. Without institutional expertise to move legislation through the relatively friendly Congress of 2009-2010, the administration was left with little demonstrable “change” to excite supporters going into the 2010 election. The Tea Party wave swept in, and now here we are.

For me, the rhetoric of outsider authenticity seems to have been very much taken up by the Sanders campaign, and that’s a big reason why I prefer Warren. I think that, in a world so heavily mediated as our own, it’s understandable that we yearn for something that feels genuine and sincere. As our media environment has become more context-collapsed, we’ve become ever more aware of the shifting performances of politicians, celebrities and other public figures, which only seems to heighten that desire for the “authentic.” Ultimately, though, I’m not at all sure that that yearning can ever be fulfilled.

Furthermore, sincerity doesn’t seem to be a particularly effective method of persuasion outside of rather narrow audiences. Expertise and savvy alliance-building, with all the slippery code-switching that might entail, seem more promising, if devilishly hard to pull off in a social-media-saturated world.

The Read-Only Hangover

If you’ve encountered me at all (online or off) in the last 5 years or so, you’ll probably have figured out that I’m a little hung up on the failed utopian promises of what we used to call “read-write” culture. Giving everyone access to the means of information production was going to set everyone free (we confidently predicted in 2004). Now it’s 2019, Trump is president, and Nazis are swarming everywhere. What gives?

One of the most informative recent scholarly works investigating these broken promises of online culture is Whitney Phillips’ This Is Why We Can’t Have Nice Things, which presents a fascinating critical ethnography of the online culture of “trolling.” The troll, Phillips argues, may be presented as the reason why we can’t have the “nice thing” of a truly inclusive and democratic online culture, but the truth is more complicated, and implicates a much wider swath of mainstream media culture. Yes, trolls are abusers, but their abuse is formed, motivated, and structured by the larger sensationalist media culture we all exist in.

In particular, Phillips examines how trolls use over-the-top sensationalist hoaxes as an exploit to capture the attention of the larger media apparatus. In one example, they spread false accounts claiming that a (non-existent) drug called “jenkem,” made from fermented human feces, was becoming popular in American high schools. In response, local news outlets across the country published sincere warnings instructing parents to watch their children for signs they had been huffing human waste. In another, trolls submitted to Oprah Winfrey a hoax “confession” supposedly authored by a member of an online pedophile ring. The phony confession was riddled with references to online memes, which Winfrey earnestly read to her audience. As Phillips recounts, Winfrey told her audience that she had been contacted by “a member of a known pedophile network” who claimed that “his group has over 9000 penises and they’re all … raping … children” (a reference to the Dragon Ball Z-derived “Over 9000” meme). Oprah’s credulous recitation of memetic catchphrases was a source of great amusement for the trolls in their den on 4chan.

This second example is, for me, particularly telling. Why go to such lengths simply to trick a well-known talk show host into reciting an obscure in-joke on the air? In part, Phillips suggests that the answer lies in the trolls’ desire to take control of the larger media apparatus. This is viral media, not in the sense that it spreads from one exposed victim to the next, but in the sense that a small fragment of information captures and re-uses the cellular machinery of a much larger organism for its own goals.

The desire of trolls to use this viral technique to bend large media outlets to their whims reminds me of Neil Postman’s description of the alienating effects of mass media. For Postman, the mass media of the television age collapsed distance and thus swamped viewers with information about far-away problems they had no meaningful agency to solve. He writes:

Prior to the age of telegraphy, the information-action ratio was sufficiently close so that most people had a sense of being able to control some of the contingencies in their lives. What people knew about had action-value. In the information world created by telegraphy, this sense of potency was lost, precisely because the whole world became the context for news. Everything became everyone’s business. For the first time, we were sent information which answered no question we had asked, and which, in any case, did not permit the right of reply.

Read through the lens of Postman, the troll may appear, then, as a culture-jammer, seizing the “right of reply” from the alienation of one-way media: the kind of read-write hero we all yearned for in 2004. But of course, that’s not what trolls are (or at least, not all they are). Trolls, as Phillips reminds us, taunt those grieving the recently dead and spread racist and sexist humor with glee. Trolls are not read-write heroes, but a heavily dissociated subculture interested in the manipulation of hapless others for their own amusement.

Part of what might account for this, I’d like to speculate here, is the slippage between mass media and everyday communicators in the era of social media. This slippage (which Alice Marwick describes in her review of the concept of “micro-celebrity”) encourages us to turn the same detached, critical lens we developed for reading the carefully managed personae of mass-media celebrities onto the ordinary people we interact with in online spaces. For me, this slippage is perhaps best captured in young people’s use of the word “cancelled” to describe someone they have pointedly decided to ignore/block/mute online. We’re all TV now, and if we don’t like what’s on, we “cancel” it.

In the heyday of utopian read-write culture, we hoped to turn television into genuine communities. Maybe part of what has actually happened is that genuine communities have turned into something like television.

Media, Risk, Regulation, and Utopia

The NY Times has a special section today on privacy. One piece proclaims “My Phone Knows All, and That’s Great.” It’s satire, but satire so dry that, in an age dominated by Poe’s Law, it is destined to be taken as sincere opinion over and over and over again. Indeed, it’s not so different from sincere arguments I hear from students all the time: “I wasn’t using my data, why do I care if Google vacuums it all up for ads? What bad thing is going to happen to me if I get a targeted ad, anyway?”

The honest answer to that question is “probably nothing.” Probably nothing bad will happen to you. It’s important to point out, however, that this is also the honest answer to the question “what bad thing will happen if I don’t put my baby in a rear-facing car seat?” Probably nothing. Probably you will go about your day and never have a car accident and the baby will be fine. That’s usually what happens. Almost all the time. Almost.

But of course, that “almost,” that small sliver of a chance that something bad could happen (even though, at the scale of n=1, it probably won’t), scaled to 300 million people, means thousands of children saved by rear-facing car seats. Thus we regulate, and mandate that manufacturers produce safe car seats and that parents install them. It’s an awkward, imperfect, ungainly system. It’s understandable that, as they spend their 20th awkward minute in the driveway trying to install an ungainly child safety device, many parents may briefly entertain conspiracy theories that the whole system is a profit-making ploy on the part of the Graco and Chicco corporations. Nonetheless, we do it, and it basically works.
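The underlying logic here is just expected value at population scale. A minimal sketch, with a per-child risk figure that is purely an assumed illustration (real safety statistics vary):

```python
# Back-of-the-envelope expected-value arithmetic. The per-child risk below
# is an assumed, illustrative number, not a real safety statistic.
per_child_risk = 1e-5        # assumed chance the seat matters for any one child
population = 300_000_000     # the population-scale figure from above

expected_saves = per_child_risk * population
print(f"Children affected at population scale: {expected_saves:,.0f}")  # ~3,000
```

A risk that rounds to zero for any one family still adds up to thousands of cases at national scale, which is the whole case for regulating.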

In the same way, online surveillance is probably mostly harmless at an individual level. At a systemic level, the harms become more plausible. Some individuals may be uniquely likely to be harmed by ads that trigger traumas or prey on vulnerabilities (think of the ads targeted at soon-to-be parents, at the sick, at the depressed). At a society-wide level, better, slicker ads could further fuel the churn of over-consumption that seems to exacerbate, if not cause, so many social ills.

Of course, we’ve dealt with potential harms of ads before. At the dawn of Television, Groucho Marx would open “You Bet Your Life” with an integrated ad for Old Gold cigarettes. We eventually decided that both tobacco ads and integrated ads were bad ideas, and regulated against them (though the latter is on its way back). TV was still able to use advertising as a business model for funding a fundamentally public good (broadcast, over-the-air TV, which anyone could pick up with an antenna, an idea that seems vaguely scandalous in today’s premium-subscriptions-for-everything world). In the same way, we could put regulatory limits on what advertisers can do with our data, how they can target ads, etc. It wouldn’t kill the business model. Oh, the platforms and the ad folks will scream bloody murder about it, because their margins will suffer, but they will survive.

I, personally, would have preferred a slightly more radical option: call it the BBC model for internet content, where every purchase of an internet-connected device would include a small fee paid into a collective fund for compensating internet content providers. Again, this is a reasonable adaptation of public-goods provisioning models from the broadcast age. A flawed mechanism, but one we know works.

Fifteen years ago, there were serious proposals for such a plan, which would have avoided the era of targeted advertising (and the surveillance system it has built) entirely. It never really got any traction, though. Instead, there was an idea in the air that the Internet Was Different. That it would be a mistake to try the techniques of the past on this radical, decentralized medium. That, rather than developing a centralized mechanism for collecting and distributing fees as a business model for content creation, it would be better to allow flexible, horizontal associations of volunteers to build things on their own. These volunteers could build amazing things for free; just look at Wikipedia! Clay Shirky argued that if we could just harness some of the cognitive surplus being wasted on the “heat sink” of television, we could easily build “100 Wikipedia projects per year.”

But, of course, we didn’t build hundreds or thousands of Wikipedias. We built one Wikipedia, and we built Facebook. In retrospect, the radically decentralized Utopia seems like a cover story for the advertising free-for-all we were actually building.

Writing Instruction in the Age of Mechanical Composition

I’m sorry, Dave, this assignment is too important for me to allow you to jeopardize it…

Our students are about to start using machine-learning-driven, semi-automated composition tools: first as illicit tools designed specifically for cheating, and then as a normal part of the everyday composition process. As writing teachers, we need to think about how we’ll respond to this.

First as plagiarism…

What makes me so sure automated composition tools are coming? A recent machine-learning system for generating text developed by OpenAI, called GPT-2, generates uncannily good text output with minimal human intervention.

While the full version of the GPT-2 language model has not been released, and presumably requires some fairly serious computer hardware, a simpler version, called GPT-2 117M, is available for anyone to download and play with on GitHub. It took me, a self-trained coder and system administrator of fairly modest skill, about an hour to get the software up and running on a very standard Windows PC. I fed this simpler version of the software the prompt for our First Year Writing program’s standard writing reflection exercise: “What is Your Definition of Good, Effective Writing?” and got output like this:

My definition of good effective writing is that it may be written with as much care as possible for an author’s pleasure, the same quality which applies to a child. I should therefore be wary of making use of this method when it should serve as a source of truth. I must, in fact, like to point out this method and take advantage of its advantages. It will be an error to say that all good works are good works, but in every case there will always be an out-of-hand way where the good works that might exist ought to be, and where the out-of-hand way is a far better choice. To make good work of a great author is the main virtue of all good works, whether it be of children or of poets or theologians, when there can be any good way from it to any good work, although in rare instances, and not only in cases where there may be a special need to find good works, there may be no other way, and there will always be a special need. In short, a good work is simply written without attention, not without effort. Nothing is more difficult to write, as one might suppose, and the whole idea of writing cannot take place without the use of written prose. If any other way were possible I might consider myself quite open to using this method. No more should people say, ‘We are going to write this book by this night! It is good for us!’ No more should anybody say, ‘It’s good for our children! It’s good for us! It should only be done by one who loves his children!’ This method will be adopted as a rule and used wherever possible.

(GPT-2 117M output)
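For the curious, the same kind of sampling can also be reproduced in a few lines with the Hugging Face transformers library, rather than by cloning OpenAI’s repository. A minimal sketch, not the exact setup described above (it assumes transformers and a PyTorch backend are installed; “gpt2” is the hub name for the small ~117M-parameter model):

```python
# Minimal sampling from the small GPT-2 model via Hugging Face transformers
# (assumes: pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # "gpt2" = the small ~117M model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "What is your definition of good, effective writing?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top_k and temperature control how adventurous it gets.
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```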

The relative simplicity of setting up the available GPT-2 tool, and the relatively modest computer required to run it, both suggest that “auto-generate your assignment” websites will likely crop up in the next few months. Students are likely to use such a resource, much as they already use tools like auto-paraphrasing websites.

Using auto-generated text to cheat on writing assignments is, I would argue, a symptom of larger failures in the way we teach and assess writing. As administrations pack writing classrooms with ever more students, and assign ever more contingent faculty to do the work of first-year writing, the amount of time instructors have to devote to reviewing each writing assignment dwindles. This encourages the use of automated plagiarism-detection tools, like Turnitin, which in turn legitimize the use of automated plagiarism-detection-avoidance tools, like auto-paraphrasers and now, likely, GPT-2-based text generators. Students likely think, “If the instructor can’t take the time to read my assignment, why should I take the time to write it?” Machine reading begets machine writing and vice versa, just as in the now decades-long war of spammer against spam detector.

Legitimate Cyborg Writers and Bullshit Writing Work

While automated writing tools may start out functioning as illicit plagiarism aids, they are likely to spread to the world of legitimate writing tools in short order. In many ways, automated writing is already a part of how we compose. Autocomplete in smartphone messaging apps is the most everyday form of this, and tools like Google’s email auto-responses have begun to extend the role of cyborg writing in our everyday lives. It isn’t hard to imagine a new and improved form of Microsoft Word’s infamous “Clippy” tool that would allow writers to compose various genres of text by selecting the desired sort of document, entering some key facts, and then using GPT-2 or a similar machine-learning-driven text-generation tool to create a draft the author could then revise (or perhaps tweak by setting further parameters for the tool: “Siri, make this blog post 16% snarkier”).
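To make that concrete, here is a purely hypothetical sketch of what such a tool’s interface might look like. Every name and parameter below is invented for illustration; none of this corresponds to a real product’s API:

```python
# Purely hypothetical interface for a "Clippy 2.0"-style drafting assistant.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DraftRequest:
    genre: str                         # e.g. "blog post", "grant proposal", "tweet"
    key_facts: list[str] = field(default_factory=list)
    snark: float = 0.0                 # 0.0 = earnest, 1.0 = maximally snarky

def generate_draft(request: DraftRequest) -> str:
    """Build a generation prompt from the request; a real tool would hand
    this prompt to a language model like the one sketched earlier."""
    return (
        f"Write a {request.genre} (snark level {request.snark:.0%}) "
        f"covering these points: {'; '.join(request.key_facts)}."
    )

print(generate_draft(DraftRequest("blog post", ["grading is slow"], snark=0.16)))
```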

Such a cyborg writing environment may strike some as unsettling. Surely, critics might say, the process of composition is too important to our identity and sense of self to be automated away like this. I think there is some important truth in this critique, which I’ll elaborate on later, but I also think that the world is awash in what we might call (to paraphrase David Graeber) “Bullshit Writing Work”: writing done not because any actual audience wants to read it, but because some requirement somewhere says it simply must be done. Work emails, report summaries, box blurbs, website filler, syllabus mandatory policies, etc., etc. We’ve all at some point written something we knew no one would ever read, just because The Requirement Says There Must be text there. If automated tools can do the bullshit writing work, we should let them. There is no implicit honor in drudgery.

I know that the practice of teaching writing has wrestled for a long time with the problem of bullshit writing assignments, and that many people have done a lot of thinking about how to make student writing feel like something composed with a real purpose and audience in mind, rather than something that simply Must Be Done Because Syllabus. I also know that, in my own experience as a teacher, I often struggle to put these ideas into practice successfully. Too often I find that I assign something I mean to be Real Writing (here’s a scenario, now compose a blog post, a grant proposal, a tweet!) that ends up feeling to students like writing they must do for class while also pretending they are doing it for some other reason. Bullshit on bullshit.

I can’t help but wonder if, as we think about the imminent arrival of even-more-automated cyborg writing tools than the ones we already have, we might use this as an opportunity to think about how and why writing matters. In short, as machines begin to take an ever-increasing role in creating the products of writing, I wonder if we could redouble our efforts to help students understand the importance of the process of writing. In particular, I think we need to focus on the value the writing process has in and of itself, and not as a means for creating a written product. In other words, we might:

  1. Explicitly emphasize writing-to-think and writing-to-learn. Writing is a process, first and foremost, of composing the self (a lesson I learned from Alexander Reid’s The Two Virtuals). Even as “compositions” become automated, the process of self-composition remains something we do in and through written symbols, and keeping those symbols close to the self, in plain text rather than in the black boxes of machine-learning algorithms, remains a powerful tool for thought.
  2. Spend even more time working on pre-writing, planning, outlining, note-taking. Too often, students are simply told to do this work, with the expectation that they can figure it out on their own and that they will really need our help only when they get to the drafting and revising stage.
  3. Embrace cyborg text, and allow it into our classrooms. This doesn’t mean we should abandon writing exercises that might help students build hands-on experience with text. Such exercises will help them build important instincts that will continue to serve them well. But it does mean we should consider teaching how to plan for, engage with, and revise the products of machine-assisted writing as it enters the mainstream.

The ultimate effects of semi-automated writing are far more profound than these few pragmatic steps. Still, these are some ways we might adjust our classrooms in the near term, as we continue to wrestle with larger shifts.

Web Peer Production Timeline

A brief timeline of some important events in the history of peer production on the web (sort of, really the larger 21st century web), just so I can keep the chronology straight for myself. I’ve assembled this as part of prep for an article on the history of Wikipedia, so events I think of as connected to Wikipedia’s emergence are privileged.

This is a note-to-self sort of thing. I constructed it idiosyncratically, remembering things that seemed important at the time and snowballing from there. It’s not meant to be exhaustive or representative.

  • Spring 1985: The WELL founded
  • October 1985: Free Software Foundation Formed
  • August 1988: IRC Created
  • February 1989: GNU GPL Version 1 released
  • April 1989: MP3 Patented
  • July 1990: Electronic Frontier Foundation formed
  • January 1991: First Web Servers Available
  • September 1991: First Linux Kernel Available
  • September 1993: Release of NCSA Mosaic Browser / AOL adds USENET (“Endless September”)
  • January 1994: Yahoo! Founded
  • July 1994: Amazon Founded / WIPO Green Paper on IP (DMCA groundwork)
  • September 1994: W3C Formed
  • November 1994: Beta releases of Netscape Available / Geocities Founded as “Beverly Hills Internet”
  • March 1995: Ward Cunningham releases first wiki software
  • April 1995: First Apache Webserver Release (0.6.2)
  • July 1995: Geocities Expands “Cities” Available for Users
  • August 1995: Netscape IPO / Internet Explorer 1.0 released
  • December 1995: AltaVista search engine launches
  • February 1996: Communications Decency Act passes / “Declaration of Independence of Cyberspace” published
  • December 1996: Flash 1.0 released
  • May 1997: Amazon IPO
  • September 1997: Slashdot begins
  • October 1997: Internet Explorer 4.0 (version that will take majority market share from Netscape) released
  • December 1997: RSS Created / Term “Weblog” Coined
  • March 1998: Netscape releases Communicator source code
  • April 1998: BoingBoing.net at current web address (sources say it began 1995)
  • May 1998: Microsoft anti-trust case (Browser bundling) begins
  • August 1998: Pets.com Founded / Geocities IPO
  • September 1998: Google Founded
  • January 1999: Yahoo! buys Geocities
  • June 1999: Napster service begins
  • August 1999: Blogger launched
  • November 1999: Code and Other Laws of Cyberspace published
  • December 1999: Lawsuit against Napster begins
  • January 2000: 16 dot-com companies run Super Bowl commercials / AOL-Time Warner Merger Announced
  • March 2000: Nupedia goes online
  • March 2000: NASDAQ Peaks and begins to decline / Gnutella released
  • November 2000: Pets.com defunct
  • January 2001: Wikipedia goes online
  • February 2001: Peak Napster Users
  • July 2001: Napster Shut Down
  • September 2001: Movable Type Blog Software announced
  • August 2002: “Coase’s Penguin” published
  • December 2002: Creative Commons releases its first licenses
  • March 2003: Friendster goes online
  • May 2003: WordPress released
  • June 2003: First “Flash Mob”
  • August 2003: Myspace Launched
  • February 2004: Flickr launched / Facebook Launched
  • May 2004: The Anarchist in the Library published
  • October 2004: First Web 2.0 Summit
  • November 2004: Digg Launched
  • February 2005: YouTube Launched
  • June 2005: Reddit Launched
  • March 2006: English Wikipedia has 1 million articles
  • April 2006: Wealth of Networks published
  • May 2006: “Digital Maoism” published
  • June 2006: Term “crowdsourcing” coined / Myspace Overtakes Google as most visited site
  • January 2007: Wikipedia’s editor population peaks and begins to decline (largely unacknowledged until 2009 or so)
  • September 2007: English Wikipedia has 2 million articles
  • February 2008: Here Comes Everybody Published
  • April 2008: Facebook overtakes Myspace as most visited social networking site
  • October 2010: LimeWire shuts down
  • August 2015: Facebook reports one billion users in a single day

My thoughts on Media Manipulation and Disinformation Online

I was interested to see this report on Media Manipulation and Disinformation Online circulating through my networks last week, especially once I saw that one of the co-authors was Alice Marwick, whose work on the political/emotional economy of social media I’ve found really valuable in the past. I got a chance to sit down and spend some time with it over the course of the last few days, and here are a few quick thoughts about the piece.

I think the report does a really great job of illustrating how the radical right has figured out ways to exploit the mainstream media’s reliance on “spectacle” in order to get their message across. In particular, Marwick and her co-author Rebecca Lewis document how media organizations have been placed in a sort of double-bind that makes them vulnerable to manipulation. On the one hand, they show that media organizations, and especially local media, have faced serious economic challenges because of the internet, which has deprived them of subscription and classified-ad revenue. These economic challenges have tended to “hollow out” newspapers, forcing them to reduce crucial reporting and fact-checking labor, often at the behest of corporate newspaper-chain owners that have gobbled up vulnerable local papers. At the same time, Marwick and Lewis write that news organizations have felt more and more pressured by a “constant need for novelty to fill up a 24/7/365 news cycle driven by cable networks and social media.” These pressures leave media organizations particularly vulnerable to trolling, while at the same time driving sensationalist news coverage that “enables trolls to maintain a quasi-moral argument that, by trolling, they are exposing the hypocrisy, ignorance, and stupidity of the mainstream media.”

I was also particularly interested in how Marwick and Lewis document the online right’s strategic appropriation of feelings of weakness and exclusion to radicalize young men. Writing about the spasm of misogynist trolling that labelled itself “gamergate,” they explain that participants were often self-identified “geeks” and that this identity seems based on a feeling of powerlessness.

Although many male geeks are privileged in terms of race and gender, the geek identity has long had a reputation for suffering forms of social oppression. They may have been bullied by a “Chad” (internet slang for the stereotypical tanned, buff “bro”) or had a difficult time pursuing romantic relationships. They may be unemployed and uneducated. This is reflected in some of the terms they use to describe themselves—as “betas” (non-alpha, weak, compromised, fragile, or pathetic men) or “NEETs” (Not in Education, Employment, or Training, a term that originated in a report by the Social Exclusion Task Force of the UK government). Thus, they are very resistant to discussions of white privilege and male privilege as they do not see themselves as privileged, but as disadvantaged.

The “beta male” label, which focuses on gender and feelings of resentment at being “passed over” in romantic relationships, reinforces an overall sense the report gives that gender, and particularly anxieties about the changing definition of masculinity, are at work in the psychology of the alt-right. The “NEETs” label, which I was not previously aware of, suggests a possible role for economic forces, which the authors stress we don’t yet fully understand.

This sense of powerlessness seems to be both established and weaponized by the trolling technique of purposefully baiting mainstream media into stern condemnations of memes and hoaxes. The report quotes Ryan Milner and Whitney Phillips’ analysis of the “Pepe the Frog” affair, which “intended to goad mainstream journalists and politicians into […] panicking over a cartoon frog.” To cite a more recent example, too recent to be included in Marwick and Lewis’s work, trolls seem to have convinced some media outlets that the “OK” symbol has been appropriated as a white supremacist salute, and goaded them into condemning staffers making the “OK” sign in White House photographs. This technique works in the trolls’ favor in two ways, allowing them to demonstrate power by tricking major media outlets while simultaneously claiming victimhood at their “persecution” by hostile media.

This jujitsu-like technique, intentionally courting (over)reaction on the part of opponents, is one I think poses a particularly difficult challenge to those of us opposing the rise of the new radical right. How can we oppose a growing neo-fascist movement in ways that do not provide it with seeming “evidence” it can use to convince potential new members of its oppressed status? Moreover, how do we deal with the fact that efforts to dissuade these groups from using public spaces to organize may drive them into private alternative spaces where they only become more noxious and radical? (Marwick and Lewis document this happening when 4chan attempted to close its doors to the worst of the racist and sexist participants on the /pol/ messageboard, who proceeded to decamp to the alternative 8chan, where “Hate speech and extremist ideology flourished on 8chan, now uncontested by the more moderate voices that had been present on 4chan.”) These are serious tactical questions I don’t think we have good answers to yet. We need them. Soon.

Artists and Spotify: Another Datapoint

A recent article from TechDirt provides an interesting data point in the discussion of how streaming services like Spotify compensate artists. According to the article, a recent study by a French record-industry trade group finds that a large percentage of the revenue generated by these services is paid to record labels rather than artists. To be precise, the study finds that about 45% of the revenue goes to labels, 6.8% to recording artists, and 10% to songwriters and publishers. About 20% of revenue goes to the services themselves. (Click through to the TechDirt article for charts and more discussion of the results.)
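To put those percentages in concrete terms, here is the split applied to a single month's subscription fee. The 9.99 price is my assumed illustration; the shares are the study's figures as reported above:

```python
# Splitting one assumed 9.99/month subscription by the study's reported
# shares. The price is illustrative; only the percentages come from the study.
subscription = 9.99
shares = {
    "record labels": 0.45,
    "recording artists": 0.068,
    "songwriters/publishers": 0.10,
    "streaming services": 0.20,
}
for party, share in shares.items():
    print(f"{party:>22}: {subscription * share:5.2f}")
# The remaining ~18% falls into categories the article doesn't break out.
```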

I've been wondering for a while what Spotify's reportedly meager pay-out for artists means for the future of digital music. Does the low revenue from streaming services mean that the “music as service” model is, in and of itself, not a sustainable way to pay artists? Or does it mean that artists need to re-negotiate their relationships with labels to capture more of the revenue from streams?

This study seems to provide evidence in favor of the second conclusion, but I'm not sure it's the whole story. For one thing, the article mentions that the streaming services are not yet “anywhere close to profitable.” One wonders, then, what the path to profitability for these platforms looks like. Do they need to charge more for subscriptions? Will they want to pay less for content? I'm sure they are telling their investors that they will be able to spin user data into gold, but I'm increasingly dubious about the value of consumer data in an economy where consumer purchasing power is, at best, barely growing.

I'm also not sure we can dismiss all label income as rent-extraction. Yes, production expenses aren't what they once were, and distribution expenses must be lower than they were when fleets of trucks brought plastic discs to physical stores, but the “publication is a button now” rhetoric doesn't tell the whole story about the work and expense that might be involved in digital publication. I need to dig into record company financial data to try to round out the story here.

On Punching

What follows is a badly belated, fairly incomplete response to the Charlie Hebdo affair, leading to a slightly less belated but much more incomplete response to the Wikipedia Gamergate scandal.

I know that #jesuischarlie is now roughly 4 or 5 centuries out of date in Internet-years, but I got to thinking about it after an interesting panel on the topic here, which my colleague Mike Lyons (@jmikelyons) participated in. The discussion focused, in part, on the contradictions raised by the Charlie Hebdo case in the classic western legal and political formulation of “free speech.” What does it mean, for example, when the French government imprisons a comic for expressing opinions that could be read as sympathetic with those who killed the Charlie Hebdo staff?

I was struck by how this discussion of the contradictions inherent in our juridical notions of “free speech” pointed to the way talk of #jesuischarlie often revolved around an instability in a much less formal, but still important, concept: namely, the popular notion that rude or offensive humor is justified only if it is “punching up” against the powerful rather than “punching down” against the powerless. As in many other cases, it was not immediately clear to all which way Charlie Hebdo was “punching.” Were they punching down against immigrants and outsiders, or up against powerful state and religious institutions?

This confusion about “punching up” versus “punching down” seems endemic to our discussions of offensive speech. Theory suggests we should expect this to be the case: after all, intersectionality means power is never really distributed along neat vertical lines. It can be much fuzzier, with different subjects simultaneously holding positions of vulnerability and privilege.

I wonder if, perhaps, this suggests we ought to ask less about the direction of punches, and instead try to think carefully and critically about why so many punches are being thrown.

More specifically, I can't help but wonder if, just maybe, the throwing of punches has less to do with an effort to achieve a particular political agenda and more to do with the performance of a certain sort of masculinity. This suspicion is only reinforced by the fact that, of the 21 editors, journalists and cartoonists listed as Charlie Hebdo staff on Wikipedia, only 3 are women.

What would it mean for us to acknowledge the possibility that we throw punches either because we identify as men and were taught that this is what that identity means, or because we exist in a culture that privileges male violence and we wish to draw on that power? How would that change our debate about offensive speech? Might it be time to consider punching less?

This predilection for verbal violence might also play a role in the conflict surrounding the Wikipedia article documenting the Gamergate phenomenon, which recently made news when the Wikipedia Arbitration Committee moved to ban several anti-Gamergate editors. This decision has been widely, and justly, criticized for removing activists from the project in the name of “neutrality.” I'm still digesting what happened in that case, and I want to write up something more complete about it soon. For now, however, let me just note one of the banned editors' comments in defense of himself:

“As Chris Kyle (or the movie version thereof) said, “I'm willing to meet my Creator and answer for every shot that I took.” I entered a contentious topic area to protect living people from abuse at the hands of the identified “pro-Gamergate” editors named in this Arbitration, as well as the hordes of SPAs on the periphery.”

The quote seems to show this editor, at least, understanding his action as distinctly “macho.” He is engaging in righteous combat against bad people. I am very sympathetic to him, as the Gamergate crowd was indeed a nasty bunch with a vile agenda. The Arbitration Committee, for their part, relied on Wikipedia rules that attempt to discourage verbal combat in favor of constructive content building. Wikipedia, they say, is not a battleground.

It is not unfair, I think, to point out that this call for “peace” may tend to favor the status quo. What might it mean, however, for us to consider that calls to combat may, in part, be driven by a desire to project and maintain a certain sort of masculine identity? I'm honestly not sure, but it seems like a question we might need to consider.

Why I didn't go to the MLA Contingency Panel

I saw it in the program, thought, “that looks really important, I should go to that.”

But then I noticed it was scheduled near dinner, and we were supposed to go to that Mexican place everyone was talking about, with the hot chocolate.

And I thought, “well someone else will go.”

“I'm not very important, I won't be missed.”

“Someone will tweet it and there will be a Storify and I can catch up.”

So I didn't go.

This is just a note to express my regret.

And to try to shame myself into learning for next time.