Digital Detritus: How the Ruthless Pursuit of Visibility and Influence is Creating Digital Pollution

Written by BlackbirdGo December 18, 2018

Do you remember Limewire?

I can't remember the first time I used the infamous file-sharing app, but I'm sure it was to get something stupid that my nine-year-old self had no business getting my hands on... a fact I was memorably reminded of when my computer ended up being one of the hundreds of thousands of Windows machines affected by the Bebla virus. Learning to reformat my drives at a young age, after exposing the family computer to something unsavory on the web, was maybe not the most formative experience of my life, but it planted a seed in my thinking that has long framed how I see the way people interact with the internet.

While antivirus software, ad-blockers, and a growing population of so-called "digital natives" (people raised as much by their access to a treasure trove of information at their fingertips as by their parents) have created an illusion of safety-in-savvy, I can't help but remember how much more dangerous the internet seemed back in the Limewire days, when every pop-up seemed like a potential exposure if you knew what you were doing, or contained a nasty surprise if you didn't. Yet under a more scrutinizing gaze, I can't help but feel that the internet is a far more dangerous place now, in large part because its threats, while no longer obvious to most, are still salient as hell.

It is easy, and I think false, to assume that the savvy required to navigate the handful of websites that now dominate online life in the United States equates to an ability to navigate it safely. For the most part, despite the occasional jarring security breach (like Facebook's incident this September), that assumption seems to hold: it no longer seems all that likely that the average user will be exposed to viruses as crippling to their computing as W32.Bebla was for me as a child. But this view too readily dismisses the reality that, for most users in 2018, the lines between online life and offline reality are so blurred as to sometimes seem indistinguishable. And that's where it becomes appropriate for us to discuss a virus of a different kind.

What if the viruses of today don't break the computing power of our devices—but instead corrupt the computing powers of our minds and the feeling capacity of our hearts?

Digital viruses, like their physical counterparts, are essentially infecting agents that replicate a set of instructions that can damage the host. They can be used to mine sensitive data, to destroy or conceal critical information, or to take money from innocent people by tricking them into "harming" themselves through some socially engineered behavior that seems on the up and up if you don't know better. While users (and ethical people within the Technorati) are savvier these days and less likely to fall for such traps, the "bad guys" (if we want to reduce these kinds of behaviors to moral categories) have beaten many of us to the punch in understanding how the tools handed to us in digital life might be misused for personal or political gain.

We are, as a society, "enjoying" the fruits of that labor as we speak, and it's hard to imagine that disinformation campaigns gained digital traction overnight. Really, infecting societies with destructive "code" has been a mechanism used by malicious individuals (and states, depending on your personal relationship with morality and propaganda) for generations. What has changed in recent years is the terrain: online life has become so salient that many people would be hard-pressed to say which parts of their relationships, personalities, and choices "originated" online and which ones didn't.

And so it is the case that, in 2018, malicious actors have all the tools they need to manipulate our very online minds with sinister code.

They don't even have to work particularly hard at it. In many cases (as appears, most unflatteringly, to be true of Facebook), the largely self-governing bodies of the online world are happy to coax their users into an abusive, exploitative relationship, one in which we offer up our data, and with it the superstructure of our individual and broader public psyche (the macrocosmic thinking that, in a flawed democracy, can be exploited to harm us), to any party with the resources to purchase it.

"Share," some digital organizations encourage. Meanwhile, they might just be taking from us, and using a "virus" of social engineering to make us foist over the goods—us as products to often shadowy powers—as willing, even eager, victims of their exploitation.

Given that I work in public relations and marketing outside of my life as a writer of nonfiction, it might be somewhat hypocritical of me to comment on the broader social costs of these kinds of relationships. But, as Mara Einstein, author of Black Ops Advertising: Native Ads, Content Marketing, and the Covert World of the Digital Sell, points out, the responsibility falls as much to practitioners of "the craft" to conduct themselves ethically and confront the issues we encounter (and sometimes create) as it does to "the techies," to users, and to representatives in government who might otherwise seize the opportunities these technologies present for dealing with (or manipulating) constituents without a thought for the consequences.

In a short series of tweets published in 2016 by computer programmer Brian Mastenbrook, a case for an ethical reassessment of our relationship with technology was succinctly made:

"Here's an idea: before we teach everybody to code, how about we teach coders about the people whose lives they're affecting?" Mastenbrook asked. "Not a crisis: not everybody can code. Actually a crisis: programmers don't know ethics, history, sociology, psychology, or the law."

As is the case with other forms of pollution, digital pollution—my own term for socially hazardous digital content designed to benefit individuals, companies, or government administrations with little thought toward harm reduction—most impacts those who do not readily enjoy access to resources like influence, money, or significant digital literacy. And that's a far broader demographic than many might be inclined to think.

As it becomes increasingly apparent that a mixture of technologists, public relations practitioners, politicians, and power-brokers have intentionally or otherwise managed to misuse the wonderful tools and connectivity of the internet toward a potentially hazardous end, I suspect that Mastenbrook is right. I'd like to believe that we're not too late to turn things around.