New Media and Algorithmic Censorship

Media Sociology discussions of censorship understandably tend to focus on various forms of human agency, whether this relates to areas like:

  • overt forms of government censorship, where media outlets are directly controlled or censored by the state (as in the former Soviet Union or contemporary Iran).
  • overt and covert forms of proprietorial censorship. This might involve media owners deliberately failing to publish views with which they disagree (the Daily Mail, for example, rarely publishes opinion pieces favourable to political parties it doesn’t support). Alternatively, censorship may take on a slightly more cultural turn: “Fox News blurred out the breasts on a £115 million Picasso painting for some reason”.
  • self-censorship, whereby media employees do not research or cover stories they know will not be printed or published by their employer, an idea that reflects an understanding of contemporary news values.
While these forms of censorship are important and interesting, a slightly different dimension worth introducing into the mix is algorithmic censorship, for two main reasons:

    1. It’s an aspect of New Media that clearly demarcates it from Old Media, in the sense that the former introduces a new, more widespread and more pervasive form of censorship.

    2. It involves the automation of censorship in a way that gives the appearance of non-censorship. This article should help to illustrate how this works: the various forms of censorship that develop when software is programmed to pick up or demote different types of news story on sites such as Facebook (a simple sketch of this kind of demotion also appears below).

Alternatively, this article on algorithmic filtering, of the kind consistently and repeatedly carried out by social media sites like Facebook and YouTube, adds an interesting Gramscian twist to the analysis that your students might find illuminating.
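
To make the idea of demotion a little more concrete, the short Python sketch below shows how a feed-ranking algorithm could quietly bury a story without ever deleting it. The keyword list, the penalty multiplier and the rank_feed function are all invented for illustration; they are assumptions, not a description of how Facebook or YouTube actually rank content.

```python
# A minimal, hypothetical sketch of algorithmic demotion: the post is never
# deleted (so there is no visible act of censorship), but its ranking score
# is quietly reduced so far fewer users ever see it. The flagged terms,
# weights and scoring rule are invented for illustration only.

FLAGGED_TERMS = {"protest", "strike", "leak"}   # hypothetical "sensitive" topics
DEMOTION_FACTOR = 0.1                           # assumed penalty multiplier

def rank_feed(posts):
    """Return posts ordered by score, demoting any that mention flagged terms."""
    ranked = []
    for post in posts:
        score = post["engagement"]              # baseline: how popular the post is
        if any(term in post["text"].lower() for term in FLAGGED_TERMS):
            score *= DEMOTION_FACTOR            # demote rather than delete
        ranked.append({**post, "score": score})
    return sorted(ranked, key=lambda p: p["score"], reverse=True)

feed = [
    {"text": "Cute cat video", "engagement": 120},
    {"text": "Workers announce a national strike", "engagement": 300},
]
for post in rank_feed(feed):
    print(post["text"], post["score"])
# The strike story has far more engagement, yet the cat video now tops the feed.
```

Because the demoted post is still technically available to anyone who searches for it, the platform can plausibly claim that nothing has been censored, which is precisely what gives this kind of automated filtering its appearance of non-censorship.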
