
Tuesday, November 20, 2012

Hate & the Internet


Does the internet encourage insidious and bullying behaviour?



I remember the first time I logged into a chatroom. It was 1996, and I was using my mum's AOL account to mooch around the world wide web, which was still very much in its infancy. I was in that glorious, unrestricted period of life between college and reality, and the web seemed to offer splendid, unrestricted access to the outside world in a way that no generation had known before.

So it was with cocky confidence that I joined the "general" room as "Dan" (of undisclosed gender) and instantly discovered the thrill of anonymity. Behind my digital mask, I began a brief but satisfying tirade of mockery, contrariness and antisocial behaviour. Of course, compared with the stream of epithets that Xbox Live users encounter playing online, my efforts were pretty tame – I didn't question anyone's sexuality, make any racial slurs or say anything particularly negative about anyone's mother. But the sense of release I experienced in 10 minutes of childishness has remained at the back of my mind ever since I started studying the web; it helps define our behaviour online.

For some, this new technology not only facilitates but actively encourages insidious and novel social ills. Blogs and forums are no-go zones for people who hope for rational conversation; cyberbullying has been blamed for several recent suicides; and white power, homophobic and jihadist organisations have colonised the web, preferring its potential to old-fashioned pamphleteering. It looks as if the web makes it possible for us to hate one another more easily, more efficiently and more effectively.

My mantra is that the web is an agnostic communication platform: it can do nothing to us except reflect who we are. However, as my own little descent into cyber-trollism attests, there are aspects of it that do encourage antisocial behaviour.

The biggie is anonymity, according to Dr Karen Douglas from the University of Kent, who studies the psychology of hatred online. We can log into a forum under a pseudonym, lob a hate bomb and then fade away into the digital ether. It's like playing a trick on Halloween; it's childish, it seems insignificant, and it's kinda fun. Unfortunately, such actions can have real-life consequences depending on who the hatred is directed at, how often it happens and whether there's support in place if the victim needs it.

But is anonymity alone the issue? Philip Zimbardo, professor emeritus at Stanford University, has been studying why people do evil since the 60s, and he says that environmental social cues are equally important. In his famous Stanford prison experiment in 1971, randomly selected, psychologically stable subjects were transformed into brutal prison guards after being given mirrored sunglasses and uniforms and told to play the role.

To reindividuate anonymous members of online crowds, forums, blogs and news sites – including the Observer's – are increasingly asking commenters to register their real names before posting any material (even if they then post under a pseudonym). It's believed that forging this simple link between the virtual and offline persona is why relatively few counter-normative attitudes are expressed on sites such as Facebook, where exposing yourself as racist can turn you into a social pariah. Unless, of course, your friends are racists too. And that's a more difficult problem to solve.

Data traffic indicates that, online, we are increasingly talking to people just like ourselves, relying on our friends' directions to navigate the web. It's ironic that, rather than opening us up to an ever-greater range of opinions and attitudes, social networking sites such as Facebook and Twitter may actually be narrowing our worldview, confirming and reinforcing what we already believe.

So what happens when we only communicate with people like ourselves, and the messages we share only reinforce our mutual hatred? It's a technique radical religious and racist organisations have always used to make sure their members conform, but now they're employing technological tools to create global communities of like-minded ideologues.

Groups such as Stormfront.org and GodHatesFags.com use the web for networking, self-promotion and recruitment. They give support and intellectual ammunition to existing members, rarely explicitly inciting violence. Thankfully, it appears that efforts to convince non-believers to convert to their cause are rarely successful – although we have yet to see the impact of their children's zones (with links to games, and alternative information for schoolwork, that reinforce their ideologies).

It's not all bad news, however. Just as the web is a powerful tool for getting a message out, it's also a good vehicle for exposing that message's flaws. The rampant opinions that dominate online life challenge users to be critical of the content they consume, and considerate in how they construct effective counter-arguments.

Online hatred is real, and it can have a very real effect. But we are in command of the technology; it's not in charge of us. And as for anonymity, back in 1996, even though I hid behind a false name, I didn't throw a hate bomb into that chatroom and run away; no, I was booted out. And frankly, my moment of humiliation was exactly what I deserved.
