
Glitch 2021  •  13 May 2021

Technology’s Implicit Racial Bias: Instagram filters, security and facial recognition

By Katherine Rajwar

Here’s a question — why is my Instagram so intent on making me look white?

Filters, to some extent, have become an integral part of the user experience on social media. Since the app’s development in 2010 — yes, you read that right, Instagram is more than a decade old — we’ve become accustomed to enhancing our photography through the lenses of Gingham, Moon, Juno and Lark. In 2016, the launch of Instagram Stories changed the game entirely. At first, flower crowns and puppy ears seemed like a fun enhancement to ye olde mundane selfie, but the inclusion of so-called 'beauty' filters points to something far more sinister lurking beneath the surface.

Augmented reality filters have already received much criticism for perpetuating unattainable beauty standards. I can take a photo right now and transform my real face into a pore-less, blemish-free, baby-faced beauty. There are several accounts of dysphoria around these filters, suggesting that there’s an inherent disconnect between the user’s physical appearance and the version of their face they have manipulated using the app. What’s more, several plastic surgeons have encountered clients who request that their filtered face be used as a reference point for cosmetic surgery procedures.1

There are, of course, arguments to be made in favour of these filters — namely, the confidence boost they can give users. Particularly with regard to influencer-led campaigns, using a filter to appear more professional and put together is no different from a television host donning a face of makeup to appear on screen.
The effects of these filters can seem mildly damaging, yet ultimately harmless, especially when your beautified face is accompanied by cat paws and a funny voice. This seems to be the case, until we acknowledge that

"Many of these 'enhancements' offered by beauty filters are glaringly in favour of Eurocentric beauty standards."


Right, let me explain. As I write this, I have the front camera open on my phone, and before me sits my face, via Instagram. Using the browse filter option, I’ve searched broadly for 'beauty.' There are a few commonalities in all the filters I can choose from. The first is that my skin, while smoother, is significantly lighter. Secondly, my bone structure has changed dramatically. I have cheekbones that could cut glass. My eyes appear far lighter, and perhaps more alarmingly, my quintessential South Asian nose, inherited from my Indian grandfather, is gone. In its place sits a high-tipped, tiny, Kylie Jenner-esque nose, staring me in the face. I hate to admit this, but returning to an unfiltered selfie is somewhat jarring. My own flaws glare back at me — my brown skin, my blemishes, my nose, my eyes (now far less open).

But it’s just an illusion, right? Just a bit of fun?

"There’s a blatant issue here, one in which Instagram regards certain features as 'flaws' and others as 'enhancements.'​"

This raises the question: can technology be racist? Oh yes, my friend, yes it can. Allow me to elaborate. Instagram is not the only place where facial recognition technology gets a little murky. Apple’s own Photos application includes a feature that groups photos of the same person together. My phone has grouped photos of me and another South Asian friend as the same person.


A quick Google search turns up a plethora of accounts of similar incidents. A study at the Massachusetts Institute of Technology (MIT), which measured how facial recognition technology performed on people of different races and genders, found that “three leading software systems correctly identified white men 99% of the time. But the darker the skin, the more often the technology failed.”2

As if this wasn’t enough, even a shallow dive into Reddit surfaces multiple accounts of 'My iPhone Face ID thinks I’m my sister.'

These responses are overwhelmingly from people of colour.

Worried yet? Allow me to take this one step further. What happens when we consider the implication of racially biased software in facial recognition technology used by our government?

The Federal Government introduced a new piece of legislation in 2019, known as the Australian Passports Amendment (Identity-matching Services) Bill. The bill proposes an increased use of facial recognition technology, particularly for border security — so that’s airports, seaports, etc. It was put forward on the basis that it would “prevent crime, support law enforcement, uphold national security, promote road safety, enhance community safety and improve service delivery.”
While the use of facial recognition technology seems somewhat unavoidable (there’s talk of it being built further into our security systems), the possibility of mistaken identity in the case of racial minorities is alarming, to say the least. The mass incarceration and deaths of Indigenous people in Australia, and the events of the Black Lives Matter movement last year, sparked by the murder of George Floyd at the hands of police, highlight the implicit bias that people of colour already face in the eyes of the law. But when the law is informed by technology capable of such errors, things get worrying indeed.

So, the question remains — what now? It seems as though facial recognition technology is so embedded within our lives that it’s somewhat inescapable. Perhaps a benchmark mandating the standard to which this technology must operate is necessary. Such developments must be designed with all individuals in mind — considering race, ability and gender. Ultimately, access, and even more so security, should be available to everyone.


References:

1. Snapchat photo filters linked to rise in cosmetic surgery requests: https://tinyurl.com/3zm6jb8f

2. Study finds gender and skin-type bias in commercial artificial-intelligence systems: https://tinyurl.com/93t6xwnj
