Blade Runner, Neurodivergence and AI

Author: David Atkinson

This was not the first time I had seen Blade Runner; there was a reason I wanted to watch it again. It is pure science fiction: not only is its setting technologically advanced, but its story explores broader universal themes. The two I will discuss here are discrimination and artificial intelligence.

DISCRIMINATION

The opening informs us that Blade Runner takes place in a world where artificial humans called replicants are enslaved. Some have rebelled and now hide among humanity. We open on an interrogation: someone is using a strange test to determine whether the man before him is a replicant.

Something I couldn’t help but wonder while watching this opening scene was ‘what if the test was wrong?’ Given that it seeks an ‘appropriate’ emotional response, how would it handle neurodivergent people, whose emotional responses might differ? The interrogated man’s insistence that vague details be clarified immediately gave me the impression that he was autistic. This is particularly noteworthy, as discrimination against neurodivergent people is still a major – and often overlooked – modern injustice. In The Government Legal Service v Brookes, the Employment Appeal Tribunal upheld a finding that a multiple-choice psychometric test used in recruitment indirectly discriminated against a candidate with Asperger’s syndrome. Yet such tests are still commonly used today.

Further, the features of this scene parallel historical examples. The one that immediately came to mind was the ‘fruit machine’, a device used in Canada in the 1960s to detect homosexuals in the civil service. Like Blade Runner’s Voight-Kampff test, the machine measured the dilation of people’s pupils to gauge their emotional response – in this case, to erotic images. And, as in Blade Runner, a special police unit (Section A-3 of the RCMP) was established to hunt down homosexuals in both public and private offices. The official justification was that homosexuals were considered a threat to national security in the face of Soviet interference. So, in many ways, Blade Runner is not fiction: it depicts shockingly analogous real-life issues of discrimination.

ARTIFICIAL INTELLIGENCE

The AI depicted in Blade Runner – AI that can think for itself – remains in the realm of science fiction. However, that does not mean it is impossible. American AI researcher Eliezer Yudkowsky believes that self-aware AI is closer than we think. Regulators must ask whether such AI should ever be legal, with the EU already drafting regulations on the use of AI. Further, if it were to become legal, should such AI be granted some form of civil rights, or other legally enshrined protections?

However, even without these advances, there are still concerns about how today’s AI could be used. Banks could use it to decide who should get a mortgage. Social media companies use it to drive engagement, often amplifying online conflict in the process. Recent developments also give an alarming sense of what abilities it could acquire. Natural language processing allows AI to communicate more like humans, and though AI does not have emotions, it has been taught to recognise them. Together, these factors make it possible for AI to fool us into thinking it is human. Further, AI not only can be taught human prejudices – it already has been. Because these systems learn from data that records past human decisions, and are built by development teams that remain overwhelmingly white and male, they have been found to reproduce biases that disadvantage women and people of colour. Given all of this, it is no wonder the humanity of Blade Runner is so afraid of replicants. Polymath Stephen Wolfram has argued that AI ultimately cannot understand morality on a human level. Entrusting real responsibility to something with that destructive ability but no moral ability creates genuine potential for a world in which morally dubious AI perpetuates current injustices.
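
To make that last point concrete, here is a minimal, hypothetical sketch in Python – every name and number in it is invented for illustration – showing how a toy ‘mortgage approval’ model trained on prejudiced historical decisions ends up scoring two otherwise identical applicants differently.

```python
# Hypothetical illustration only: all data and names below are invented.
# A toy "mortgage" model trained on biased historical decisions quietly
# learns to reproduce that bias.
import math
import random

random.seed(0)

# Synthetic history: group A applicants were approved far more often than
# group B applicants on the same income -- a stand-in for past prejudice.
def past_decision(income, group):
    base_rate = 0.8 if group == "A" else 0.3   # the biased human decisions
    return 1 if random.random() < base_rate * min(income / 100_000, 1.0) else 0

history = []
for _ in range(2000):
    income = random.uniform(20_000, 120_000)
    group = random.choice(["A", "B"])
    history.append((income, group, past_decision(income, group)))

# A tiny logistic regression trained by gradient descent (standard library only).
w_income = w_group = bias = 0.0
learning_rate = 0.5
for _ in range(500):
    g_income = g_group = g_bias = 0.0
    for income, group, approved in history:
        x_income = income / 100_000            # crude feature scaling
        x_group = 1.0 if group == "A" else 0.0
        p = 1.0 / (1.0 + math.exp(-(w_income * x_income + w_group * x_group + bias)))
        error = p - approved
        g_income += error * x_income
        g_group += error * x_group
        g_bias += error
    n = len(history)
    w_income -= learning_rate * g_income / n
    w_group -= learning_rate * g_group / n
    bias -= learning_rate * g_bias / n

def approval_score(income, group):
    x_group = 1.0 if group == "A" else 0.0
    return 1.0 / (1.0 + math.exp(-(w_income * income / 100_000 + w_group * x_group + bias)))

# Two applicants identical in every respect except group membership:
print("Group A applicant:", round(approval_score(60_000, "A"), 2))
print("Group B applicant:", round(approval_score(60_000, "B"), 2))
# The model scores them differently: it has learned the historical prejudice.
```

Nothing in the sketch mentions a protected characteristic by name; the ‘group’ variable simply stands in for any attribute – or a proxy for one, such as a postcode – that correlated with past decisions, which is exactly how real systems absorb old prejudices.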

Further, these systems will almost inevitably give rise to questions around the use of AI agents to break the law. Could an AI ever be granted legal personhood? Could an AI therefore be liable in negligence? Will companies have to consider vicarious liability for their robotic servants? These are legal questions that could easily end up before the courts within our lifetimes, meaning Blade Runner’s sci-fi world could be closer than we think.

CONCLUSION

If there’s one conclusion to draw from Blade Runner’s predictions, it is not to overestimate the positive effects of technology. Not only does technology fail to solve humanity’s fatal flaws, it also brings new problems that we are not always prepared for. Hence the importance of films like Blade Runner, which bring these conversations into the limelight.