We Need A Moon-Landing Scale Response To Solve Tech's Crisis Of Legitimacy

50 years ago, we saw what humanity was capable of. And we can do that again, but before we do, we need to ensure AI is for the good of society, Ivana Bartoletti writes.

50 years ago, humans landed on the moon for the first time – showing what humanity was capable of. Nowadays, not a day passes by without news about yet another development in artificial intelligence enabling us to live longer, better and much more easily than we do now. With the support of AI machines, we can now detect cancer and Alzheimer’s much sooner, identify abnormalities in health scans and spot cyber threats before they materialise. All this is undoubtedly good and exciting.

However, it is fair to say that tech is going through a crisis of legitimacy. Many of us are realising that there is something inherently wrong when our personal data is harvested and shared with little accountability and transparency. Scandals like the one involving Cambridge Analytica have given us a glimpse into the power of data and of predictive analytics: software that profiles us and uses algorithms (similar to those used by gambling platforms) to keep us hooked on social media for as long as possible, so that our data can be exploited for the benefit of advertisers.

This crisis of legitimacy has long been in the making – and tools like the General Data Protection Regulation (GDPR) have helped direct attention to how tech intersects with freedom, privacy, autonomy and politics.

Artificial intelligence is part of all this, because algorithms need data to be trained on. The availability of large data sets – collected when we use public transport, shop, browse online or consume energy through our smart meters – is perhaps one of the reasons why AI (which is not new at all; just think of Alan Turing) is back in fashion and progressing so rapidly.

This abundance of raw material means vast amounts of data can be fed into algorithms, allowing them to identify patterns and perform (at least for now) basic, repetitive tasks better and faster than humans have managed so far. In medicine, for example, the potential is extraordinary: a doctor’s intellect and training can be augmented by a machine when detecting a disease.

AI, however, has already shown its dangers, and we have seen how it can be a tool of repression, social control, discrimination and manipulation. Facial recognition is a key example: being watched, or thinking we may be watched, changes our behaviour and our relationship with the public spaces we inhabit.

Already, algorithms can make ‘decisions’ about many things. They can be trained to sift through thousands of people and decide how likely we are to reoffend, whether we should receive housing benefit or a loan, or whether we are called for a job interview. This is where many problems arise.

Sadly, there is no shortage of headlines about failed machine learning systems that amplify sexist hiring practices, racist criminal justice procedures, predatory advertising and the spread of false information. Amazon had to scrap a secret recruitment tool that systematically favoured male candidates’ CVs. Facial recognition works well for white men, with accuracy as high as 99 per cent; but for women – and black women in particular – it can drop to as low as 35 per cent, meaning there is a very real threat of wrongful arrest, prosecution and punishment based on false data. COMPAS, the US software used to predict reoffending rates, was shown to be skewed against BAME defendants, assigning much higher risk scores regardless of the crime committed.

Moreover, automated decisions mean that software can lock people out of essential services, leaving the burden of challenge and appeal on the most vulnerable. Cathy O’Neil calls such systems ‘weapons of math destruction’, and Virginia Eubanks writes of automating inequality. As software takes over more and more of the tasks humans used to do, we run the risk of coding our societal inequalities, such as misogyny and racism, into our systems.

Bias in algorithms arises for many reasons – including the fact that AI learns from historical data, and historical data reflects the bias inherent in society. Without careful consideration and the willingness to identify where bias (often unconscious) may arise, AI will only perpetuate and amplify the stereotypes and prejudices present in society right now.

A diverse workforce is essential, from the people coding the algorithms to the wider management of the companies deploying them.

This is why we need to intervene, and quickly.

50 years ago, humans landed on the moon for the first time – showing what humanity was capable of. We now have the chance to demonstrate once again that humans can make the most of technology to reach new heights. To do so, we need to ensure AI is for the good of society. The good news is that ethics is now widely discussed, and people are coming to terms with the dangers as well as the opportunities. But relying on voluntary “ethics” without strong regulation is a risk we cannot take, and I am encouraged that the new President of the European Commission, Ursula von der Leyen, has included legislation on the human and ethical implications of artificial intelligence in her (long) list of goals.

But a question does remain: what is human and ethical cannot be defined by the same chiefs who are running the digital show right now. Women must be at the heart of it, and this is why the future of AI is the future for feminism, too.

Ivana Bartoletti is the co-founder of Women in AI and head of data and privacy at Gemserv.
