Trust me,
I’m a robot

When it comes to putting our faith in artificial intelligence, how much trust is enough?

Stories about artificial intelligence (AI) generally appear in the news and popular culture because the scenarios they depict are extreme enough to be compelling. Will machines one day become so sophisticated that they conspire to overthrow the human race? Or is the technology so flawed that supposedly ‘intelligent’ systems, such as autonomous vehicles, pose a threat to life and limb through sheer ineptitude?

Both extremes lead to the same conclusion: whether AI is too smart or too stupid – it’s not to be trusted.

However, AI has many potential benefits, which public and user scepticism could prevent society from ever realising. As risky as it may seem to place too much faith in these technological advances, failure to embrace them at all due to unwarranted suspicion could equally work to society’s detriment. So, what is the optimum amount of trust?

Rise of the machines

As is usually the case, the truth sits somewhere between the two extremes. People often seem surprisingly disappointed to hear that AI is unlikely to bring about the downfall of civilised society, but the reality is this: the technology doesn’t currently have what it takes to achieve world domination, and there is little to suggest it will acquire the means, motive or opportunity to do so in the foreseeable future.

But that doesn’t mean there aren’t legitimate concerns about AI being ‘too smart’. The most widespread fear is that AI could take over human jobs in the Fourth Industrial Revolution, as mechanisation, mass production and automation did in the previous three. Technology, having already declared war on manual labour, is now coming after the white-collar worker. But is this fear justified, and can anything be done to allay it?

Teammates, not adversaries

It is true that, as a result of previous industrial revolutions, certain jobs once performed by humans are now carried out by machines; and yet, overall, there are more people in work now than at any other time since records began. It’s easy to forget that, as technology evolves, so do industries and their demand for skills, creating new and unforeseen opportunities. There are very few industries, if any, in which replacing workers with intelligent machines on a large scale would be hugely beneficial. To use AI simply to replicate human activity would be to seriously underutilise it – the real potential lies in its ability to achieve things that humans can’t.

In 2018, QinetiQ conducted a live exercise in which a computer and a human participant were tasked with reviewing the same 44-page document. The computer completed the assignment and was able to provide a detailed summary of the most pertinent points while the human reader was still working through page one. Far from being a threat, this technology could prove extremely useful in any profession where workers are required to sift large quantities of information, such as law, intelligence analysis, procurement, or investigative journalism. The computer removes the time-consuming, resource-intensive labour, granting the human more time to scrutinise and interpret the findings in ways that only humans can.
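
To make that division of labour concrete, the sketch below shows one simple way a machine can pre-digest a long document. It is purely illustrative (it is not the system used in the QinetiQ exercise, and the file name is a placeholder): each sentence is scored by the average TF-IDF weight of its terms, and the highest-scoring sentences are returned, in their original order, as a summary for a human to interpret.

# Illustrative extractive summariser: score each sentence by the mean TF-IDF
# weight of its terms, then return the top-scoring sentences in reading order.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarise(text, n_sentences=5):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) <= n_sentences:
        return text
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()   # average term weight per sentence
    top = sorted(np.argsort(scores)[-n_sentences:])   # keep original reading order
    return " ".join(sentences[i] for i in top)

# e.g. condense a long report to its five most information-dense sentences
# print(summarise(open("report.txt").read()))        # "report.txt" is a placeholder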

However, the value in this technology will only be unlocked with the full trust of those expected to work alongside it. The shift to a new AI-augmented workplace will require significant changes in management styles, corporate cultures and working practices. It will be incumbent upon leaders to maintain up-to-date knowledge of modern AI, understand its capabilities and limitations, and, through training, equip employees with the contemporary skills they need to partner with the technology to the greatest effect.

"Skills and training will adjust to accommodate AI, just like the engineering profession evolved to survive the transition from steam to combustion engines."
Peter Bebbington and Kasia Borowska, co-founders of AI expert network Brainpool.ai, see the technology as an opportunity, not a threat, for the labour market.


AI must work for everyone

If we are to expect users to trust AI, the technology must prove itself worthy. But there have been numerous high-profile incidents recently in which even rudimentary intelligent systems have fallen short.

In August 2017, Twitter user Chukwuemeka Afigbo posted a video of an automated soap dispenser failing to activate in the presence of his dark skin, yet dutifully fulfilling its obligations when presented with a white napkin.

This led the online community to speculate as to whether the dispenser was being racist. Of course, not being sentient, it wasn’t. But AI holds up a mirror to its creators, and this incident sheds damning light on the infiltration of human cognitive bias into testing and evaluation processes. By the nature of the fault, one can suppose that the light sensor, tasked with recognising a human hand and telling the motor to dispense soap, was subjected to repeated tests – on white skin – before being certified as fit for service.   

If those responsible for testing a gadget like this unconsciously assume white to be the ‘default’ skin colour, we should not be surprised if that bias manifests in the machine’s behaviour. Such bias erodes trust. If we’re unable to avoid it in something as simple as a light sensor, imagine the scale of alienation that could be inflicted by something many times more complex.

"[An organisation] will be just as liable for the machine’s actions as it is for those of its human employees."
Chris Eastham, an expert in legal issues relating to AI and robotic process automation at law firm Fieldfisher, examines liability for the discriminatory actions of an intelligent machine.


Bringing the user into the development cycle

The solution here is easy to identify, if not to implement. The very first step in any development process, before design and testing, must be to piece together a detailed picture of all those who are likely to use the end product.

The next step is to understand how they will use it. A development team’s ability to define requirements on behalf of its users and anticipate flaws can be weakened by its unconscious biases and blind spots. So, why not ask the users?

Lastly, user involvement should not end when the product is deployed. Not even the most diligent development team can hope to foresee every possible scenario, and that’s ok. A willingness to collaborate with users, listen to their feedback and adapt the product accordingly builds trust and establishes a continuous cycle of improvement that benefits both sides.

Check your sources

Bias in an AI system does not always originate solely from an individual or team, but can equally reflect the prejudices of society as a whole, particularly in the field of machine learning.

Luminoso is a language understanding company that uses AI to extract meaningful data from unstructured text. While experimenting with sentiment analysis, Head of Science Robyn Speer set out to examine what themes the system considered positive and negative in a set of restaurant reviews. When she noticed Mexican restaurants were receiving lower ratings than all the others, she did a little digging:

“When I looked at the cause, it wasn’t anything about the quality that was mentioned in the reviews. It was just the presence of the word ‘Mexican’ that was making the reviews come out with lower sentiment. It’s not that people don’t like Mexican food, but [the system] that has input from the whole Web has heard a lot of people associating the word ‘Mexican’ with negative words like ‘illegal’. That shows up in sentiment analysis.”
Robyn Speer, Luminoso

This example underlines the necessity of maintaining a keen awareness, throughout development, of the material that the system is using as the basis for its judgements. It is vital that every individual involved in the training and testing of an AI system remains alert to anomalies, and is supported by a working culture in which they feel empowered to flag suspicions. A successful development team will have an instinct for spotting results that don’t quite fit and will habitually review the source data to identify the causes.
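
The mechanism behind the Luminoso finding is easy to reproduce in outline. The sketch below is a simplified illustration rather than Luminoso’s pipeline (the embedding file and the tiny word lists are stand-ins): a sentiment classifier is trained on a handful of labelled words using word vectors learned from web text, then asked to score two sentences that differ only in the cuisine they mention. Any gap between the two scores comes from associations baked into the vectors, not from anything the sentences actually say.

# How web-trained word vectors can leak societal bias into sentiment analysis.
# Simplified illustration only; the file path and word lists are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_embeddings(path):
    """Read word vectors from a GloVe-style text file: word v1 v2 ... vN."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.split()
            vectors[word] = np.array(values, dtype=float)
    return vectors

embeddings = load_embeddings("web_vectors.txt")              # placeholder file name
positive = ["good", "excellent", "delicious", "wonderful"]   # toy sentiment lexicon
negative = ["bad", "awful", "terrible", "disgusting"]

X = np.array([embeddings[w] for w in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))
classifier = LogisticRegression().fit(X, y)                  # word-level sentiment model

def sentence_sentiment(sentence):
    words = [w for w in sentence.lower().split() if w in embeddings]
    mean_vector = np.mean([embeddings[w] for w in words], axis=0)
    return classifier.predict_proba([mean_vector])[0, 1]     # probability of 'positive'

# The sentences are identical apart from the cuisine, yet the scores can differ,
# because the vectors carry the associations of the text they were trained on.
print(sentence_sentiment("let's go to an italian restaurant"))
print(sentence_sentiment("let's go to a mexican restaurant"))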

Prising open the black box

If artificial intelligence is to play a useful role in our society it must be safe, accurate, reliable and free from bias – but these qualities will only generate trust if they are highly visible to those being asked to embrace the technology. The difficulty is that the principles upon which AI systems make their decisions are rarely evident to the user, as the models used to inform outcomes are often so complex as to be beyond human comprehension.      

The opaque nature of AI and machine learning models could become the single most significant source of mistrust. Today, if we are refused a mortgage by a human decision maker, it pays to know why the lender rejected the application. If we feel the decision is unjust we are then able to challenge it. If we accept that it is fair, we can modify our own behaviour to improve the odds of success on our next attempt. In the future, if a machine makes that decision, it is only right that we demand the same degree of transparency.
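
What that transparency could look like in practice is sketched below, using an inherently interpretable model trained on a handful of made-up loan records (the feature names and figures are invented for illustration). The model’s entire decision logic can be printed as plain rules, so a refused applicant can see exactly which thresholds the decision turned on.

# An interpretable loan-decision model whose reasoning can be shown to the
# applicant. The features and training data below are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]
X = np.array([
    [55000, 0.20, 6, 0],
    [23000, 0.55, 1, 3],
    [41000, 0.35, 4, 1],
    [18000, 0.60, 0, 4],
    [72000, 0.15, 9, 0],
    [30000, 0.45, 2, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = approved, 0 = refused

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model's entire decision logic, rendered as human-readable rules:
print(export_text(model, feature_names=feature_names))

# A refused applicant can be told which rule produced the outcome and, crucially,
# what would need to change for a different result.
applicant = np.array([[26000, 0.50, 1, 2]])
print("Decision:", "approved" if model.predict(applicant)[0] == 1 else "refused")

A single decision tree will rarely match the accuracy of the opaque models described above, which is why much current research focuses on making more powerful models explain themselves.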

Among those working on a remedy is the USA’s Defense Advanced Research Projects Agency (DARPA), whose Explainable Artificial Intelligence (XAI) programme seeks to promote trust by increasing the transparency of AI decision-making. Program Manager David Gunning describes how:

“Explainable AI – especially explainable machine learning – will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners. New machine learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user.”
Mr David Gunning, DARPA

DARPA’s programme may be focused on defence, but the principle applies across the board. Information empowers us. In the Fourth Industrial Revolution, machines will make more and more decisions that affect our everyday lives – from loan requests, to insurance policies, to job applications. If we are to trust and respect those decisions, the rationale behind them must be displayed in plain sight.

"It’s not enough to have confidence in the mechanical performance of an intelligent machine. AI demands a trust more like that between humans."  
Professor Paul Cornish, independent analyst and editor of the Oxford Handbook of Cyber Security, asks: will our benchmark for trust need to change if we are to accept AI?


The weakest link?

Humanity is fast approaching a tipping point, beyond which the greatest barrier to AI adoption will no longer be the fallibility of the technology, but that of human thinking. The burden is on us to create transparent models that produce fair, explainable outcomes. Only then will the technology truly earn the public’s trust.
