Think about a typical trip to the supermarket. After you buy your Greek yoghurt, pre-packaged baby spinach, blueberries and some almond milk for your morning smoothie, you wave your credit card and get on with your day. In that short shopping trip, you have placed your trust in multiple automated farming, packaging, warehousing, and financial systems that helped get your food in your fridge.
Humans place an enormous amount of trust in automated systems. Without the ability to trust complex automated and robotic systems, modern life would scarcely function. Our trust in automated or machine systems forms in much the same manner as our trust in humans. Sometimes, trust develops rapidly. In other cases, it takes time to build enough trust for most people to rely comfortably on an automated system.
Liesl Yearsley, the CEO of Akin, an Australian Artificial Intelligence (AI) company, has argued that we now place so much trust in AI and automated systems that we have relinquished our decision-making power. ‘We’ve seen people form deep relationships with these systems and start to hand more and more decisions over to them,’ Yearsley said. ‘It’s my profound belief that we are moving into a world where we are going to have deeply personal significant others who will be gradually making…decisions for us and we will be handing more life tasks over to them.’
A theory of human trust in machines was first developed in 1994 by Bonnie Muir, who published ‘Trust in automation: Theoretical issues in the study of trust and human intervention in automated systems.’ The study of how humans form trusting bonds with machines has since blossomed.
What makes humans feel comfortable enough to use a machine or system? Muir argued that it was not simply the properties or function of a machine or automated system that led a person to decide to use it. She called trust the ‘intervening variable’: trust plays the role of mediator between the automated system and the user. Build trust, and humans will come.
A good example is the self-service checkout machines currently used in supermarkets in the US and Australia. These machines have been around since 1992, but it took 15 years for most people to feel comfortable defaulting to them. The machine hasn’t changed much since its invention. What has changed is how people feel about the machine. Trust, not technology, enabled widespread adoption.
How is trust developed between machine and user? Is it just a matter of time, or are there some things we will never trust with machines?
Muir argues that three forms of trust develop between people and machines:
Persistence: People trust an established order, whether a natural or a social one. One type of natural order is instructional: machines do what we programme them to do.
Competence: Humans trust technical competence. We trust that the machine will do what it is advertised to do.
Duty: People trust that machines will carry out their responsibilities and obligations, without problem or protest.
Once these three types of trust have been established in machines, Muir says, people begin to trust them and to invest faith in them: trust that extends beyond what they have directly observed of the machine’s behaviour. For some robots, faith forms quickly as the system demonstrates its trustworthiness. For others, depending on the task at hand, people are more reluctant to invest faith.
Two polar examples are smartphones and autonomous cars. The technology to enable autonomous cars has existed for almost 10 years, and trials have repeatedly demonstrated the safety and reliability of the technology. Yet autonomous vehicles have yet to move beyond the test phase in any country in the world.
But smartphones, which have been viable for a similar amount of time, are now ubiquitous. In a decade, we have come to trust our smartphones as repositories of financial, legal and even personal information. I even have an app on my phone that tracks my entire reproductive cycle – my doctor of almost twenty years doesn’t know that level of detail.
I place my trust in my iPhone and the app creator without really knowing what the app does with all of that data. Yearsley argues that we have learned to trust the smartphone so quickly because of the extent to which it makes our lives easier.
We have come to trust that the smartphone has a place in the natural order (we control it), that it is competent (it does what is advertised), and that it fulfils its duty, behaving as it is designed to behave. As a result, we have enough faith in our smartphones to give them information we don’t even give to close family members.
Self-driving cars, on the other hand, have yet to establish that they will do what they say they will do in dangerous situations, to the extent that we feel truly comfortable accepting their ubiquity.
In part two of this series, we consider how trust in machines will affect the future of work. What does our capacity to trust machines mean for our ability to find jobs in the future?