Earlier this year, The Stand, a news platform of the University of Wollongong, published an interview with Sharna Wiblen, a lecturer in Management at UOW, and me on “AI and the future of work”. Among other things, we discussed my research:
In his PhD, Mr Peeters is exploring how our understanding of the mind is influenced by our relationship with technology.
“Smart technologies have not only become ubiquitous, they have also to a large extent become invisible,” he says. “Our relationship with technology has become very intimate in a sense that it’s all around us. It’s gathering data about us, about even our most private and mundane things.”
Our incessant use of technology suggests that we’re okay with that – quick to agree to terms and conditions without reading the fine print when it means we can have news, apps and social media at our fingertips – until artificial intelligence crosses a line that makes us feel uncomfortable, threatened or even compromised.
The relation between mind and computer:
Artificial intelligence can make us feel uncomfortable because it forces us to ask difficult questions of ourselves and reveals truths about our society. But asking those questions also shows how human intelligence differs so greatly from artificial neural networks that mimic our minds.
“You can only copy or create the mind through a computer if you think that there is some deep resemblance going on there,” says Mr Peeters of the dominant theory in philosophy and science that the human mind processes information like a computer.
Ways of improving technology design:
Mr Peeters agrees: “The design of AI algorithms should be more democratic. It should involve the end users as well as people with a certain background in ethics so that they can think through the implications that these technologies will have.
“We must be wary that we now have these addictive algorithms that are designed to push our buttons in ways that we might be vulnerable to.”
And what lessons to learn for the future:
We can also learn from looking at how we engage with technology. There is, for example, concern about how children’s interactions with digital assistants, whether bossy or lazy and dependent, may affect their social or cognitive development.
Mr Peeters suggests considering how technology is influencing our behaviour rather than trying to decide what is right or wrong. “Virtue ethics is an approach that looks at how we can develop a good character, and whether our interactions with technology change the way we behave towards other humans.
“Our use of technology might steer us in a direction where we develop the wrong kind of character states – vices instead of virtues.”
Yet we shouldn’t be afraid of AI, he affirms, because it depends on how we choose to use it. “AI allows us to investigate the true capacity of human cognition.”
Rather than solving difficult problems or making our lives ever more efficient, the greatest benefit of artificial intelligence might just be what we learn about ourselves in the process.