How to build useful AI in a complex and uncertain world

Over the past decade, AI technology has progressed by leaps and bounds and has become an important part of our daily lives in a knowledge-based economy.

January 5, 2023
Technology

On the one hand, the scientific insights provided by AI continue to grow enormously, and at an accelerating pace. On the other hand, we are confronted with a growing number of unanswered challenges and misunderstandings: privacy concerns, ethical risks, the inefficient energy consumption of our current AI solutions, and so on.

What causes this tension? Is it the result of a Babylonian confusion of tongues, a deep misunderstanding between conversation partners who are simply too different in nature? A confusion that causes people to work on AI solutions side by side, but in parallel silos? Or is there more to it?

Let’s go back in time for a bit. According to biblical mythology, the people of Babylon were building a tower in order to reach the heavens and become like God. But he punished their pride with a great confusion of tongues, so that they no longer understood each other and were no longer able to collaborate. The deep tech industry today bears a similar pride. Its players may not want to reach heaven, but they dream of achieving Artificial General Intelligence (AGI): the hypothetical intelligence of a machine that can understand or learn any human intellectual task. It is a primary goal of AI research and a common topic in science fiction and futures studies. In that way, technology has replaced religion, and the engineer has replaced God.

But after centuries of technological advances and an almost exponential increase in computing resources, data, knowledge and capabilities, we still haven't achieved our vision of AGI. In fact, we're not even close. We have devices we can talk to that don't really understand what we're saying. Machines can detect images, but don't really understand what they show. And we have fantastic machines that can beat world champions at specialized games, but that can't even answer some banal questions. In Babel they may have had a plan for what the tower should look like, but we don't actually agree on what AGI should look like.

We do not have a plan to achieve that dream.

Could the hubris of a few tech companies in this area be causing a societal rift? Is it the source of the current tension around AI?

Not a technology, but a purpose

We tend to think of AI as a technology, rather than as a journey towards a higher end. But perhaps we should take a different approach. All the technologies we've developed along the way in our quest are useful in their own right, of course, but - adding them all together - we still haven't reached our ultimate goal. If AI is considered a collection of technologies, then you can spend all day arguing about what AI is and what isn't. Are software robots AI? Are self-driving cars AI? Is computer vision AI? Is character recognition AI? If you think of AI as a technology, it will always be subject to disagreement and interpretation. However, if you think of it as a purpose, or even a quest, then it becomes something we're always striving for, even if we're not quite there yet.

That's why it's important to understand that AI today is no longer just a technology. It’s a lot like the Space Race, in fact. That was not a technology either. Many great technological developments originated from our quest to conquer space, but the true goal was putting humans in space. The developments triggered by the pursuit of that purpose have gone on to help society as a whole. What came out of the race was a set of separate technologies that, taken together, achieved the goal.

Creating intelligent machines is the goal of AI. And at the same time, it is the underlying science behind understanding what it takes to make a machine intelligent. AI represents our desired outcome. And many of the developments towards that concept, such as self-driving vehicles, image recognition technology or natural language processing and generation, are separate steps towards AGI. That is already where part of the confusion (of tongues) lies.

The current mismatch between what developers and researchers create, what customers buy, and what governments want has grown due to the hidden nature of the technology and the current siloed development. If we consider AI more from the perspective of a dream or a higher goal in our interdisciplinary discussions, this is the question we need to ask ourselves: which transformations and which actions are needed to build transparent and secure AI solutions in a world where AI is becoming more complex and is emerging in more and more domains?

A collaborative and multi-disciplinary setup

Today, there are a number of players worldwide who act on the principle of technocracy, placing all ethical decisions in the hands of engineers. Others hold a separatist view: they claim that technology is neutral, so any mistakes made by the systems they develop cannot be their responsibility. Still others opt for the “AI for Good” approach. Just think of the use of AI in the search for new medicines, which tends to be the flagship example. However, research is not immune to abuse either: researchers recently showed that AI models designed for therapeutic use can easily be repurposed to generate biochemical weapons.

I believe the only right way forward is a collaborative and multi-disciplinary setup during the design phase of our future AI systems. We need to explore - together - the benefits of machines that can think and act like humans. We need to ask ourselves which opportunities AI offers to dramatically increase efficiency, reduce costs, increase customer satisfaction, improve existing products and services, and create new business opportunities. Because ultimately, an organization is not about its technology; it’s about an overall mission and objective. Just like those organizations, AI is not defined by technology, but by its overall purpose.

It is clear that the design phase of these future AI solutions will look different in our knowledge economy. The Quintuple Helix innovation model could prove extremely useful here. It strengthens the multidisciplinary collaboration among the five main AI stakeholders: research teams, companies, governments, citizens and ... our planet.

This integrated approach focuses on the transition needed to address potential privacy concerns, ethical risks and opaque obligations, as well as the inefficient energy consumption of our current AI solutions.

Human-AI translators

However, the historical challenge in this setup is that all five parties involved speak a different language, hugely increasing the risk of a Babylonian confusion of tongues. Research speaks in formulas, business speaks in KPIs, citizens speak the language of their human rights, the government speaks the jargon of rules, and our planet speaks about carbon dioxide removal and material consumption.

So we need human-AI translators to demystify the current confusion. They are first and foremost generalists, with enough knowledge of each domain. On top of that, they must use their empathic and social skills to conduct the multi-disciplinary debate between all parties involved, to ensure that our AI systems are fully in line with the Sustainable Development Goals. And they need to do that at the start of the project, in the design phase. As we build our new and powerful AI “towers”, it is these AI translators who will ensure the stability of their construction.

WRITTEN BY
Mieke De Ketelaere