“There is an alternative to image recognition that is a lot more respectful of privacy”
On the occasion of our upcoming Kickstart Innovation Bootcamp in October 2021, we had the chance to catch up with Mieke De Ketelaere, Program Director AI at imec and one of the top speakers at our program. What follows is a conversation about AI and the dire need for transparency and communication (between AI systems, between business leaders and tech experts, and between companies and their customers about data use), as well as a well-grounded realism that cuts through the often overhyped and misunderstood AI trends.
Education should focus on the translation between business and technology
Though Mieke is incredibly passionate about the evolutions in and capabilities of AI, she is also a realist and very clear about what it can and cannot do. “Technology used to be the realm of technologists. But ever since business leaders understood the power of data - which is obviously a good thing - we have also often seen projects go wrong, because those leaders frequently don’t understand how AI works. That’s why we urgently need ‘translators’ between both sides, and why we need education systems that train people in exactly that.”
When asked about the most common misconceptions, she explains that business leaders often mistake AI for a decision system, while it really is an insight and prediction system that always carries a margin of error, however small. “That misunderstanding can have big repercussions, because once companies implement an AI system, they stop monitoring it. Then the context changes and the program starts to deliver the wrong insights, because AI is very bad at interpreting context. Algorithms are never finished, even if they are created by the best data scientists. You need to update them, as well as the data they deliver insights on, in a continuous manner. For instance, a police system might say that a certain high-crime city area is dangerous and that people living there have a higher propensity for crime. Then gentrification kicks in, young people and artists move in, and the area suddenly has a completely different demographic. But the police system will still regard this as a high-crime region, because it has not been updated.”
AI is as biased as the data you feed it
“AI is only as smart or dumb as the data you feed it. That’s why, for instance, I’m not a huge fan of using AI systems in HR to screen CVs. It’s a very subjective environment, and if you base the recommendations on the humans who used to do the screening before the AI was brought in, the system will end up reproducing the same bias as these fallible and possibly prejudiced humans had. But AI can do other fantastic things for HR. Not in the hiring context, but more in the social health context, with respect for privacy. You can, for example, install a radar system in your offices or manufacturing halls to detect the stress level of people, in a way that is much less intrusive than with cameras, and then act upon these insights in ways that are beneficial for the health of your employees.”
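The bias-replication effect Mieke describes can be made concrete with a toy sketch. The data, names and numbers below are entirely invented for illustration: a naive screening "model" trained on past human hiring decisions simply bakes in whatever preference those screeners had.

```python
# Hypothetical illustration: a model trained on biased historical
# screening decisions reproduces that bias. All data is invented.
from collections import defaultdict

# Past decisions as (school, hired) pairs. The old screeners favored
# candidates from "School A" regardless of actual merit.
history = ([("School A", True)] * 80 + [("School A", False)] * 20
           + [("School B", True)] * 30 + [("School B", False)] * 70)

# "Train" a naive model: record hire counts per feature value.
rates = defaultdict(lambda: [0, 0])  # school -> [hires, total]
for school, hired in history:
    rates[school][0] += hired
    rates[school][1] += 1

def screen(school):
    """Recommend hiring if the historical hire rate exceeds 50%."""
    hires, total = rates[school]
    return hires / total > 0.5

print(screen("School A"))  # True  - the old preference is baked in
print(screen("School B"))  # False - equally qualified candidates rejected
```

Nothing in the data says School B candidates are worse; the model only ever saw the screeners' past preferences, so that is all it can reproduce.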
Another thing that AI is still very bad at is communicating and collaborating, because of the lack of industry standards. “Top companies like Tesla can obviously develop very smart systems, but those systems work in isolation. Let me give you an example. There was an accident between a Tesla and a robot, because the Tesla was never trained to understand that a robot could cross the road, and the robot was not trained to recognize an autonomous car. Even though both were fully equipped with 5G, image recognition, radar and so on, they just didn’t communicate, unable to put themselves in ‘the state of mind’ of the other. When people come to a crossing, they look each other in the eye, flash their lights or give some form of indication about who goes first. If a BMW meets a Tesla, they just don't understand each other, because the systems are not yet ready to reason for themselves.”
The same CO2 emissions as a flight from New York to San Francisco
One of the biggest challenges in the AI industry is the poor energy efficiency of sophisticated systems like image recognition. “Training a big image recognition system on an average data set releases the same amount of CO2 as a flight from New York to San Francisco. The complete setup has to be reviewed to become more energy efficient - much like how our brain works - without losing any accuracy. The good news is that we are working on that, both on the hardware side and the software side. One way to do this in software is transfer learning: training the system on knowledge that another system has already gathered in a similar context. It works a bit like human learning: when we know how to play tennis, for instance, we can transfer that knowledge and learn to play squash a lot faster. The result is that the amount of energy you need to train your algorithm to the same accuracy is much lower. Another way - in the domain of Spiking Neural Networks - is to copy how human sight works. Current image recognition systems calculate every pixel, while in truth 99% of them don't change. So we are now looking into systems that only process the pixels where a change is perceived, just like the human eye works.”
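The event-based idea at the end of that quote can be sketched in a few lines. This is a minimal illustration with invented frame sizes and thresholds, not how a real spiking system is built: instead of recomputing all pixels every frame, we find only the pixels whose value changed beyond a threshold and hand just those on for processing.

```python
# Minimal sketch of event-based frame processing: only pixels that
# changed between two frames are selected for further work.
# Frame size, threshold and the "moving object" are invented examples.
import numpy as np

def changed_pixels(prev, curr, threshold=10):
    """Return the coordinates of pixels that changed beyond the threshold."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return np.argwhere(diff > threshold)

prev = np.zeros((120, 160), dtype=np.uint8)  # previous frame: static scene
curr = prev.copy()
curr[40:42, 50:53] = 255                     # a small object appears: 6 pixels change

events = changed_pixels(prev, curr)
print(len(events), "of", prev.size, "pixels changed")  # 6 of 19200 pixels changed
```

Downstream processing then touches only those 6 coordinates instead of all 19,200 pixels, which is where the energy saving in the quote comes from.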
The non-invasive power of radar
One of the evolutions that Mieke is really enthusiastic about is the combination of radar technology with AI, and a lot of that has to do with intrusiveness and privacy. “Few people know that radar is often a fantastic alternative to image recognition. The latter has always been a lot more costly, intrusive and privacy-invasive. But a radar is just as able to tell you how many people are in a room or what they are doing, and in a more affordable and privacy-preserving way.” A great case, according to Mieke, is the use of radars in hospitals to check the movements of patients in their rooms and the patterns they form, without the need to install an intrusive camera in the (bath)room. That same radar technology could then also be used by the patient to switch on the television, switch off the lights and so on. “And finally, radars can also monitor their respiration and detect their heartbeat in a contactless way, which is perfect for these pandemic times. This domain of preventive monitoring and health has a huge potential.”
“A radar system could also allow my parents to stay at home longer, because I could get updates about their stress and anxiety, or see if they have fallen or haven’t moved for a long time, in a way that is obviously a lot more privacy-preserving than image recognition. The seats of the conductors in Chinese fast trains, too, are equipped with this type of radar tech to make sure that they stay focused at all times. Image and voice recognition get all the hype and visibility, while radar is so often the better alternative. Using radar will bring us another bonus: it will take us from devices with a GUI (graphical user interface) to a TUI (touchscreen user interface) to a NUI (natural user interface), using gesture recognition, motion controls and AI (as seen in the newer generations of mobile phones).”
The individual has to take responsibility for his or her data
When it comes to ethics, Mieke is a big believer in a mix of transparency and customer empowerment. “The line between inspiration and irritation is very thin. While very targeted digital personalization can feel like a blessing to me, that same approach greatly annoys my husband. The perception of privacy is actually very subjective, both in the offline and in the online world: I may have an open garden, while my neighbour puts up very high walls because they want more privacy. In the offline world this need is transparent to everybody. In the online world, most people have not been given the tools to protect their privacy. That is why I would like companies to offer more transparency in the matter. GDPR is a good start, but it is still too difficult for most people to understand, and its actual application is a hassle with a lousy user experience. Whatever the case, people have to know what happens with their data, know that they are being nudged in a certain way, and be able to choose whether this happens or not. Companies need to take the responsibility to be very open and transparent about that. But in the end, it’s the individual consumer who has to make the decision.”