Artificial Intelligence: the 5 most common misconceptions
The field of artificial intelligence (AI) is an exciting and rapidly evolving one that is already having a significant impact on various aspects of our everyday (working) lives and will transform them even more in the future. Despite this (or perhaps because of it), a number of misconceptions about AI remain widespread. Dr. Sebastian Schönnenbeck shares five of them that he most frequently encounters in his day-to-day work as a consultant.
Misconception 1: AI is the same as robotics
While AI and robotics are related, they are not the same thing. AI refers to computer programs and algorithms designed to mimic certain aspects of human intelligence. Robots and androids, in contrast, are physical machines that perform (often repetitive) tasks. And although many robots are equipped with AI, not all AI systems are embedded in robots. On the contrary, the overwhelming majority of existing AI systems do not control a physical machine at all but operate purely digitally.
Another term that comes up in this context and contributes to misconceptions is Robotic Process Automation (RPA). Although “robotic” features in the name, it does not refer to a physical robot but to a software technology that automates processes such as transferring data from one system to another. Here too, no AI components are involved by default, although RPA can be augmented with them.
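To make the distinction concrete, here is a minimal sketch of the kind of rule-based data transfer RPA typically performs (the function name, file paths, and fields are illustrative assumptions, not taken from the original). The point is that it follows fixed rules from start to finish and contains no learning component:

```python
import csv
import json

def transfer_records(csv_path, json_path):
    """Rule-based transfer: read rows from a CSV export of one system
    and write them as JSON for another system. No AI involved - the
    logic is entirely fixed in advance."""
    with open(csv_path, newline="") as src:
        records = list(csv.DictReader(src))
    with open(json_path, "w") as dst:
        json.dump(records, dst, indent=2)
    return len(records)
```

An AI component could be layered on top of such a pipeline, for example to classify incoming records, but the automation itself works without one.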
Misconception 2: AI makes purely objective decisions
Since a computer in itself has no feelings or prejudices, the impression often arises that a decision made by an artificial intelligence is entirely objective and unprejudiced. However, modern AI systems are usually trained with machine learning methods, in which the system is presented with a large number of past decisions (along with their outcomes) so that it can learn to make such decisions itself in the future. In other words, all the biases that underpinned those past decisions are subsequently reflected in the AI. Particular care is therefore required for AI systems whose decisions have a direct impact on people.
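The mechanism can be illustrated with a deliberately simplified, hypothetical sketch (the groups and decisions below are invented for illustration): a trivial "model" that memorizes the most frequent historical decision per applicant group will faithfully reproduce any bias present in that history.

```python
from collections import defaultdict

def train(decisions):
    """'Train' a trivial model: for each applicant group, memorize the
    most frequent historical decision. This mimics how machine learning
    picks up patterns - including unwanted ones - from past data."""
    counts = defaultdict(lambda: {"approve": 0, "reject": 0})
    for group, outcome in decisions:
        counts[group][outcome] += 1
    return {g: max(c, key=c.get) for g, c in counts.items()}

# Hypothetical history in which group B was systematically disadvantaged
history = ([("A", "approve")] * 8 + [("A", "reject")] * 2
           + [("B", "approve")] * 2 + [("B", "reject")] * 8)
model = train(history)
```

Real systems are far more complex, but the principle is the same: the model here recommends rejection for group B simply because that is what the historical data shows, not because of any objective assessment.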
The guidelines formulated by the EU for trustworthy AI provide assistance in this regard. They set out seven core requirements that should be considered when using AI, including transparency, non-discrimination, and privacy and data governance.
Misconception 3: AI will always perform tasks better than a human being
There has been tremendous progress in the field of artificial intelligence in recent years. There are a number of tasks that AI systems can now perform at the level of human experts – usually at high speed and without getting tired. That said, there are still plenty of tasks where even the most advanced AI systems fall well short of humans. This is especially the case when diverse information from multiple sources needs to be aggregated or when there is a need to interpret human behavior. But that doesn’t mean AI is useless in these areas. Instead, it often makes sense to use AI systems to assist human experts and provide additional information.
Misconception 4: AI is a purely self-learning system
The promise of modern AI approaches is that it is no longer necessary to explicitly program how a problem is to be solved. Instead, the AI learns this itself to the greatest possible extent. This is not fundamentally wrong. However, it only works if the AI is provided with a sufficient number of examples of the problem together with the corresponding solutions, and if the problem is presented in a form the AI system can handle. Moreover, once trained, an AI system will only learn from new cases encountered in operational use if they are regularly fed back into retraining together with the correct solutions. In the absence of such a process, an AI system cannot adapt to new situations on its own.
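The point can be sketched with a minimal, hypothetical model (the class and method names are illustrative assumptions): its predictions stay frozen after training, no matter how often it is used, and only an explicit retraining step with labeled examples changes its behavior.

```python
class RetrainableModel:
    """Minimal sketch: a 'model' that predicts the mean of its training
    targets. It only changes when retrain() is called with labeled
    examples - it does not adapt on its own while making predictions."""

    def __init__(self):
        self.targets = []

    def retrain(self, labeled_examples):
        # labeled_examples: iterable of (input, correct_output) pairs
        self.targets.extend(y for _, y in labeled_examples)

    def predict(self, x):
        if not self.targets:
            raise ValueError("model has not been trained yet")
        # Prediction reads the learned state but never modifies it
        return sum(self.targets) / len(self.targets)
```

However many times `predict` is called, the output stays the same until new labeled cases are fed in via `retrain`, which is exactly the feedback loop the paragraph above describes.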
It is also important to bear in mind that AI systems need more than just training. Their integration into other systems and processes, as well as the operation and maintenance of the resulting overall system, must not be neglected either. One way to proceed more efficiently here and take the burden off IT departments is to adopt Machine Learning Operations (MLOps).
Misconception 5: Any problem can be solved completely with AI
The impression is often created (not least by the marketing promises of some companies in the AI field) that any problem can be completely solved through the use of AI. In reality, however, there are practically no processes in most companies that can be covered from start to finish by an AI system. Rather, AI serves as a precision tool that makes individual process steps better, faster, or more efficient and thereby, together with a sound concept and classic digitization approaches, contributes to an optimized overall process.
To put this into practice, it is advisable to adopt a strategic approach that considers not only specific problems but also the overall process and its upstream and downstream steps. In other words, a digitization strategy that takes into account the entire company, its employees (think change management), the IT landscape, and the existing data.
Hopefully, these pointers will give you some guidance as you consider the potential of AI in your organization. If you have any further questions, please do not hesitate to contact Dr. Sebastian Schönnenbeck and his colleagues: you can get in touch here.