Digital ethics: What does it mean?

Digital ethics, also called AI ethics when focusing on AI, is the buzzword of the day. But what is covered by this term? Why is it so popular right now? That is what we will reveal here. We will also explain the special role that digital ethics plays in the field of artificial intelligence.

Digital ethics is a relevant topic for companies today: from a compliance perspective alone, it is necessary to ensure that processes and decisions are traceable when using artificial intelligence. Examples include potential discrimination against applicants who are unintentionally disadvantaged by AI-based selection processes, or automated fraud detection that gives more or less consideration to certain groups of people. Companies risk severe sanctions in both cases. Even for processes that are not critical from a regulatory perspective, however, companies want to have certainty about what is decided and initiated by AI. How else will vulnerabilities be discovered and rectified? In short, we need trustworthy AI. In digitization projects, Comma Soft’s consultants help their customers to find this kind of secure solution by putting the principles of digital ethics into practice. The following is an explanation of what this means in specific terms.

Digital ethics – a definition

Digital ethics is a branch of the philosophical discipline of information ethics. While information ethics generally takes a critical view of the use of information and information-processing technologies, digital ethics focuses specifically on the moral limits of digitization. The main themes are the relationship between man and machine and the standards of a society shaped by digital technology. Regarding artificial intelligence (AI), digital ethics asks, for example, to what extent decisions made by AI are traceable and to what extent humans can and should trust these decisions.

Genesis of the theory

The term digital ethics was first used in 2009 by Rafael Capurro, a philosopher and professor of information science who founded the International Center for Information Ethics (ICIE) and served, among other roles, as a member of the EU Commission’s European Group on Ethics in Science and New Technologies (EGE). Since then, discussions have been held in numerous economic and political contexts under the heading of digital ethics on how the digital transformation will affect, for example, the world of work, data protection and individual autonomy. Artificial intelligence and robotics play an increasingly important role in this context. In the German-speaking world, the topic has been driven forward since 2005 by the Petersberger Gespräche, which offer an interdisciplinary forum for discussion on digital topics such as artificial intelligence.

The political relevance of digital ethics

Digital ethics is an important concept in the debate over what constitutes socially and morally acceptable digitization, and it has recently been discussed at the highest political level, especially where artificial intelligence is concerned. The EU started the ball rolling back in 2019 with its Ethics Guidelines for Trustworthy AI, and the OECD followed the same year with its AI Principles. It is therefore not surprising that industry and business are also embracing digital ethics and aligning their actions accordingly: on September 29, 2021, for example, the EU-US Trade and Technology Council (TTC) agreed on what constitutes trustworthy AI, following the OECD’s recommendations.

Trustworthy AI

In the context of artificial intelligence, digital ethics addresses specific issues such as whether AI can be trusted, how it makes decisions and what those decisions are based on. According to the EU guidelines, trustworthy AI is characterized by three elements. It should:

  • Comply with all applicable laws and regulations.
  • Adhere to ethical principles and values.
  • Be robust, both from a technical and a social perspective, since even well-intentioned AI systems can cause unintentional harm.

Each of these three elements is necessary, but none of them is sufficient on its own to achieve trustworthy AI. Ideally, all three will work together harmoniously and overlap in how they function. An integrated approach to AI strategy and implementation makes this possible. The core requirements formulated by the EU are also helpful here.

Graphic derived from the presentation of the European Commission’s Ethics Guidelines for Trustworthy AI.

Seven core requirements for trustworthy AI

In its Ethics Guidelines for Trustworthy AI, the EU has set out seven core requirements that should be taken into account when using AI. In summary, they include the following:

1. Priority of human agency and oversight

AI systems should support human decision-making and serve people – not the other way around. This will be possible if people can continue to act autonomously and decide for themselves when, how and whether AI is used. AI systems should also be conducive to democracy and fundamental rights and thereby support a just society.
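In practice, human oversight can be as simple as a confidence gate. The following minimal sketch is purely illustrative – the threshold, labels and routing rules are assumptions, not part of the EU guidelines – and shows an AI that acts autonomously only when it is sufficiently sure, escalating everything else to a person:

```python
# Hypothetical confidence gate: the AI acts on its own only when it is
# sufficiently confident; all other cases go to a human reviewer.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # e.g. "approve" or "reject"
    confidence: float  # model probability for the predicted label


def route(decision: Decision, threshold: float = 0.9) -> str:
    """Return who is allowed to act on this decision."""
    if decision.confidence >= threshold:
        return "automated"     # AI may proceed on its own
    return "human_review"      # a person decides, preserving oversight


print(route(Decision("approve", 0.97)))  # -> automated
print(route(Decision("reject", 0.62)))   # -> human_review
```

A gate like this keeps the final say with people wherever the system is unsure, which is one concrete way of ensuring that AI supports human decision-making rather than replacing it.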

2. Technical robustness and safety

For AI systems to behave reliably and as intended, they must be robust, precise and reproducible. This is also a prerequisite for making them resilient to external influences such as cyberattacks and for protecting the underlying data, models and hardware and software infrastructure. The overriding principle is the prevention of harm to living beings and the environment.
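Reproducibility, one of the properties named above, begins with controlling randomness. The following is a minimal sketch – the seed value and project convention are assumptions – of how the sources of randomness in a Python-based system can be pinned so that a run can be repeated exactly on the same library versions and hardware:

```python
# Pin all sources of randomness so that stochastic steps (sampling,
# shuffling, weight initialization) yield the same result on every run.
import random

import numpy as np

SEED = 42  # hypothetical project-wide constant

random.seed(SEED)
rng = np.random.default_rng(SEED)

# Identical output on every execution makes behaviour testable and auditable.
print(rng.normal(size=3))
```

Only when results are reproducible can deviations – whether caused by bugs or by attacks – be detected reliably.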

3. Privacy and data governance

Protecting data and privacy is one of the top priorities for AI systems. An AI that fulfills this requirement not only regulates who has access to which data, but also ensures that the quality and integrity of the data to be processed are precisely governed and logged. Moreover, only appropriately qualified personnel have access to the AI systems and their data.
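One concrete reading of “governed and logged” is an audit trail around every data access. The following sketch is a simplified assumption of what such a mechanism could look like; the role names, dataset name and log format are invented for illustration:

```python
# Hypothetical audit wrapper: training data may only be read by qualified
# roles, and every attempt (granted or denied) is written to a log.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("data-access")

AUTHORIZED_ROLES = {"data_steward", "ml_engineer"}  # assumed role model


def read_training_data(user: str, role: str) -> list:
    if role not in AUTHORIZED_ROLES:
        log.warning("DENIED user=%s role=%s dataset=applicants", user, role)
        raise PermissionError(f"role '{role}' may not access training data")
    log.info("GRANTED user=%s role=%s dataset=applicants", user, role)
    return ["...records..."]  # placeholder for the actual data source


read_training_data("alice", "ml_engineer")  # granted and logged
```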

4. Transparency

The transparency of an AI system is closely linked to its explainability: the data and processes that contribute to a decision should be documented and as traceable as possible so that every decision made by the system can be understood. In addition, users must be able to recognize that they are interacting with an AI – a chatbot, for example – and have the choice between AI and human interaction.
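The documentation requirement can be made tangible by writing a decision record for every individual prediction. The field names and versioning scheme below are assumptions; the point is that enough context is stored for each decision to be traced later:

```python
# Hypothetical decision record: stores the inputs, output, confidence and
# model version for a single prediction so it can be explained afterwards.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_version: str  # which model produced the decision
    inputs: dict        # the features the decision was based on
    output: str         # the decision itself
    confidence: float   # the model's probability for that output
    timestamp: str


record = DecisionRecord(
    model_version="fraud-detector-1.4.2",  # assumed versioning scheme
    inputs={"amount": 1250.0, "country": "DE"},
    output="flag_for_review",
    confidence=0.83,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))  # would go to an audit store
```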

5. Diversity, non-discrimination, and fairness

Since AI systems are meant to serve people and society as a whole, they must not exclude anyone. Access should be barrier-free, and unfair bias – based, for example, on age, gender or ability – should be prevented. Appropriate steering and control processes, as well as consultation and participation of a wide range of stakeholders, are therefore advisable when implementing AI projects.
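A first quantitative check for unfair bias is to compare outcome rates across groups, known as the demographic parity difference. The following minimal sketch uses fabricated example data; a real analysis would of course use the actual decisions and protected attributes:

```python
# Compare the positive-decision rate per group; a large gap is a signal
# to investigate further, not a verdict in itself.
import numpy as np

# Fabricated data: 1 = positive decision (e.g. invited to an interview)
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {str(g): float(decisions[groups == g].mean())
         for g in np.unique(groups)}
gap = round(max(rates.values()) - min(rates.values()), 2)

print("positive rate per group:", rates)      # {'a': 0.6, 'b': 0.4}
print("demographic parity difference:", gap)  # 0.2
```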

6. Societal and environmental wellbeing

Throughout its lifecycle, a trustworthy AI system takes wider society and the environment into account. Sustainability and environmental responsibility should therefore be promoted, and the social impact of the AI system on society and democracy should be adequately addressed.

7. Accountability

Accountability entails clear responsibility for the individual processes and parts of the AI system. Among other things, this means that algorithms, data and decision-making procedures can be verified, for example by internal or external auditors. Impact assessments and provisions for adequate legal protection are advisable for high-risk AI systems.

Explainable AI

Tools for visualizing AI results and making them reviewable are needed so that AI does not remain a grey zone. This is what the buzzword “explainable AI” covers. Firstly, the AI model must be clearly recorded and described. Secondly, it must be possible to map the individual results clearly onto the structure of the model. Common questions here include:

  • Which parameters of the model are crucial in making the decision?
  • What is the probability of the decision?
  • For images in particular: which parts of the input are relevant for the decision?

Suitable technical tools can thus be used to better understand the AI’s decisions and react accordingly.
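To make this concrete, here is a minimal sketch of how the first two questions can be answered with standard scikit-learn utilities. The model, data and feature names are fabricated for illustration and are not a recommendation for a specific tool: permutation importance addresses which inputs the model relies on, and predict_proba exposes the confidence behind an individual decision.

```python
# Minimal explainability sketch on synthetic data (all names hypothetical).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["years_experience", "test_score",
                 "num_certificates", "commute_km"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Q1: Which parameters are crucial? Permutation importance measures how
# much the score drops when one feature is shuffled, i.e. how much the
# model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:20s} importance: {imp:.3f}")

# Q2: What is the probability of the decision? predict_proba exposes the
# model's confidence for a single case.
case = X_test[:1]
print("decision:", model.predict(case)[0],
      "probability:", round(float(model.predict_proba(case)[0].max()), 3))
```

For the third question – which parts of an image were relevant – saliency methods such as Grad-CAM are commonly used; they highlight the input regions that contributed most to the decision.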

However, explaining AI decision-making requires not only technical tools, but also human resource development. Only when employees are trained in the use of the AI system can they recognize and question less-than-ideal decisions made by it.

Putting digital ethics into practice

All of these requirements and guidelines are complex. What’s more, they keep changing as the social discourse evolves. Companies that have the ambition to implement digital ethics often cannot do so on their own due to resource constraints. Here, it is advisable to seek advice from AI and digitization experts who can provide targeted support in strategic positioning and technological implementation. If you have any questions concerning this matter, please feel free to contact us.