
Human-centric AI

The increasing use of AI by both public authorities and enterprises presents an opportunity to generate added value for society in almost every sector, including areas of particular societal importance such as health, education, transport, governance, security, and the environment. Alongside this potential, however, it is crucial to bear in mind that the use of AI entails significant challenges, posed by the growing ability of AI to make complex decisions and perform actions without human involvement or control. The use of AI can thus have significant consequences for fundamental rights, democracy and the rule of law, as well as for the social and economic equilibrium of society.

In light of this, one of the key principles of Estonia’s Digital Agenda 2030 is that Estonia’s digital government must be human-centric: the use of digital solutions is not an end in itself but a tool for increasing people’s well-being. To ensure that AI solutions operate in line with people’s best interests, the values of human dignity, justice, equal treatment, privacy and security must be continuously prioritised and promoted in the development and use of AI. It is equally important that society’s confidence in AI solutions is maintained and strengthened. Human-centricity and reliability are the key prerequisites for realising the social and economic benefits of AI and for ensuring responsible and sustainable innovation.

Characteristics of human-centric AI

To achieve human-centricity and reliability, AI solutions must be underpinned by the following characteristics in particular:

  • Respect for human dignity and autonomy: AI solutions must be developed and used in a manner that respects human dignity, including by ensuring that they do not undermine a person’s independence or right to self-determination. AI must not coerce or manipulate people into doing anything detrimental to their interests, and people must be properly informed and free to refuse to engage with AI systems. To achieve these goals, it is also important to maintain human supervision and control over AI: AI solutions must be developed and implemented in a way that allows humans to intervene in the system’s operation when necessary to prevent undesirable effects.

  • Equal treatment and justice: AI solutions must respect human diversity and must be free from unfair bias or discrimination based on gender, nationality, age or other characteristics. It is crucial to guarantee equal access to the benefits of AI for all individuals, including those with special needs. To this end, relevant stakeholders must be involved in the development of AI solutions and their needs taken into consideration.

  • Privacy and personal data protection: the development and use of AI must ensure a high level of protection of personal privacy, including personal data, throughout the entire lifecycle of the system. This poses a significant challenge, as AI solutions often rely on vast amounts of personal data as input for training the system or making decisions. Moreover, the advanced analytical capabilities of AI allow it to draw far-reaching conclusions from individuals’ behavioural patterns and other data, including about their health, sexual orientation, physical characteristics, political views and other sensitive matters. Drawing such conclusions may constitute an extensive interference with a person’s private sphere or create a risk of unlawful discrimination.

  • Reliability and safety: AI systems must operate accurately, securely, safely and reliably to prevent harm to humans or the environment. Accuracy means that the decisions made by the AI are sufficiently correct and that errors are minimised, considering the purpose of the AI and the impact of its decisions. In addition, AI solutions must be protected against potential misuse, (cyber)attacks and other vulnerabilities. Unintended adverse effects must also be prevented, which requires the AI to be robust and resilient even when it is used in an unintended or undesired manner or exposed to unexpected circumstances.

  • Transparency: the core operating principles, purposes and impacts of an AI solution must be clearly understandable and verifiable. Where a decision made by AI has a significant impact on a person, that decision should be adequately explained to the person concerned. Such transparency fosters trust, improves the quality and accuracy of the AI solution, and helps to identify risks and shortcomings so that harm to people and the environment can be prevented or minimised. In addition to documenting and auditing the working processes of AI solutions, transparency also requires providing people with the knowledge and tools they need to adequately understand and interact with the AI system.

  • Responsibility: the developers, providers and users of AI are responsible for the lawful, ethical and proper use of these systems, as well as for the consequences of their use. To this end, clear rules must be established, including rules outlining the specific obligations of each party, and it must be ensured that effective legal remedies are available should potential risks materialise and damage occur.

  • Social and environmental well-being: the development and use of AI systems must take into consideration their impact on society and the environment as a whole, and, where appropriate, measures must be taken to mitigate any negative effects. Moreover, the state should promote the development and use of AI solutions that contribute to resolving global problems, such as climate and environmental issues, or that advance the interests of people and society, such as justice, democracy, sustainability and overall quality of life.
