EU AI Act passed in Europe – strong protection against misuse (safety)

The EU AI Act is the world’s first comprehensive law to regulate the use of AI. It is expected to form the basis of similar legislation worldwide – helping ensure a Terminator-style Skynet never happens.

Why do we need an EU AI Act?

AI, the game-changing technology of the 21st century, has the potential to outstrip the impact of the internet in the 20th century. Its reach and influence are genuinely epoch-making. To understand more about this revolutionary technology, read What is AI (Artificial Intelligence), how will it affect me?

However, despite its immense potential, AI has a darker side. It can be trained on incomplete or biased data sets, make decisions devoid of human perception, reasoning, or compassion, and may one day surpass human intelligence. These are critical issues that demand our attention.

The EU AI Act is crucial in this context. Its primary function is to instil trust in this new and powerful tool while providing a framework to assess and address the inherent risks – such as threats to human rights – associated with this rapidly advancing technology.

It is part of a broader AI innovation package and Coordinated Plan on AI. While some opposed the legislation, many AI proponents have welcomed it because it provides some certainty around development guidelines. It also paves the way for Europe to become a computing superpower, fostering AI factories and AI startups.

What is the intent of the rules?

While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.

For example, it is often almost impossible to determine why an AI system made a particular decision, and therefore whether someone has been unfairly disadvantaged, such as in a hiring decision or an application for a public benefit scheme.

The proposed rules will:

  • address risks specifically created by AI applications.
  • prohibit AI practices that pose unacceptable risks.
  • determine a list of high-risk applications.
  • set clear requirements for AI systems for high-risk applications.
  • define specific obligations for deployers and providers of high-risk AI applications.
  • require a conformity assessment before a given AI system is delivered or placed on the market.
  • put enforcement in place after an AI system is placed on the market.
  • establish a governance structure at European and national level.

What is the risk?

The EU has identified several high-risk areas.

  • critical infrastructures (e.g., transport) that could put citizens’ lives and health at risk.
  • educational or vocational training that may determine someone’s access to education and professional course of life (e.g., exam scoring).
  • safety components of products (e.g. AI application in robot-assisted surgery).
  • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures).
  • essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan).
  • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence).
  • migration, asylum, and border control management (e.g. automated examination of visa applications).
  • administration of justice and democratic processes (e.g. AI solutions to search for court rulings).

All remote biometric identification systems are high-risk and subject to strict requirements. In principle, the use of remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited.

Skynet is a very real possibility without legislation.

Brought to you by CyberShack.com.au