Call for independent national market surveillance authorities under the AI Act

In an era where artificial intelligence (AI) is becoming deeply integrated into every aspect of our lives, ensuring its safe and ethical use is paramount. Recently, the Center for AI and Digital Policy Europe (CAIDP Europe), along with 33 other civil society groups, co-signed a crucial letter addressed to the European Commission, emphasising the need for independent national market surveillance authorities (NMSAs) under the EU AI Act. This joint initiative, spearheaded by The European Consumer Organisation (BEUC), underscores the collective concern of numerous stakeholders about the potential risks and ethical dilemmas posed by AI technologies.

The urgency of independent oversight

As AI systems become more complex and pervasive, the risks associated with their misuse or malfunction also grow. From biased algorithms affecting hiring processes to safety concerns around autonomous vehicles, the implications of unregulated AI can be far-reaching and severe. The letter, which CAIDP Europe proudly co-signed, highlights the necessity of independent oversight to mitigate these risks effectively.

CAIDP Europe and other civil society organisations (CSOs) are calling on the European Commission and Member States to fulfil their obligation to ensure the complete independence of NMSAs under the EU AI Act. Karine Caunes, CAIDP Europe's Executive Director, stated, "We welcome the Commission's and the Member States' efforts to set in place swiftly an AI governance framework. However, effective enforcement requires ensuring the complete independence of NMSAs in compliance with the EU AI Act."

Key arguments presented in the letter

The letter outlines several compelling arguments for establishing independent market surveillance authorities:

  1. Consumer Protection: AI technologies have the potential to impact consumer rights significantly, from privacy concerns to unfair business practices. Independent authorities would be crucial in protecting consumers from such abuses.

  2. Ethical AI Development: Ensuring that AI systems are developed and deployed ethically is a complex task. Independent surveillance can enforce standards that prioritise ethical considerations, such as transparency, fairness, and accountability.

  3. Mitigation of Bias and Discrimination: AI systems are not immune to biases, which can lead to discriminatory outcomes. Independent oversight can help identify and rectify such biases, promoting fairness and equality.

  4. Public Trust: For AI technologies to be widely accepted and trusted, the public needs assurance that these systems are safe and ethical. Independent authorities can build this trust by providing transparent and unbiased evaluations of AI systems.

  5. Legal Obligation: The complete independence of NMSAs is a legal obligation imposed by the EU AI Act and has been clarified by the European Court of Justice. Operational independence is not sufficient; true independence from both State and market actors is necessary to ensure the effective protection of fundamental rights.

A unified call to action

The joint letter, co-signed by various organisations, represents a unified call to action. It reflects a broad consensus across civil society organisations on the necessity of independent oversight in the AI landscape. This collective voice aims to persuade EU policymakers and Member States to designate genuinely independent market surveillance authorities in implementing the AI Act.

Additional concerns

CAIDP Europe is particularly concerned that some Member States are designating bodies that lack full independence from State authorities as NMSAs. Such practices undermine the enforcement of the AI Act and pose risks to the effective protection of fundamental rights. The European Union Agency for Fundamental Rights has highlighted that a lack of resources undermines effective enforcement, further complicating the role of NMSAs.

Moreover, designating data protection authorities or Digital Services Coordinators as NMSAs can lead to a negative spill-over effect on their independence and create asymmetry in the enforcement of the EU AI Act across Europe. The Commission's and Member States' failure to respect their obligations could lead to new cases being brought before the European Court of Justice.

Looking ahead

As we stand at the cusp of a new AI-driven era, the decisions we make today will shape the future. By advocating for independent national market surveillance authorities, we are not only addressing current concerns but also laying the groundwork for a safer, more ethical AI ecosystem. Our commitment to this cause reflects our dedication to safeguarding public interests and ensuring that technological advancements benefit society as a whole.

At CAIDP Europe, we urge other stakeholders, policymakers, and the public to support this initiative. Together, we can create a robust framework that fosters innovation while protecting the fundamental rights and values that underpin our society.

For more details, you can read the full letter here and CAIDP Europe's analysis conducted under our strategic priority "Consolidating a human centric governance of AI" available here.
