The future of AI depends on explainability and collaboration, says Anita Schjøll Abildgaard, CEO and Co-Founder of Iris.ai.
As the European Union moves forward with landmark regulations on artificial intelligence (AI), we have reached a critical juncture. The proposed AI Act aims to establish Europe as a global leader in ethical AI development, promoting transparency, fairness, and accountability.
However, laws can only achieve so much. Truly realising the vision of trustworthy AI will require open collaboration between policymakers, companies, developers, and domain experts across civil society. Neither top-down legislation nor self-regulation in isolation can ensure socially responsible innovation. What is needed is a constructive public-private partnership to align policies with technical realities while reflecting the concerns of citizens.
The importance of explainable AI
A key principle underlying the AI Act is the requirement for high-risk AI systems to be transparent and explainable. This means that companies must be able to explain how their AI systems make decisions. At first glance, this may seem overly burdensome to developers. But explainability is not about revealing proprietary algorithms or restricting innovation. Rather, it allows us to build trust by demonstrating that AI behaves reliably and as intended.
Explainable AI also promotes accountability. When we understand how an algorithm arrives at a decision, we can more easily audit for biases and errors. This protects individuals and groups from potential discrimination by AI systems. With the rapid integration of AI into critical domains like healthcare and finance, explainability is an ethical imperative. We owe it to those impacted by AI to validate that it acts fairly.
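In practice, explainability can be as simple as exposing how much each input pushed a decision one way or the other. The following is a minimal sketch, not drawn from the article: the loan-approval scenario, feature names, and weights are all invented for illustration, and real high-risk systems would need far richer explanation methods.

```python
import math

# Toy "loan approval" model whose decision decomposes into
# per-feature contributions. All names and weights are invented
# purely to illustrate the idea of an auditable decision.
WEIGHTS = {"income": 1.2, "debt_ratio": -2.0, "years_employed": 0.6}
BIAS = -0.5

def predict_with_explanation(features):
    """Return the approval probability together with each
    feature's additive contribution to the decision score."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))
    return probability, contributions

applicant = {"income": 1.0, "debt_ratio": 0.4, "years_employed": 2.0}
prob, why = predict_with_explanation(applicant)
# `why` records how much each feature raised or lowered the score,
# the kind of audit trail a regulator or affected person can inspect.
```

An auditor reviewing `why` could immediately see, for example, that a high debt ratio lowered the score, which is exactly the sort of check that makes bias and error auditing tractable.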
The call for public-private cooperation
Constructively shaping the future of AI requires transparent collaboration between policymakers and industry experts. Regulations are only effective if grounded in technical realities. At the same time, we must ensure policies reflect the values and concerns of society as a whole.
Ongoing forums that facilitate open dialogue between stakeholders can help bridge disconnects between regulation and practice. For instance, the EU’s High-Level Expert Group on AI has convened companies, academics, and civil society groups to inform policy development. While this group is a step in the right direction, more should be done. Wider forums involving more experts and representatives from across industries are needed to ensure good governance. Broad cooperation allows us to harmonise public and private interests when moulding the AI landscape.
The AI Act introduces mandatory risk management systems for high-risk applications. However, ethical AI development demands proactive dedication across the entire industry. We must work diligently to align emerging technologies with our values.
Integrating ethics into design processes will allow us to steer AI’s trajectory responsibly. Constructing detailed frameworks that translate principles into practice can guide engineers in building morally sound systems. We can draw inspiration from decades of work in computer ethics and involve ethicists directly in development teams.
Responsible innovation is not about stifling progress but rather developing AI that enhances human dignity. Thoughtful oversight today will allow us to actualise AI’s immense potential through wise policy.
The transformative potential of AI
AI has already generated remarkable advances across domains, from medical diagnostics to research to renewable energy. Managed equitably, AI can drive broad-based prosperity. But the ethical application of these powerful technologies remains imperative.
With a shared commitment to transparency, collaboration, and integrity, Europe’s AI Act can become a model for balancing innovation and ethics. But beyond regulation, we need collective diligence to develop AI that embodies human values. If guided by wisdom, AI promises a more just and vibrant future for all. By working together, we can build AI that serves society responsibly.
The proposal of landmark AI regulations presents what could be a pivotal moment in shaping the character of emerging technologies. It is crucial, however, that the regulation is not rushed but implemented in a considered manner, so as not to disincentivise talent from entering the industry or hinder the open-access ecosystem. Realising the vision of ethical and beneficial AI requires cooperation among all stakeholders.
Both effective policy and conscientious development efforts are needed to steer AI’s progress responsibly. With openness, compassion, and moral courage, we can build an AI-enabled world that uplifts human dignity. But this future depends on working together today.