Under the spotlight: Balancing AI ambition with robust control
- Aiyana Lacota
- Jun 20
- 4 min read

Artificial intelligence is being championed on two contrasting fronts. On one side, “AI for Good” embodies a global, hopeful vision, leveraging AI to accelerate progress on the Sustainable Development Goals (SDGs). On the other, “AI for Control” urges firm regulation, containment, and human oversight to mitigate emerging risks.
Hamburg declaration: AI aligned with SDGs
At the Hamburg Sustainability Conference (HSC), held in Hamburg, Germany, on 2–3 June 2025, approximately 1,600 participants from governments, international organisations, business, academia, and civil society convened to advance the 2030 Agenda through new global alliances and concrete initiatives. A key outcome was the adoption of the Hamburg Declaration on Responsible AI for the SDGs, a multi-stakeholder effort developed over 12 months by over 90 organisations and opened for public comment. The Declaration is structured around five pillars: People, Planet, Prosperity, Peace, and Partnerships. It commits signatories to ethical AI, transparency, inclusion, environmental alignment, and equitable participation.
The event was co‑hosted by Germany's Federal Ministry for Economic Cooperation and Development (BMZ), UNDP, the Michael Otto Foundation, and the City of Hamburg; the organisers framed HSC as a "delivery platform" for the UN 2030 Agenda, not merely a talk shop.
A forum of global leadership
High‑profile keynotes anchored the dialogue. On 2 June, Federal Minister Reem Alabali‑Radovan highlighted AI’s promise for inclusive development and urged multi‑sector collaboration. Germany’s Vice‑Chancellor and Finance Minister Lars Klingbeil emphasised that amid geopolitical fragmentation, multilateralism is essential to tackle global challenges. The event also featured insights from speakers such as Rumman Chowdhury, CEO of Humane Intelligence, addressing responsible AI; and David Craig on biodiversity reporting within sustainable business contexts.
Agenda and outreach beyond AI
HSC 2025 featured over 60 sessions spanning financial architecture, sustainable urbanisation, digitalisation, climate resilience, and equitable economic systems. It kicked off Hamburg Sustainability Week (1–6 June), integrating events for public engagement and business showcases. A notable side‑event, organised by BMZ and UNDP on 2 June, invited global stakeholders to shape the Hamburg Declaration draft, ensuring inclusive participation, especially from the Global South.
Significance and next steps
The Declaration signals a shift from voluntary ideals to tangible action. It pledges commitments to energy‑efficient AI, AI education for girls, inclusive capacity‑building in the Global South, and transparency in model usage. Nonetheless, its framework remains voluntary and open-ended; implementation hinges on each signatory's follow‑through and on how these commitments are woven into global and regional policies.
The control imperative: Safety‑first thinking
Global AI voices are increasingly focused on safety-first frameworks.
Yoshua Bengio’s initiative LawZero, backed with USD 30 million from the Future of Life Institute and Schmidt Sciences, aims to build a “Scientist AI” capable of detecting and blocking unsafe behaviours. It represents a pivot from mere intelligence to AI safety by design.
Gaia Marcus of the Ada Lovelace Institute calls for strong regulatory frameworks, pointing to widespread public concern. She highlights that 88% of respondents support post-deployment safety mechanisms. The message is clear: people want legal certainty to complement innovation.
Mustafa Suleyman, co-founder of DeepMind, frames containment as a mix of technical, social, and legal constraints. Without these, he warns, the unchecked advance of AI could undermine democratic governance and societal stability.
Technical foundations: Alignment, control and explainability
Researchers distinguish between "control" (enforcing how an AI acts) and "alignment" (ensuring its values match human intentions). Both are essential.
In critical areas like healthcare and governance, explainable AI (XAI) becomes vital. Trust and accountability depend on systems that can justify their decisions. Without explainability, AI risks becoming a “black box” in contexts where transparency is non-negotiable.
The concept of meaningful human control has emerged as a practical standard. Scholars advocate for four conditions: clear task definition, shared representation of tasks, identifiable human authority, and traceable AI decisions. These principles are designed to prevent gaps in responsibility when things go wrong.
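To make the four conditions concrete, here is a minimal illustrative sketch (not taken from the scholarship cited above; all field names and the loan-triage scenario are hypothetical) of how an AI decision could be recorded so that each condition is auditable:

```python
# Illustrative sketch: the four conditions for "meaningful human control"
# expressed as a minimal audit record. Scenario and names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ControlledDecision:
    """One AI decision, annotated so responsibility gaps can be audited."""
    task_definition: str       # 1. clear task definition
    shared_representation: str # 2. task representation shared by human and system
    human_authority: str       # 3. identifiable person accountable for the outcome
    rationale: str             # 4. traceable explanation of the AI's output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_traceable(self) -> bool:
        # A decision is auditable only if all four conditions are recorded.
        return all([self.task_definition, self.shared_representation,
                    self.human_authority, self.rationale])


decision = ControlledDecision(
    task_definition="triage incoming loan applications",
    shared_representation="risk score 0-100 shown to the reviewing officer",
    human_authority="credit officer J. Doe",
    rationale="score 82: income ratio and repayment history dominate",
)
print(decision.is_traceable())  # True
```

The point of the sketch is that a missing entry for any one condition (say, an empty `human_authority`) makes the decision fail the audit check, which is exactly the "gap in responsibility" the standard is designed to prevent.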
Reconciling optimism with caution
AI for Good and AI for Control represent diverging yet complementary visions.
The former stresses international collaboration, voluntary commitments, and innovation-led pathways to accelerate SDGs, from climate action to equitable education. The latter focuses on rules, enforceability, and a pre-emptive approach to prevent AI-induced harm.
This divergence reveals a deeper truth: ambition needs boundaries. Without robust governance, even the most inspiring declarations risk enabling unintended consequences.
Bridging commitments and enforceability
The Hamburg Declaration is a promising milestone. But its long-term success depends on embedding its principles into measurable outcomes and enforceable policies. Projects like LawZero show the technical feasibility of self-regulating AI systems. Yet the challenge remains in scaling such innovations from pilot stage to global application. The alignment-control-explainability triad offers a powerful multidisciplinary blueprint. However, translating this framework into concrete policy is an ongoing challenge for governments, institutions and industry leaders alike.
The twin imperatives of “AI for Good” and “AI for Control” are not mutually exclusive. In fact, they are interdependent. One provides the vision to harness AI for global betterment; the other supplies the checks and balances needed to make that vision viable and safe.
For international initiatives like the Hamburg Declaration to have real impact, voluntary pledges must evolve into regulatory and operational frameworks grounded in ethics, engineering, and global governance.
More information: https://www.sustainability-conference.org/en/