The case for binding AI regulation: Why ambition alone is no longer enough
- Ira Grange

- Jun 16
- 4 min read

Artificial intelligence (AI) has emerged as one of the most powerful forces shaping our time. From helping doctors diagnose disease to improving crop yields, AI is already touching almost every part of human life. Supporters say it will help solve some of the world’s toughest problems, from education to climate change. This vision, often captured in the term “AI for Good”, is hopeful, ambitious, and widely shared.
But another perspective is gaining ground: that without proper rules and oversight, AI could cause serious harm. Experts and citizens alike are now asking: how can we make sure AI is used responsibly and safely?
The recent Hamburg Declaration on Responsible AI, signed during the Hamburg Sustainability Conference in June 2025, was an important step in this direction. More than 40 governments and organisations promised to make AI support the Sustainable Development Goals (SDGs), such as reducing inequality, protecting the planet, and improving access to education. But as important as this declaration is, there is a problem: it is not legally binding.
Current AI rules: A patchwork, not a system
Across the world, some countries have started to create rules about AI. But these rules are not consistent, and in many places they are still being developed.
In the European Union, the AI Act became law in August 2024. It is the world’s first major law focused solely on AI. It divides AI systems into four risk categories:
· Unacceptable-risk AI (like systems used for social scoring) is banned.
· High-risk AI (like systems used in hiring or public services) must follow strict rules, including testing, transparency, and human oversight.
· Limited-risk AI (like chatbots) faces lighter transparency requirements.
· Minimal-risk AI is left largely unregulated.
The EU also requires developers to explain how their AI systems work and to train the people who operate them. Every EU country must also set up at least one “regulatory sandbox”, a safe environment where companies can test their AI, by August 2026.
In the United States, there is no single national law for AI. Instead, individual states have passed their own rules. For example:
· Colorado requires companies to test their AI for bias.
· New Hampshire and Tennessee have passed laws to fight “deepfakes” (fake images or videos generated by AI).
· Maryland requires AI transparency in government.
A federal executive order on AI was issued under President Biden, but it has since been rescinded.
The United Kingdom prefers a “light-touch” approach. Instead of making one big law, it lets different sectors (like health or transport) manage AI with help from expert regulators. The UK has also set up an AI Safety Institute to test powerful AI models.
Other countries, like China, India, and Australia, also have AI guidelines or national strategies. China has gone further, requiring companies to label AI-generated content and curb misinformation.
In 2024, the Council of Europe’s AI Convention, negotiated by more than 50 countries, opened for signature. It commits signatories to protecting human rights and democracy in the age of AI. But each country must ratify the treaty and write it into national law before it can be enforced.
Why current laws are not enough
Despite these developments, today’s rules are not keeping up with how fast AI is changing. Here’s why:
· Fragmentation: The world has a patchwork of laws, some strict, some vague, some still being written. This leads to confusion and unfair advantages.
· Lack of enforcement: Many guidelines sound good on paper but are not backed by law. There are no serious penalties when rules are broken.
· Limited focus: Most laws target narrow slices of AI, like chatbots or deepfakes, while ignoring broader systemic risks. As AI systems become more powerful and more autonomous, these gaps grow more dangerous.
· Speed of innovation: AI is advancing faster than governments can regulate it. By the time laws are written, the technology has already moved on.
What we really need: Global, enforceable rules for AI
To make AI work for everyone, we need a smarter, more united approach. A future-ready AI framework should include:
1. Clear and binding laws: Rules need to have legal power, not just be suggestions. Companies must know what they can and cannot do.
2. Technical knowledge: Lawmakers must work with engineers and scientists to understand how AI actually works. Regulations must include things like testing for bias, making systems explainable, and adding safety measures from the start.
3. Global cooperation: AI doesn’t stop at borders. A global system, perhaps similar to how we manage climate change, can help avoid gaps and promote fairness.
4. Flexible governance: Laws must be able to change and adapt as AI grows. This means having regular reviews, public feedback, and smart risk monitoring.
5. Human involvement: People affected by AI decisions, especially those in vulnerable communities, must have a voice in how it is regulated.
Regulation does not mean stopping progress
Some worry that too many rules will slow down innovation. But regulation is not the enemy of progress. In fact, it is what makes progress safe and fair.
Take the example of LawZero, a project led by AI pioneer Yoshua Bengio. It is building a “Scientist AI” designed to block proposed actions when it cannot verify that they are safe. This is a welcome step, but even this project needs to be tested, verified, and monitored under public law, not just private promises.
Mustafa Suleyman, co-founder of DeepMind, calls for “containment”: placing legal, technical, and social boundaries around powerful AI so that it does not slip beyond human control. This is not fearmongering. It is simply good governance.
It’s time to turn talk into action
The Hamburg Declaration shows the world is finally taking AI seriously. But without strong, enforceable rules, even the best promises won’t be enough.
Governments, companies, and citizens must work together to move from aspiration to enforcement, from guidelines to laws, from good intentions to real accountability.
If we truly want AI to be a tool for good, then it must also be a tool governed by law.



