AI Regulation

What is AI Regulation?
Regulating AI means developing policies, laws, guidelines, and standards for the development and use of AI systems. Generally, the aim of governing AI is to ensure that AI systems are developed and used in ways that are fair, safe, and accountable. Regulatory frameworks address issues such as risk management, transparency obligations, and the prohibition of certain harmful AI practices, like the exploitation of vulnerabilities or social scoring. At its core, AI governance shapes incentives so that the risks private companies and other actors take are aligned with the safety needs of society as a whole.
AI regulation concerns all stakeholders, not just developers and users but everyone affected by AI systems, which is why perspectives on it vary so widely. Companies that develop AI usually argue for less regulation, since it lets them research and deploy models without restrictions. However, this can leave society underprepared for the economic and social changes AI brings, such as job loss or privacy erosion. Governments and individuals who advocate for regulation argue that it protects people's rights as new developments happen and pushes AI to be built in better alignment with our values: instead of cleaning up after harm has occurred, we prepare support systems for problems before they arise. The debate boils down to this: while regulation might slow innovation, it can also help us avoid harmful consequences and build systems to handle problems ahead of time.
While technological innovation is essential for progress and the betterment of humankind, it needs to be regulated so that it actually helps us instead of becoming our downfall. As a simple example, if the first large language model had been discussed and controlled before its release, the transition into a world with it would have been much smoother. People would not have lost their jobs without a safety net, because governments could have prepared for the disruption. Schools could have built policies and curricula around LLMs that support students' thinking instead of replacing it. Regulation, done right, makes our transition into new technology smoother and less harmful.
Unfortunately, not all regulation is good. For example, the CLOUD Act in the US is a federal law that allows US authorities, under certain conditions, to compel US-based technology companies to hand over data even when that data is stored outside US borders (2).
How Far Are We?
AI regulation is catching up to the rapid pace of AI development, but progress is uneven across the world. The EU has taken a leading role with the AI Act, which protects individuals living in the EU by safeguarding their rights to privacy and well-being. Even though much of the software we use is not developed in the EU, the AI Act still exerts soft power over foreign companies: they conform to its requirements so as not to exclude their EU user base (1).
The US does not have a comprehensive AI law like the EU AI Act, though some states have their own regulatory frameworks. The UK favors a more "pro-innovation" stance and relies on existing regulators to apply sector-specific AI rules. Many other countries are also drafting AI-specific regulations.
Why It's Important to Regulate
Simply put, good regulation is our best shot at making AI align with human values and at protecting civil liberties and human rights, such as privacy and free speech (1).
One reason to regulate AI is that the companies and creators behind these technologies often have profit, rather than the well-being of humankind, as their main incentive. Sometimes profit and the public good align, but not often. A comprehensive way to balance corporate interests with societal needs is therefore necessary, and this is usually what we trust our governments with.
Another important reason concerns the future of AI. Jump with me into a hypothetical (but realistic) future: the workforce is entirely automated, and every job is taken over by AI. There are various ways this can play out. In one version, companies own all of the AI systems; they would control all the wealth, leaving most people without income or resources. But with thoughtful regulation, such as universal basic income or other policies that ensure fair distribution of resources, we could create a society where everyone benefits from AI-driven productivity. Instead of a dystopia where a few control everything, we could have a utopia with more leisure, better work-life balance, and shared prosperity.
You can see why regulation matters for ensuring a fairer outcome here. If we hand control to companies whose goal is to maximize profit, we can't really say they are innovating for the benefit of humankind. Social media algorithms are an example: they are designed to monetize your attention and time rather than to provide a genuinely beneficial platform for connecting with others. Research links social media use to harm to users' mental well-being (1), yet we continue in the name of profit.
Summary
In summary, regulating AI is not about stifling innovation; it's about guiding it. With the right policies, we can embrace AI's potential while protecting our rights, well-being, and future.
Sources!
- Hendrycks, D. (2025). _Introduction to AI safety, ethics, and society_ (p. 562). Taylor & Francis.
- Wikipedia, CLOUD Act
- Wikipedia, Regulation of AI
- Wikipedia, AI Act
- AI Regulations Around the World
- Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97-112.
- e Silva, N. S. (2024). The Artificial Intelligence Act: critical overview. Available at SSRN.
- Finocchiaro, G. (2023). The regulation of artificial intelligence. AI & Society, 1-8.