AI Regulation, A Story Told 1000 Times Before
- Imran Rassiwalla
- Aug 4, 2025
- 3 min read
The year is 2024. The United States Senate passes the Kids Online Safety Act in a 91-3 vote. It was the chamber's first major effort in decades to protect kids on the internet, and it came only after decades of research, some of it sampling 10,000+ people, showed a direct correlation between cyberbullying and suicidal ideation, and after no fewer than half a dozen high-profile suicides linked to cyberbullying, the earliest of them in 2006.

The year is 1986. Congress passes the Electronic Communications Privacy Act after two decades in which satellites had proliferated with no restriction on tampering with their signals. Congress acted only after the HBO signal had been hijacked.

The year is 1934. Congress passes the Communications Act of 1934 to regulate radio frequencies and prevent unlicensed broadcasters from “wave-jumping” onto another station’s frequency, something that had been occurring for the previous two decades.

These three cases, and hundreds more, show a clear pattern in legislation: the government follows the technology, and you’re lucky if it follows closely enough to pass a bill after the first high-profile incident, as it did in 1986. Sometimes it takes seven or more, as it did in 2024.
The year is 2025. Over the past year, many states have attempted to regulate the increasingly dominant issue of artificial intelligence. Some have succeeded; others have failed. Arizona, California, Louisiana, New Jersey, and Virginia are among the states whose governors have most recently vetoed AI regulation. It is tempting to file AI alongside all these other technologies: something that will inevitably spread unchecked until someone, or many people, finally gets hurt. But that would be overly reductive, as would calling AI a simple partisan issue, since Louisiana and Virginia have Republican governors while Arizona, California, and New Jersey have Democratic ones. Yet in all five states the bills passed the legislature itself, so AI regulation sits on an odd middle ground, with supporters and detractors of regulation on both sides of the aisle.
The reasoning behind these vetoes broadly falls into three camps. First, in his veto statement, Virginia Governor Glenn Youngkin argued that existing laws already suffice, noting that “[t]here are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.” Second, there is the fear that AI regulation risks violating free speech protections, as Louisiana Governor Jeff Landry argued when he vetoed a bill criminalizing AI deepfakes. Finally, there is concern that the inherent vagueness of these laws will stifle innovation, a concern with some merit: when AI regulation was considered in Connecticut, hospitals warned that its vagueness could limit their ability to use AI for medical innovation.
In contrast, support for AI regulation is largely driven by fear of what's to come. AI deepfakes and abuse of ChatGPT on campuses are already prevalent, and leading AI policy analyst Dean W. Ball believes that by the end of 2026 AI could develop “qualitatively transformative capabilities,” at which point it may be too late to regulate. States seem aware of this concern: despite the many high-profile vetoes, there is clearly an appetite for regulation, with 900+ bills already proposed across the states this year alone.
It seems the breakneck pace of AI advancement has motivated legislators in a way rarely seen with previous new technologies, and that the primary concern dividing supporters from detractors is the difficulty of fitting AI regulation into a constitutional and balanced legal framework. Whether states can strike this balance, or will veer too far in one direction or the other, and what they might learn from AI regulation in other nations, only time will tell.