Reconciliation Bill Would Nullify State Laws Protecting Students from the Risks of AI
Although the Trump administration purports to support returning control over education to the states, a provision buried in the reconciliation bill advancing through Congress would do exactly the opposite: it would impose a sweeping ten-year moratorium on nearly all state laws regulating artificial intelligence (AI)—including those governing AI use by colleges and universities. If passed, the measure would override existing state laws and strip away state power to develop new policies to address the risks posed by fast-evolving AI technology. And, because there are no comprehensive federal laws regulating such uses of AI, the result would be a dangerous regulatory vacuum.
The provision prohibits state and local governments from enforcing any law “limiting, restricting, or otherwise regulating” AI systems used in interstate commerce. The only exceptions are state laws that facilitate AI use. The measure would invalidate a wide range of current and future AI-related regulations, including those designed to protect patients, job seekers, students, consumers, and voters.
Creating a Wild West for AI Companies
Republicans are pushing the AI-regulation moratorium through the budget reconciliation process, which allows passage in the Senate with a simple majority. While the House version includes a blanket preemption of state regulation of AI, the Senate version ties the moratorium to state eligibility for funds allocated to support states’ expansion of broadband access.
Proponents claim that the ban on state regulation of AI will facilitate AI development by preventing a patchwork of differing state regulations, thereby ensuring U.S. competitiveness with China. But that argument only holds if a robust federal regulatory framework exists to replace state regulation, and in reality no comprehensive federal regulation of AI exists. Without federal regulation, the state AI-law ban is an enormous giveaway to the corporations that stand to benefit from unregulated use of AI, and it leaves students and taxpayers at serious risk.
As AI technologies develop, colleges and faculty are exploring how to benefit from new tools, such as AI systems that provide academic advising, tutor students, grade exams, or translate course materials. For example, Michigan State University recently launched a pilot program that gives a group of students access to AI tutoring. Students are increasingly using AI tools in the classroom, and many colleges are considering how best to incorporate them into course curricula. However, as with any new, transformative technology, AI advances come with risks that require thoughtful and timely responses. States can be nimble in responding to these risks, providing needed guardrails that fill the gap left by the absence of federal regulation. States can also serve as testing grounds for new regulatory approaches, and the successes and failures of state AI rules can provide valuable lessons for developing federal policies.
Eliminating Current Protections and Exposing Students to Risks
The reconciliation bill provision would wipe out all existing and future state oversight of AI, leading critics to call the move one of the “largest federal takeovers of state power in U.S. history.” In the education context, the measure would block states from protecting students from an array of AI-related risks. For instance, the ban would prevent states from enacting regulations to increase transparency or to address discrimination arising from the use of AI decision-making tools in college admissions and financial aid determinations. It would also prevent states from limiting the extent to which automated AI tools can replace human instruction and interaction.
Federal law invests states with significant oversight authority over colleges and universities, tasking states with enforcing state consumer protection laws and helping ensure the value of investment in higher education. The moratorium would hamper states’ abilities to carry out this oversight role.
AI tools are already being used in many colleges’ admissions processes. For example, some admissions officers report using AI tools to review transcripts or recommendation letters, while others report using AI to communicate with applicants. These tools can increase efficiency and may even help reduce bias in the admissions process, but their use in admissions or financial aid determinations also raises concerns that the tools themselves could be tainted by biases related to sex, race, or other characteristics. The University of Texas at Austin, for example, stopped using an AI tool that ranked applicants to its computer science Ph.D. program by their likelihood of admission after the tool’s existence became public and drew criticism. Because the algorithm was trained on the profiles of previously admitted applicants, critics argued, it risked replicating past admissions biases against underrepresented groups. The tool, which reportedly searched recommendation letters for keywords such as “best,” was also criticized for making it harder for underrepresented applicants to stand out, since it reduced the weight given to the substance of recommendation letters. Notably, the tool had been in use for years before university employees brought it to public attention. Without state regulation of AI tools, colleges are not required to disclose such uses at all, shielding them from public scrutiny.
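The dynamic critics described can be illustrated with a toy sketch. The snippet below is hypothetical: it is not the UT Austin tool, and its data, features, and scoring logic are invented for illustration. It simply shows how a ranking model “trained” on past admissions decisions ends up rewarding applicants who resemble previous admits, including through keyword patterns like the one described above.

```python
# Hypothetical illustration of how an admissions-ranking model trained on past
# decisions can reproduce historical bias. This is NOT the actual UT Austin
# tool; the data, features, and scoring logic are invented for this sketch.

from collections import Counter

# Toy "historical" record: recommendation-letter text for past applicants and
# whether each was admitted. Whatever patterns past committees rewarded become
# the training signal.
history = [
    ("best student I have taught in years", True),
    ("best candidate truly exceptional", True),
    ("hardworking and resourceful despite limited opportunities", False),
    ("strong researcher with an unconventional background", False),
]

# "Train" by counting which words appear more often in admitted applicants'
# letters, a crude stand-in for fitting keyword weights.
admit_words, reject_words = Counter(), Counter()
for text, admitted in history:
    (admit_words if admitted else reject_words).update(text.split())

def score(letter: str) -> int:
    """Rank a new applicant by how much their letter resembles past admits."""
    return sum(admit_words[w] - reject_words[w] for w in letter.split())

# A letter echoing past admits ("best") outranks one written in different
# language, even if both describe equally strong candidates: the historical
# preference is simply replicated.
print(score("best applicant this year"))         # positive score
print(score("resourceful and unconventional"))   # negative score
```

The point is not the specific arithmetic but the structure: without disclosure or bias-assessment requirements of the kind state laws can impose, neither applicants nor the public would know that a scoring rule like this is in use.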
The reconciliation bill provision banning state regulation of AI would nullify existing law in Colorado and proposed legislation in New York and Washington that impose disclosure requirements and require assessments of the potential for bias in AI tools used in high-stakes decision-making, including college admissions. In the absence of federal regulation of AI use in college admissions or financial aid decisions, it is critically important that states retain the authority to regulate such uses of AI to ensure transparency and fairness.
The reconciliation bill’s moratorium would also preempt state laws that regulate AI in order to ensure educational quality. In Illinois, for example, legislation that recently passed both houses of the legislature and awaits the governor’s signature would prohibit community colleges from offering courses taught solely by AI. The measure permits community college professors to supplement instruction with AI but bars schools from delivering instruction solely through AI, protecting students and Illinois taxpayers from investing in community college programs that are essentially AI-augmented online textbooks. The measure aligns with a federal rule requiring “regular and substantive” instructor interaction in online programs receiving federal aid, a rule that has been weakly enforced and is unlikely to be prioritized by the current administration, making state protections like the Illinois measure all the more important.
Research shows that student interactions with instructors play a role in student outcomes. For instance, students in synchronous online courses—where instructors and students interact in real time—have higher retention rates than students in asynchronous, pre-recorded classes. Nullifying state laws that impose limitations on AI instruction would impede states’ ability to protect students and taxpayers from investing in low-quality online programs.
Looking Ahead
A bipartisan group of more than 200 state lawmakers has sent a letter to congressional leaders urging them to reject the moratorium on state regulation of AI. The state lawmakers argue that the ban undermines the role of states as “laboratories of democracy” and stifles their ability to respond to evolving technological harms. In the absence of federal action to address AI-related risks, they warn, “barring all state and local AI laws until Congress acts threatens to setback policymaking and undermine existing enforcement on these issues.”
The reconciliation bill’s ten-year ban on state AI regulation would severely weaken states’ ability to safeguard students, at a time when AI’s applications in education are booming, yet poorly understood. Lawmakers in the Senate should strip this provision from the bill. States have stepped in to fill the regulatory void and provide common-sense protections for students. Congress should not stand in their way.
Tags: higher education, reconciliation, AI