In the five years since the COVID-19 pandemic crashed their unemployment insurance (UI) systems, states have searched for ways to upgrade their UI technologies to provide fast, effective service at scale. At a time of shrinking federal funding, states are increasingly turning to artificial intelligence (AI) to accelerate their ability to process claims, communicate with the public, and flag fraud—and a recent panel of experts convened by The Century Foundation urged states to roll out AI carefully, with proper safeguards in place.

Deployed thoughtfully, AI is best understood as a power tool—something that enhances productivity and accuracy without replacing human workers, much like a power screwdriver helps but does not replace a construction worker. But even more than other powerful new tools, AI requires careful monitoring to avoid harming the workers and families served by unemployment insurance and the public servants who operate the program. As AI is deployed in state UI systems, the decisions we make now will determine whether it strengthens the safety net or leaves more people behind.

Event Overview

To explore how AI is shaping the future of UI, The Century Foundation hosted a June 10 panel, “Can AI Improve America’s Unemployment Safety Net?” focused on how states are integrating AI into their unemployment systems as they seek lasting improvements in how benefits are delivered and how people are treated throughout the process. With the Trump administration’s AI Action Plan, issued in July, seeking to accelerate the use of AI in government benefits delivery, the panel provided critical insights into the risks of bias and unchecked automation, alongside the promise of flexibility and efficiency that AI offers.

Moderator Michele Evermore, a former deputy director for policy at the U.S. Department of Labor’s Office of UI Modernization and current senior fellow at the National Academy of Social Insurance, introduced the panelists by highlighting their diverse experiences across federal, state, and nonprofit sectors. She welcomed Michael Burke, director of the UI Compensation Bureau at the New Hampshire Department of Employment Security; Julia Dale, chief executive officer at Civilla and former director of the Michigan UI Agency; Amy Perez, policy fellow at the Stanford University RegLab and former staff member at the Colorado UI agency and the U.S. Department of Labor; and Nikki Zeichner, a former technology modernization advisor at the U.S. Department of Health and Human Services who also served in the Office of UI Modernization at the U.S. Department of Labor. Over 100 UI stakeholders, including state agency staff, technologists, and advocates, participated in the webinar. Below is a summary of the key points from the panel discussion as the system enters this critical period of experimenting with and deploying AI.

Everything Can Look Like a Nail When You Have a Hammer Like AI

Amy Perez, policy fellow at the Stanford University RegLab, opened the discussion by restating a bedrock principle of technology modernization: states should begin by identifying the problem they are seeking to solve, rather than with any one software tool. The key is matching AI and other automation technology to real pain points in the system, especially those that emerge during claim spikes. For example, a well-designed adjudication assistant might generate a full timeline of events, flag conflicting information, cite the relevant law, and link to case documents. As states turn to AI for a wide variety of use cases—such as answering questions from claimants completing an application or seeking information, supporting fact finding from claimants and employers, or assisting staff with writing appeals decisions and other correspondence—keeping the ultimate goal in mind should drive procurement and deployment decisions.
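To make the adjudication-assistant example concrete, here is a minimal sketch of what such a tool’s structured output and conflict-flagging step might look like. All names and fields are hypothetical illustrations, not New Hampshire’s or any vendor’s actual design:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TimelineEvent:
    when: date
    source: str       # "claimant", "employer", or "agency record"
    statement: str    # what this party reported

@dataclass
class CaseBrief:
    claim_id: str
    timeline: list[TimelineEvent] = field(default_factory=list)
    conflicts: list[str] = field(default_factory=list)  # discrepancies flagged for the adjudicator
    citations: list[str] = field(default_factory=list)  # relevant statutes and regulations
    documents: list[str] = field(default_factory=list)  # links to case documents

def flag_conflicts(brief: CaseBrief) -> None:
    """Flag dates where different parties report different facts."""
    by_date: dict[date, list[TimelineEvent]] = {}
    for event in brief.timeline:
        by_date.setdefault(event.when, []).append(event)
    for when, events in sorted(by_date.items()):
        statements = {e.statement for e in events}
        if len(events) > 1 and len(statements) > 1:
            brief.conflicts.append(
                f"{when}: parties disagree ({'; '.join(sorted(statements))})"
            )
```

The point of a structure like this is that the assistant assembles and organizes the record while the adjudicator still makes the determination.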

Perez also emphasized that success demands ongoing evaluation. She stressed the need for risk assessment, noting that when benefits are at stake, states should apply a higher standard of accuracy. A chatbot might serve more people than a call center can, but its accuracy rate must be weighed carefully against that added reach. States can also start with simpler applications, such as robotic process automation or internal training tools, to ease into AI without high risk. Testing should continue after launch, and states cannot fully outsource it to vendors. Rather, tools should be built to track their own performance and help agencies target evaluation efforts. Finally, Perez advocated for independent third-party reviews and suggested that academic institutions could provide free or low-cost support.
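One way to read Perez’s point that tools should track their own performance: every automated answer gets logged, and a random audit sample plus every low-confidence answer is routed to a human reviewer. The sketch below is a hypothetical illustration, not any state’s actual system, and the thresholds are placeholders an agency would tune:

```python
import csv
import random
from datetime import datetime, timezone

REVIEW_SAMPLE_RATE = 0.05   # randomly audit 5% of all answers
CONFIDENCE_FLOOR = 0.80     # answers below this always get human review

def log_and_route(question: str, answer: str, confidence: float,
                  log_path: str = "chatbot_audit_log.csv") -> bool:
    """Append the interaction to an audit log; return True if it needs human review."""
    needs_review = confidence < CONFIDENCE_FLOOR or random.random() < REVIEW_SAMPLE_RATE
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            question,
            answer,
            f"{confidence:.2f}",
            needs_review,
        ])
    return needs_review
```

A log like this gives the agency, or an independent third-party reviewer, a concrete evidence trail for targeting evaluation efforts.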

The Role of AI in New Hampshire’s UI System

Michael Burke, director of the Unemployment Insurance Compensation Bureau at the New Hampshire Department of Employment Security, echoed Perez’s themes as he detailed the state’s ambitious plans to roll out eight distinct AI use cases—each designed to enhance, not replace, staff. New Hampshire plans to use AI to assist with communicating program information to claimants, as well as for performance-enhancing applications such as robotic process automation for compiling appeal records. One key project in testing is an adjudication assistant that collects more accurate information at the start of a claim, allowing the department to process cases more quickly and effectively.

AI Policymaking Is Shifting from Washington to the States

Nikki Zeichner, a former technology modernization advisor at the U.S. Department of Health and Human Services and at the Department of Labor’s Office of UI Modernization, noted that, during the Biden administration, AI governance emphasized protecting the public from harm through Office of Management and Budget (OMB) guidance, especially on algorithmic bias and responsible deployment. That approach provided national leadership centered on risk mitigation and equity as touchstones for AI development. But with the change in administration, the focus has shifted toward competitiveness in the global AI marketplace, raising a core question about what counts as successful deployment in the public sector: safety and inclusion, or speed and innovation?

Despite the seesawing federal views, states are forging ahead. Many are now drawing from shared frameworks like the NIST AI Risk Management Framework, which emphasizes human review, impact assessments, notice and appeal, and continuous monitoring. As Zeichner put it, “There’s a ton of movement… we’re all kind of in this together trying to navigate.”

“It Should Always Be Humans First and Machine Second”

Julia Dale, chief executive officer at Civilla, emphasized that AI should never overshadow the human experience, highlighting that benefit systems must be designed to uphold dignity, trust, and transparency. If a benefits technology system makes a claimant feel unseen or disrespected, it fails at its core purpose. Dale stressed that every AI tool must be evaluated early in the design process for its racial, geographic, ability-based, and linguistic impact. This requires applying human-centered design practices, involving not just administrators but also community voices in shaping how tools are developed, tested, and deployed, covering everything from the look and feel of AI tools to hard analyses of differences in model outputs by claimant characteristics during testing. AI should not reinforce or exacerbate existing racial and gender disparities in UI receipt.

Safe Building Principles

Panelists suggested ways states can put safety and privacy first. Perez explained that the systems adopted by states are closed models: they are trained only on the policies, data, and regulations that each state explicitly selects, are built within secure environments, often protected by firewalls, and do not send information back to outside providers. New Hampshire went further, disabling any further machine learning once a model enters production and goes live. This ensures predictability and keeps the tool from becoming a “black box” that changes its outputs in ways that could be out of step with agency policy. Additionally, AI models need to be updated regularly in step with policy shifts, with agencies relying on statistically sound sampling to ensure quality.
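“Statistically sound sampling” can be as simple as a standard sample-size calculation for a human quality audit. A minimal sketch follows, with the function name and the weekly-audit scenario as illustrative assumptions:

```python
import math

def audit_sample_size(population: int, margin: float = 0.05,
                      z: float = 1.96, p: float = 0.5) -> int:
    """Cases to review by hand to estimate an error rate within +/- margin
    at roughly 95% confidence, with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n)

# Example: auditing a week of 20,000 AI-assisted determinations
print(audit_sample_size(20000))  # 377 cases for a +/-5% estimate
```

The takeaway is that a defensible audit is small relative to the caseload, which is what makes ongoing human quality review feasible even at scale.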

Considering AI’s Role in Fraud Detection

Dale recalled the drastic results that cost thousands of Michigan residents their unemployment insurance benefits when the state automated fraud detection through its MIDAS system in the mid-2010s. Burke recommended using AI to help identify potentially fraudulent cases for further investigation and to run historical data through predictive models to surface overlooked fraud trends. Perez and Zeichner built on this theme, noting that AI is especially well suited to managing massive amounts of data and identifying patterns such as repeated occupations or reused passwords across claims. However, acute risks remain if AI is used to fully automate fraud determinations, a practice states have largely avoided since the Michigan experience and the litigation that followed.
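As a concrete illustration of the pattern-finding role the panelists endorsed, here is a minimal sketch that clusters claims sharing a credential and refers large clusters to investigators. The field names are hypothetical, and crucially, nothing in this step denies a claim automatically; it only prioritizes cases for human review:

```python
from collections import defaultdict

CLUSTER_THRESHOLD = 5  # refer any credential shared across this many claims

def flag_shared_credentials(claims: list[dict]) -> dict[str, list[str]]:
    """Group claims by their stored password hash and return large clusters
    for human investigators to review."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for claim in claims:
        clusters[claim["password_hash"]].append(claim["claim_id"])
    return {h: ids for h, ids in clusters.items() if len(ids) >= CLUSTER_THRESHOLD}
```

The same grouping logic extends to the other shared attributes the panelists mentioned, such as repeated occupations across otherwise unrelated claims.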

Looking Forward on AI and Unemployment Insurance

While states are experimenting with AI now, the real test will come the next time claims surge and states seek to deploy AI to optimize processes at scale. Perez added that, even under pressure, states must resist the urge to pull back from oversight by moving quality assurance and monitoring teams into frontline crisis work. Cutting corners risks widespread errors.

As states continue to explore the role of AI in unemployment systems, panelists agreed that the goal should not be to automate for speed alone. Instead, the focus should be on building public systems that are more equitable, transparent, and centered on the needs of people. This does not mean imposing overly rigid rules on AI development; rather, policies should emphasize the importance of consent, accuracy, and human review. In the case of UI, this also means modernizing guidance to distinguish which tasks can be handled through automation and which require human judgment. Without these safeguards, states risk eroding public trust in the very systems they seek to improve.

The takeaway message is that leaders should not replicate existing barriers or systemic biases within AI tools, but should instead imagine new ways the government can serve people fairly. This will require policymakers to remain close to the development process and engage directly with those building and deploying AI tools. The path forward calls for public systems built with care, transparency, and the courage to put people first.