The years leading up to the pandemic were full of policy changes for English learners (ELs) and the educators who serve them. During the Obama administration, many states changed their math and literacy standards and tests, which prompted subsequent adjustments to the English language proficiency (ELP) assessments used to measure ELs’ linguistic development. Then, the 2015 passage of the Every Student Succeeds Act (ESSA) required states to design new plans for how they would measure and weight ELs’ progress. These plans were approved early in the Trump administration and began to be implemented over the ensuing years. Many of them focused on using the aforementioned new ELP assessments to calculate ELs’ growth in learning English over time. This meant that states would need at least one year of baseline ELP data before they could begin to calculate ELP growth (and set the student goals against which schools could be held accountable).

In other words, when the pandemic shuttered schools in March 2020, many school systems were just settling into something like a new status quo for measuring ELs’ progress. The pandemic interrupted the administration of ELP assessments in some communities, which risks resetting the clock on states’ and schools’ preparations for fully implementing these systems. These disruptions further compound the pandemic’s significant harms to ELs, and they carry the additional threat of allowing schools to backslide into old, inequitable patterns in how they treat these students.

Experimenting with New Accountability Systems for English Learners

Since the arrival of No Child Left Behind (NCLB) in 2002, states have been required to adopt and administer annual tests designed to measure ELs’ English proficiency across four domains: speaking, listening, writing, and reading. In a country that has historically failed to provide ELs with the resources and support they deserve, these assessments provided a measure of transparency and accountability, incentivizing schools to prioritize these children’s linguistic development. Absent standardized procedures and screeners for identifying ELs and tracking their progress, American schools are prone to overlooking these students.

In 2015, the Every Student Succeeds Act replaced NCLB and maintained the basic structure of this system: schools were required to identify ELs, annually measure their language development, and “reclassify” them as “former ELs” once they reached English proficiency. And while NCLB had kept a separate, district-level accountability system specific to ELs, the new law rolled ELs’ progress into a single school-level accountability system, and it gave states significant leeway in designing their own versions of that system.

States responded to their new responsibilities to historically marginalized students, including ELs, in a variety of ways. Many used their new flexibility to design “growth-to-target” systems that could address longstanding critiques of NCLB’s largely uniform approach. Specifically, these systems set long-term goals for EL students but allow some flexibility in each year’s interim goals. Given that ELs frequently progress rapidly through the first several ELP levels but generally take longer to reach the higher ones, this model could be useful for establishing effective and valid measures of ELP progress. WIDA, a consortium of states using a common ELP assessment, explains the rationale this way: “English language development occurs over multiple years, is variable and depends on many factors including age, maturation, classroom experiences, programming, motivation and attitude, making it difficult to establish fixed language expectations for any grade level or age.”

For example, Washington, D.C. set the expectation that all ELs in D.C. schools would reach full English proficiency within five years. Its ESSA accountability system uses the first year of scores on the District’s recently adopted ELP assessment (the WIDA ACCESS 2.0) to set interim goals for individual EL students. Critically, students who score higher when they first take the test get fewer years to reach proficiency. In addition, the approach allows yearly growth goals to adjust as an EL progresses through the test’s 1.0–6.0 scale.

Here’s a hypothetical student example outlined in D.C.’s plan:

Table 1: Washington, D.C.’s English Proficiency Growth-to-Target Model

| ACCESS Year | Level Achieved | Growth Target | Actual Growth | Result |
|---|---|---|---|---|
| #1 | 2.0 | N/A | N/A | Baseline set; student has four more years to reach level 5.0 |
| #2 | 4.0 | 0.8 | 2.0 | Exceeded target; next year’s growth target will be lower |
| #3 | 4.3 | 0.3 | 0.3 | Met target; next year’s growth target will be similar |
| #4 | 4.4 | 0.3 | 0.1 | Missed target; next year’s growth target will be higher |
| #5 | 5.0 | 0.6 | 0.6 | Met target; proficient |

Source: D.C. ESSA Plan, pp. 16–17, https://osse.dc.gov/sites/default/files/dc/sites/osse/page_content/attachments/OSSE%20ESSA%20State%20Plan_%20August%2028_Clean.pdf#page=16

Here’s what this means: A child who scores a 2.0 on her first try at the ELP test would have four more years to pass it by scoring a 5.0. Since she has three levels left to go (5 − 2 = 3) and four years to get there, her goal for the next year is to score 0.8 higher on the test (3 levels ÷ 4 years = 0.75 levels per year, rounded up to 0.8). But if she scores a 4.0 in her second year (that is, two levels of growth), her annual growth goal would drop for the third year. Since she has three more chances to pass the test and just one level left to go, her updated goal would be just 0.3 levels of growth for the next year (1 level ÷ 3 years ≈ 0.33, rounded down). D.C. schools are then measured by how well they succeed at getting as many students as possible to meet these flexible, individualized interim goals.
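To make the arithmetic concrete, here is a minimal sketch of this even-division logic in Python. The function name and the one-decimal rounding rule are illustrative assumptions rather than D.C.’s official business rules, which the plan describes only through the example above.

```python
def annual_growth_target(current_level: float,
                         years_remaining: int,
                         proficiency_level: float = 5.0) -> float:
    """Divide the remaining distance to proficiency evenly across the
    student's remaining years, rounded to one decimal place.

    A sketch of D.C.-style growth-to-target arithmetic; the exact
    rounding rule is an assumption made for illustration.
    """
    remaining_levels = proficiency_level - current_level
    return round(remaining_levels / years_remaining, 1)


# Year 1: a baseline of 2.0 leaves four more years to reach 5.0.
print(annual_growth_target(2.0, 4))  # 0.8  (3 levels / 4 years = 0.75)

# Year 2: she scores 4.0, leaving one level to cover over three years.
print(annual_growth_target(4.0, 3))  # 0.3  (1 level / 3 years ≈ 0.33)
```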

On the one hand, these systems represent something like an ideal of EL accountability: they set firm expectations over time, but they are also responsive to the variable paths that different students may take. Further, in response to several Obama administration grant programs, many states built these systems atop new assessment suites: updated academic tests in math and literacy paired with new, aligned ELP assessments.

The trouble is, implementing these new pieces required significant time. The U.S. Department of Education approved the last round of ESSA plans in 2017, and many states were still gathering data on their new assessment systems in order to develop suitable models of ELs’ linguistic progress. These sorts of adjustments and shifts are still ongoing. In February, for instance, D.C. lowered its definition of English proficiency on its ELP test from 5.0 to 4.5.
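Under the sketch above, a cut-score change like this simply shrinks the remaining distance in the target arithmetic; for example:

```python
# With the proficiency bar lowered from 5.0 to 4.5, the same student's
# annual target shrinks (hypothetical illustration, not D.C.'s actual rule):
print(annual_growth_target(4.0, 3, proficiency_level=4.5))  # 0.2 instead of 0.3
```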

And, of course, states using growth-to-target models could not start measuring how well schools were helping ELs succeed until they had at least one year of data to serve as a baseline. That meant that ESSA, which passed in 2015, wasn’t going to have fully operational EL accountability systems in many states until the 2018–19 school year (in Connecticut, for instance) or beyond (California anticipated full implementation in 2021–22). Meanwhile, ESSA was technically due for reauthorization after the 2020–21 school year.

Naturally, when the pandemic closed schools in March 2020, these still-new systems hadn’t had enough time to significantly shift how schools were serving ELs.

Now What?

The pandemic’s disruption of the past three academic years has unwound much of that work. Many states were unable to administer their ELP tests in the early phase of the pandemic; WIDA, for instance, reported a large drop in the number of ELs tested. In Washington, D.C., fewer than 60 percent of ELs were able to take the test. In California, home to over 1 million ELs, only about a quarter of EL students took the state’s ELP test during the 2019–20 school year. Unsurprisingly, a scan through Washington, D.C.’s public data on school outcomes reveals missing-data messages like the one captured in Figure 2:

Figure 2: Missing Student Achievement Data in Washington, D.C.
Source: dcschoolreportcard.org

Indeed, school-level ELP metrics do not appear to have been updated since their 2019 versions. In October 2020, D.C. requested a waiver from the U.S. Department of Education, writing, “D.C. does not have the data necessary to calculate the…[ELP] indicator.”

These missing test results mean data gaps of varying sizes for states—and their accountability systems. How, for instance, can states using growth-to-target systems set ambitious and appropriate ELP growth targets for ELs who haven’t been tested in one—or more—academic years?
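Continuing the earlier hypothetical sketch, the mechanics of the problem are easy to see: if the student in Table 1 had last tested at 4.0 and then missed a year, recomputing from her stale score would force the same remaining growth into fewer years (again, an illustration of the arithmetic, not any state’s actual policy):

```python
# The student last scored 4.0 with three testing years left. If she then
# misses a year, the even-division rule squeezes the remaining growth
# into fewer years, steepening her target:
print(annual_growth_target(4.0, 3))  # 0.3  target had she tested on schedule
print(annual_growth_target(4.0, 2))  # 0.5  target after one missed year
```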

More importantly, all of this isn’t just about having enough data to check official boxes and rate schools on EL accountability. It’s about reviving the theory of action behind ESSA’s accountability systems. If the lack of data means that schools are not rated on their ELs’ linguistic development for the past few years (and particularly if it takes multiple years to rebuild baselines and set growth targets), there is a real risk that these students will be pushed to the margins. Indeed, there is ample evidence that ELs were left out of many pandemic learning models throughout the past two years. Without attention or pressure from public accountability systems, it seems certain that many schools will revert to well-worn, inequitable patterns of behavior, directing resources, energy, and attention away from these students. In other words, ELs will be further marginalized.

There is no easy path back for states seeking to restart ELP accountability. In December, the U.S. Department of Education released an FAQ document suggesting that states might use pre-pandemic ELP data to set multi-year growth targets for their ELs. While this is a good first step, the Department should outline firmer guidelines for how states should rebuild EL accountability systems, both to 1) communicate the urgency of schools equitably focusing on these children’s needs now and 2) restore the rigorous, flexible vision that most of these systems promised to deliver before the pandemic.