No Child Left Behind at Five
One of the toughest parts of implementing NCLB is creating a state accountability plan that works. While states have made progress in accountability over the last five years, they have also encountered great obstacles, outlined here.
In 2006, states continued to make adjustments to the accountability plans they must have in place for the No Child Left Behind Act (NCLB). These plans are important because they lay out the specific policies each state uses to determine whether schools and districts have made adequate yearly progress (AYP) in raising student achievement, to identify which schools and districts are in need of improvement, and to carry out all other NCLB accountability provisions.
Since its enactment in 2002, NCLB has spurred massive changes in and expansion of state testing programs, as states sought to meet a 2006 deadline for testing students annually in reading and mathematics in every grade from 3 through 8, plus once during high school. Implementing these far-reaching test requirements has taken many states four years; as of 2006, some states were still changing or adding grades to their testing programs to comply.
The Center on Education Policy (CEP) reviewed the changes to state accountability plans approved by the U.S. Department of Education (ED) during 2006. Two main findings emerged from our review:
- Many of the changes that states requested to their accountability plans this past year were related to meeting the deadline for implementing tests that fulfilled NCLB requirements by the end of school year 2005-06. The introduction of new testing programs or tests at additional grades meant that many states had to set new cut scores for proficient performance and revise their targets for adequate yearly progress. These changes were one reason why many states did not release information about the AYP status of schools and districts until well after the start of school year 2006-07. These delays created uncertainty about which schools and districts would have to undertake the improvement steps and interventions required by NCLB.
- The U.S. Department of Education continued in 2006 to approve changes to state accountability plans that in effect make it easier for schools and districts to demonstrate AYP. These changes include, among others, the adoption of confidence intervals, indexing systems, and more lenient policies for counting scores from retests. But the flexibility permitted by ED in 2006 does not break new ground. Rather, more states are copying changes that ED had already allowed in other states or are applying the adjustments and flexibility described in policy guidance issued by Education Secretary Margaret Spellings in 2005.
CEP's Review of State Plans
As part of a comprehensive, multiyear study of federal, state, and local implementation of NCLB, the Center on Education Policy monitors changes to state accountability plans. This report analyzes the major changes to state plans approved during 2006 and serves as an update to our previous reports on the same topic (CEP, 2004; 2005). The information in this report is based on decision letters to states from the U.S. Department of Education that were posted on ED's Web site between January 1 and December 31, 2006. The decision letters report those changes that were approved by ED, but do not include any information about state requests for changes that ED denied.
States that wish to change their accountability plans must submit their proposed changes to ED for approval. State changes can be submitted at any time, so ED is continually considering state changes and making public its decision letters. Because the approval process is ongoing, this report represents a snapshot in time.
Changes Related to New or Additional Tests
One of the largest impacts of NCLB on the U.S. educational system is the massive change it has produced in state testing systems. By the end of school year 2005-06, states were required to administer assessments in reading/language arts and mathematics every year to all students in grades 3 through 8 and once during high school. (In addition, science tests at three grade levels must be in place by 2007-08.) Tests must be aligned to challenging state content standards and meet a wide variety of other criteria. To comply with the 2005-06 deadline, many states introduced new tests during the past school year. Creating a new test takes several years of research and development, so it is not surprising that many states needed about four years after the law's enactment to reach this point. In fact, a few states have not met the deadline; Hawaii, for example, will not administer an assessment program that meets current NCLB requirements until 2006-07.
Since 2004, about half the states have had to make changes to their accountability plans or ask ED for extensions of certain NCLB deadlines as a result of their introduction of new tests or tests in additional grades. Many other states have expanded their state testing systems during the past few years without having to change their accountability plans. To be sure, the impetus behind some of these changes to testing systems was not always NCLB; some changes were underway before the law passed. However, the fact that nearly all states are now testing every year in grades 3-8 and once in high school is attributable to NCLB. In addition, most states have undergone a peer review, required by ED to ensure that their tests are aligned with standards and meet other technical requirements.
As a result of these federal requirements, state testing systems have become more similar in the grade levels and subjects tested and in the annual frequency of testing. They have also become more similar in purpose, with all states now using tests to hold schools and districts accountable for student progress rather than just to see how students are doing. But state testing systems still vary substantially in most key respects, such as the material tested, the difficulty of the tests, the rigor of the content standards, the cut scores that define proficient performance, and the ambitiousness of their trajectories for reaching 100% proficiency by 2014.
Delays in AYP Determinations
The introduction of new tests affected many states' timelines for releasing AYP results and identifying schools and districts for improvement in 2006. Although NCLB requires states to release AYP data before the start of the school year, many states needed extra time to gather the data necessary for standard setting, the process used to determine the cut scores that define various levels of performance on the test. Most relevant to NCLB is the determination of the score a student must earn on a test to reach the "proficient" level. Typically, the standard setting process involves judgments by committees of experts; to inform their judgments, the committees are given examples of test items as well as data on how students performed on various test items. Therefore, standard setting cannot occur until after a test is administered and test data are analyzed. Usually the standard setting committee then recommends cut scores to state officials (often the members of the state board of education) who then decide on the final cut scores. The entire process can take several months.
As a result of implementing new tests, some states also had to reset their annual measurable objectives (AMOs), the state targets for the percentage of students that must score at the proficient level or above in a given year for a school or district to make AYP. If a new test is easier or harder than the one it replaced, the state may have to rethink the trajectory of targets for bringing 100% of students up to the proficient level by 2014, as required by NCLB.
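To make the idea of a trajectory of targets concrete, here is a hypothetical sketch in Python. The linear shape, starting percentage, and starting year are purely illustrative assumptions; in practice states chose their own paths (stair-step, back-loaded, and other patterns), and only the 100% endpoint in 2014 was fixed by the law.

```python
# Hypothetical AMO trajectory sketch. NCLB fixed only the endpoint
# (100% proficient by 2014); the linear rise assumed here is one
# possible shape, not any particular state's actual trajectory.

def linear_amos(start_pct, start_year, end_year=2014):
    """Return a year -> AMO mapping rising linearly to 100% by end_year."""
    step = (100 - start_pct) / (end_year - start_year)
    return {year: round(start_pct + step * (year - start_year), 1)
            for year in range(start_year, end_year + 1)}

amos = linear_amos(28.0, 2006)
print(amos[2006], amos[2010], amos[2014])  # 28.0 64.0 100.0
```

A state replacing a harder test with an easier one (or vice versa) would rerun this kind of calculation from a new starting percentage, which is why new tests so often forced new AMOs.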
These changes are one reason why many states did not release AYP results based on 2005-06 testing until after the beginning of school year 2006-07; in some cases, weeks after. These delays created uncertainty at the local level and complicated local decisions about whether to offer public school choice, supplemental educational services, and other interventions required by NCLB.
Specific State Changes Related to New or Additional Tests
Because they were implementing new testing programs or tests at new grades, at least 17 states received permission from ED in 2006 to use an extended timeline for releasing test results and AYP determinations, or to make other adjustments, such as changes to their indexing systems or methods for averaging test results across grades. (Table 2 at the end of this report shows which states received permission to use extended timelines or to make other changes due to new tests.) These changes in state testing systems affected the process for determining AYP based on 2005-06 test results. Generally, states needed the extra time for standard setting activities.
At least two more states, Connecticut and South Carolina, asked for extensions of deadlines for determining AYP because of problems with their testing contractors rather than the adoption of new tests. In Connecticut, AYP results were released late because Harcourt, which develops and scores the state's high school tests, notified the state of scoring problems this past spring. South Carolina notified ED that it would not receive test data from its contractor until July and could not make final AYP determinations until September 30; ED approved the delay for 2006 only and urged the state to get the data back from its contractor sooner. Press reports indicate that Illinois also had delays because of various problems with the administration of the test, including scoring errors and problems with the state's new student tracking system (Cole, 2006).
At least 17 states also made changes to their AMOs in 2006, usually in conjunction with the introduction of new tests (see table 2); only two of these states requested changes to their AMOs but not to their testing programs. Kansas, for example, administered a new testing program in 2006 at all grades required by NCLB, using tests aligned to the state's revised content standards. The state needed to have the 2006 test data in hand to conduct standard setting activities over the late summer and early fall. After that process was completed, the state board formally adopted cut scores for proficiency and new AMOs. Only after the proficiency levels and AMOs had been set could the state determine the AYP status of schools and districts. It was not possible to accomplish all of this before the beginning of the new school year, so the state requested more time. At the end of October 2006, Kansas was just getting ready to release its 2006 AYP results.
Many states that did not create entirely new testing programs still made changes in their accountability plans in 2006 related to the testing of additional grade levels. Louisiana, for instance, added new reading and math tests in grades 3, 5, 6, 7, and 9, and changed its methods for calculating school performance scores. The state used to administer the "off-the-shelf," norm-referenced Iowa tests at these grades, but discontinued this practice. Adding grades to an existing testing program is not as easy as it may seem. Although certain characteristics of tests are the same across grades, Louisiana still had to develop 10 more tests: a reading test and a math test for each of five grade levels. Because it took time to set the proficiency cut scores, the AYP results were not ready before the start of school year 2006-07. Arizona added testing in additional grades, but asked ED for an extension to report AYP results. The state wanted more time to implement a student identification system that would allow it to better track student test performance and determine graduation rates more accurately. (Many states have adopted these identification systems to better meet the data demands of NCLB.)
New York previously tested reading and mathematics only in grades 4 and 8 and in high school but expanded its testing program to include all the grades required by NCLB. At the same time, the state chose to update the content of its tests at other grade levels, including those previously given at grades 4 and 8. The mathematics test, in particular, was made more difficult. In 2004 the state Board of Regents appointed a Math Standards Committee, consisting of math teachers and other experts statewide, to revise the math learning standards. The committee's recommendations, adopted by the Board of Regents in 2005, introduced more advanced math content into the lower grades in elementary and middle school. For example, some algebra and geometry content that had been included in the learning standards for grades 5 through 8 was moved to grade 4, and some algebra and geometry content that had been included in high school mathematics was moved to grade 8. The state requested a timeline extension so that proficiency cut scores could be determined in fall 2006, to be followed by the revision of the state's AMOs. As of the end of October 2006, New York still had not released AYP results for elementary and middle schools but was scheduled to do so in November.
To meet NCLB requirements, Minnesota administered a new testing program in 2006 aligned to the state's new content standards. The state also updated its AMOs. Table 1 compares the mathematics AMOs for the state's previous test (the MCA) and its new test (the MCA-II). For simplicity's sake, the table displays only three grades, but the state has set new AMOs for all of grades 3-8 and high school.
[Table 1]
Note: Minnesota's AMOs are based on an index system rather than a simple percentage of students scoring proficient. Minnesota's index gives schools partial credit for students who fall below the proficient level. One point is awarded for students who score at or above proficient; one-half index point is awarded for students who score at the "partially meets standards" level; and zero points are given for students who "do not meet standards."
Source: Minnesota Department of Education, 2004, 2006.
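The partial-credit arithmetic described in the note above can be sketched as follows. This is an illustration of the general mechanism only: the 0-100 scaling and the performance-level labels are assumptions for the example, and Minnesota's actual business rules contain additional details.

```python
# Sketch of a partial-credit index like Minnesota's, per the note above:
# one point for proficient or above, half a point for "partially meets
# standards," zero for "does not meet standards." The 0-100 scaling is
# an assumption for illustration; the state's actual formula may differ.

POINTS = {
    "does not meet standards": 0.0,
    "partially meets standards": 0.5,
    "proficient or above": 1.0,
}

def school_index(performance_levels):
    """Average index points per tested student, scaled to 0-100."""
    points = sum(POINTS[level] for level in performance_levels)
    return 100 * points / len(performance_levels)

# 50 proficient, 30 partially proficient, 20 below: 65 points / 100 students.
levels = (["proficient or above"] * 50
          + ["partially meets standards"] * 30
          + ["does not meet standards"] * 20)
print(school_index(levels))  # 65.0
```

Under a simple percent-proficient rule this school would score 50; the index credits the 30 partially proficient students with half a point each, which is how indexing makes AYP targets easier to reach.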
Minnesota's grade 11 trajectory has shifted considerably: the new AMO for 2006 was 28.1, much lower than the old AMO of 79.3. It appears that either the state's new grade 11 mathematics test is considerably more difficult than the previous one or the state has taken advantage of the change in test to set AMOs that are more realistic in terms of students' actual achievement, thus keeping down the number of schools identified for improvement. (This may be the case for other states as well.) A noticeable pattern is that Minnesota's trajectories shifted more at the higher grade levels. In addition, under the new trajectory, the AMO does not rise between 2006 and 2007, perhaps to allow the educational system time to adjust to the new tests.
Michigan tested additional grade levels in 2005-06. The state asked ED's permission to continue using the AMOs for the grades previously assessed, but to implement new AMOs for the newly assessed grades. Other states took a different tack; they introduced tests at new grades while keeping the old AMOs for a year. Missouri, which introduced tests at new grades in 2005-06, asked to use the existing grade span AMOs for its 2005-06 AYP determinations but to recalculate the AMOs based on the new assessment system and implement the new AMOs in 2006-07.
ED Approval of State Changes
ED approved state requests for changes and late releases on a case-by-case basis, and urged states to publicize AYP results no later than October 31-a deadline that not all states met. ED also stipulated that schools and districts identified for improvement last school year must continue in that category until the new data become available, with the sanctions continuing to apply. For example, schools would need to continue offering public school choice and supplemental educational services if they did so in 2005-06. Schools in improvement status also were supposed to prepare for the next stage of consequences specified by the law, including mandatory restructuring. When the states released their AYP results, schools could theoretically leave improvement status if they met the conditions for exiting. But in reality, it would be very difficult for a district to stop offering school choice to a student who changed schools at the start of the year, so the district would probably have to continue offering choice through school year 2006-07.
Lack of Comparability
The many changes in testing programs spurred by NCLB have made test results within the same state less comparable across years. This lack of comparability makes it more difficult to determine whether student achievement has increased since the inception of NCLB within a state or the nation as a whole. If a change in tests occurs, then student scores on the new test cannot be compared with scores on the old test, unless a complex process called "equating" is undertaken by assessment experts. When researchers and policymakers try to draw conclusions about changes in achievement since NCLB took effect, they must take care to consider whether the test results within a state can be compared from year to year.
CEP is currently undertaking a major study of state assessment data to determine whether the main goals of NCLB are being met; that is, whether student achievement has increased and achievement gaps have narrowed since the law was enacted in 2002. The first phase of the study involves a review of each state's testing program to determine if test results are comparable from year to year or if there has been a break in the testing program, such as the introduction of a new test or new proficiency cut scores. The findings of this study will be released in early 2007.
Other State Policy Changes Affecting AYP
As in past years, states continued to request changes in 2006 that are likely to result in more schools and districts making AYP. These changes include the use of statistical techniques such as indexing and confidence intervals; more liberal policies for counting scores from retests; modifications in testing requirements for students with disabilities; and higher minimum sizes for student subgroups, among other changes. The major categories of changes are described below and summarized in table 2 at the end of this report.
NCLB has been criticized for looking at student achievement solely in terms of "proficient" or "not proficient" and ignoring improvements among students who score below the proficient level. But in fact, ED has allowed states to use indexing systems that give schools and districts credit for gains below the proficient level, such as an increase from year to year in the number of students scoring at the "basic" level instead of the "below basic" level. ED has approved these indexing plans as long as the scores of advanced students are not used to compensate for the performance of students who score below the proficient level.
In 2006, two states adopted indexing systems. This is in addition to the eight states that were allowed to use indexes in their original accountability plans and the six states that received permission to do so in 2004 and 2005. In addition, some states, such as Vermont and New York, received permission in 2006 to make adjustments to their indexing systems to allow for the introduction of new tests.
A confidence interval is a statistical technique, somewhat like a margin of error in an opinion poll, that takes into account natural fluctuations in test scores due to sampling error and other factors unrelated to student learning. Confidence intervals create a window of plus or minus a few points around the percentage of students in a school or subgroup that scores proficient. In effect, this makes it easier for schools and districts to make AYP. In addition, when confidence intervals are used, small subgroups within a school or district are treated with more leniency than large subgroups or all students. (For a more complete explanation of confidence intervals, see CEP, 2005.)
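The mechanics described above can be sketched in Python. The normal-approximation formula and the one-sided 95% z-value used here are illustrative assumptions; individual state workbooks specify their own formulas and confidence levels.

```python
import math

# Illustrative one-sided confidence-interval AYP check. Assumption: a
# normal approximation to the binomial proportion, with z = 1.645 for a
# one-sided 95% interval; actual state formulas vary.

def makes_ayp(n_proficient, n_tested, target, z=1.645):
    """True if the confidence window around percent proficient reaches the AMO."""
    p = n_proficient / n_tested
    margin = z * math.sqrt(p * (1 - p) / n_tested)
    return p + margin >= target

# Both groups are 55% proficient against a 60% target, but the small
# subgroup's wider window lets it make AYP while the large one does not.
print(makes_ayp(22, 40, 0.60))     # True
print(makes_ayp(550, 1000, 0.60))  # False
```

The example shows why confidence intervals treat small subgroups more leniently: the margin shrinks with the square root of the number of students tested.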
Seven more states added or made changes in the use of confidence intervals in 2006, in addition to the 24 states that adopted confidence intervals in 2004 and 2005 and the many states that had included them in their original accountability plans. This means that virtually all states now use confidence intervals in some form. Most common are 95% or 99% confidence intervals for AYP calculations, and a 75% confidence interval in "safe harbor" situations. (The safe harbor provision of NCLB allows a school or subgroup to make AYP even if it falls short of state proficiency targets, as long as the percentage of students who are not proficient is reduced by at least 10% from the previous year.)
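The safe harbor test is a simple relative-reduction calculation, sketched below. The sketch assumes the common reading of the provision, a 10% relative drop in the percentage of non-proficient students, and omits the confidence interval that many states layer on top of it.

```python
# Sketch of the safe harbor test: the percentage of non-proficient
# students must fall by at least one tenth relative to the prior year
# (e.g., 50% non-proficient must drop to 45% or lower). Assumption: no
# confidence interval applied, though many states add a 75% interval here.

def safe_harbor(pct_proficient_now, pct_proficient_last):
    non_prof_now = 100 - pct_proficient_now
    non_prof_last = 100 - pct_proficient_last
    return non_prof_now <= 0.9 * non_prof_last

print(safe_harbor(56, 50))  # 44% non-proficient vs. 50%: a 12% drop -> True
print(safe_harbor(52, 50))  # 48% vs. 50% is only a 4% drop -> False
```

Note that the required reduction is relative, not absolute: a school at 50% non-proficient must shed 5 percentage points, while a school at 20% non-proficient needs only 2.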
Many states use their high school exit exams-tests that students must pass to receive a high school diploma-to determine AYP for high schools under NCLB. Because exit exams have high stakes for students, states allow students multiple attempts to pass different versions of the same test before the end of their senior year. Initially, ED preferred the "first administration" rule for tests used for NCLB: the score that counted was the one a student earned the first time the test was taken. Beginning in 2004, however, the Department allowed more leeway on the retest policy for states with exit exams, then later extended this leeway to several other states. Under the revised policy, scores on retests can count for AYP purposes, and the scores of students who pass the exams early can be "banked" to a subsequent year. In Maryland, for example, if an 8th grader passes a high school level algebra test, the passing score is "banked" until the student enters high school. In New York, a student's highest score on the state's exit exam is used to determine AYP at the high school level. In 2006, five additional states were permitted to count scores from retests for AYP purposes.
Students with Disabilities
NCLB requires all students with disabilities to take the same state reading/language arts and math tests as other students in their grade, with some accommodations and exceptions. But experience has shown that it is very difficult for the subgroup of students with disabilities to make AYP on regular state tests. ED has responded by allowing states to test students with disabilities against "alternate," and then "modified," standards.
In 2003, ED issued regulations that allowed states to give students with significant cognitive disabilities an alternate assessment geared to their learning level (alternate standards) rather than their grade level. However, the number of students for whom proficient or higher scores on these alternate assessments can be considered proficient scores for AYP purposes was limited to 1% of all tested students.
A major policy change announced in April 2005 expanded the opportunities for students with disabilities to take alternate assessments by allowing additional students to be tested against "modified standards." Students targeted by the modified standards policy are generally achieving at higher levels than the students with significant cognitive disabilities targeted by the alternate standards, but their academic performance is not strong enough to meet grade-level standards. Although they may eventually make significant progress toward grade-level standards, they often need more time and help than other students. For AYP purposes, the number of students with proficient or higher scores on alternate assessments based on modified standards cannot exceed 2% of all students tested in the state.
ED offered states two options for testing these students with modified standards. In 2006, four additional states adopted Option I, which allows states without a modified assessment to convert nonproficient scores on regular state assessments to proficient scores for a number of students with disabilities equivalent to 2% of the total students in the state. This brings to 28 the number of states using Option I. In 2006, five states received permission to use Option II, which allows them to count scores from alternate assessments based on modified standards in their AYP calculations, up to the 2% cap. Michigan was the only state that requested to use Option II in 2005, but it subsequently switched to Option I in 2006. Option II is available only to states that had already developed modified achievement standards and had administered alternate assessments based on those standards for at least two years prior to the 2004-05 school year. The new interest in Option II suggests that more states are developing tests specifically for students with disabilities based on modified achievement standards. States were also invited to submit slight variations on these methods (referred to as Option III), with a 2% cap remaining, but only Colorado, Maryland, and Massachusetts did so.
Minimum Subgroup Size and Hurricane Katrina Subgroup Policy
To make AYP, schools and districts must meet achievement targets for each subgroup of students of significant size. The subgroups considered for NCLB accountability include major racial-ethnic subgroups, such as African-American, Latino, and Asian students (with some variations among states); low-income students; students with disabilities; and English language learners. To ensure that subgroups are large enough for changes in aggregated test scores to be valid indicators of group progress, states set minimum sizes for subgroups to count for AYP purposes. Subgroups below the minimum do not get counted for AYP, so if the minimum size is larger, it becomes easier for schools to make AYP.
Previously, the trend among states was away from a single minimum size and toward larger subgroup sizes, different sizes for different subgroups and/or purposes, and the use of formulas for determining subgroup sizes. The trend seems to have stopped: after 13 states increased their minimum subgroup sizes in 2004 and 10 more did so in 2005, only 4 states changed their minimum subgroup sizes in 2006. Instead of adopting formulas or differentiating among subgroups, Alaska and Vermont moved to a uniform minimum subgroup size of 40 students, and Kansas and Puerto Rico to a uniform minimum of 30 students. Kansas and Puerto Rico may have chosen a uniform minimum because they have developed assessments for students with disabilities based on modified standards, as described above; ED has made clear it will not approve these types of assessments if a state sets a different subgroup size for students with disabilities.
Another development related to subgroups was Secretary Spellings' decision to allow some flexibility to states hosting large numbers of students displaced by the Hurricane Katrina disaster (Spellings, 2005a). States could request a waiver from ED allowing displaced students to be placed in a separate subgroup for reporting and accountability purposes rather than being included in existing subgroups. Once test results were available for 2006, ED and the affected states would make decisions about school and district AYP based on how the displaced students performed. The performance of the subgroup still had to be reported. Seven states (Alabama, Arkansas, Georgia, Louisiana, Mississippi, Tennessee, and Texas) took advantage of this temporary policy change.
NCLB requires 95% of the students in every school and every subgroup within a school to take each subject test required by the Act. If this test participation requirement is not met, the school cannot make AYP even if its test scores meet state targets. In March 2004, the Department relaxed this requirement a bit, allowing states to average their participation rates over two or three years, so that a 94% participation rate one year could be balanced by a 96% participation rate the following or previous year. In 2005, six states changed their accountability plans to incorporate this new policy, in addition to the 32 that did so in 2004. In 2006, only Puerto Rico adopted the practice.
States can also average over two or three years (the latest year plus one or two preceding years) their percentage of students scoring at the proficient level, a practice referred to as "uniform averaging." This year, eight states either adopted the practice for the first time or made adjustments in their averaging policy related to the adoption of tests in new grades.
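The uniform averaging described above is a straightforward multiyear mean, sketched below; the same arithmetic applies to the participation-rate averaging discussed earlier. The unweighted mean is an assumption for illustration, since states' approved formulas may weight years or student counts differently.

```python
# Sketch of "uniform averaging": a school's percent proficient (or its
# test participation rate) may be averaged over the latest two or three
# years. Assumption: a simple unweighted mean of the annual percentages.

def uniform_average(pct_by_year, years=3):
    """Mean of the most recent `years` entries of a chronological list."""
    recent = pct_by_year[-years:]
    return sum(recent) / len(recent)

# A dip in the latest year is cushioned by stronger earlier years:
history = [62.0, 58.0, 51.0]  # e.g., 2004, 2005, 2006 percent proficient
print(uniform_average(history))           # 57.0 over three years
print(uniform_average(history, years=2))  # 54.5 over two years
```

The effect is to smooth year-to-year fluctuations: a school whose scores dipped in the latest year may still make AYP on the strength of its recent history.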
Extra Time for Some Students to Graduate
Under NCLB, graduation rates are factored into AYP determinations for high schools. In 2006, two states received approval from ED to count students with disabilities and/or English language learners as graduating on time even if they took extra years to graduate. This change allows students with disabilities to be counted as graduating on time if their individualized education plans call for extra years of high school. English language learners can be counted as graduating on time if it takes five years to complete high school instead of four, or as determined on a case-by-case basis. Previously, 14 states had already been allowed to adopt this policy.
Identifying Districts for Improvement
Nine states adopted a policy in 2006 to allow a school district to be identified as in need of improvement only when it does not make AYP in the same subject and across all three grade spans (elementary, middle, and high school) for two consecutive years. This "grade span" method makes it considerably easier for districts to avoid improvement status. The addition of 9 states in 2006 brings the total number of states using the grade span method to 35.
Changes by State
Table 2 shows changes to states' original accountability plans made in 2004, 2005, and 2006, as documented by decision letters posted on the ED Web site. It is not a summary of state accountability plans originally approved in the spring and summer of 2003; for instance, the states listed in the "confidence intervals" column do not include those that had requested to use confidence intervals in their original accountability plans.
[Table 2, parts 1 and 2]
Table reads: In 2004, Oregon started using scores from retests to calculate AYP. In 2005, Oregon adopted ED's Option I for testing students with disabilities using modified standards and changed its process for identifying districts for improvement. In 2006, Oregon changed its annual measurable objectives for student progress. This same year, Oregon also expanded its testing program, which resulted in delays or other changes to its accountability plan.
Note: State changes posted on the ED Web site in 2006 (as of December 31, 2006) are shown as "06." State changes posted in 2005 are shown as "05," etc. The specific states listed for each category are not final and may change as states and the Department release additional information.
Note: SWDs = students with disabilities; AMOs = annual measurable objectives.
*Illinois also experienced delays in test data due to contractor problems, but this problem was not reflected in ED approval letters issued as of December 31, 2006.
Source: Center on Education Policy, based on ED decision letters, http://www.ed.gov/admins/lead/account/letters/index.html.
School year 2005-06 was the deadline for states to implement tests in all of the grades required by NCLB. This has resulted in massive changes in and expansion of state testing systems. For many states, adding tests in new grades and changing testing systems took a full four years to accomplish and delayed the release of AYP determinations in 2006. But even though nearly all states have testing systems that match NCLB's requirements, they cannot relax because the law requires states to introduce science tests by 2007-08.
Over the past year, ED continued to grant states a variety of changes that make it easier for schools and districts to make AYP. But these changes broke little new ground in terms of new areas of flexibility from ED. Most states simply asked for changes already granted to other states or adopted policies that Secretary Spellings explicitly allowed in her policy letters and her 2005 "roadmap" document explaining the types of flexibility available under current law (Spellings, 2005a; 2005b). Therefore, the process of requesting and making changes to state accountability plans has become more predictable, as the parameters for what is and is not allowed have become clearer.
This report was researched and written by Naomi Chudowsky and Victor Chudowsky, CEP consultants. Nancy Kober, CEP consultant, edited the report. Jack Jennings, CEP's president and CEO, and Diane Stark Rentner, CEP's director of national programs, provided advice and assistance.
Based in Washington, D.C., and founded in January 1995 by Jack Jennings, the Center on Education Policy is a national independent advocate for public education and for more effective public schools. The Center works to help Americans better understand the role of public education in a democracy and the need to improve the academic quality of public schools. We do not represent any special interests. Instead, we help citizens make sense of the conflicting opinions and perceptions about public education and create the conditions that will lead to better public schools.
The Center on Education Policy receives nearly all of its funding from charitable foundations. We are grateful to The Joyce Foundation, The Ewing Marion Kauffman Foundation, and The Carnegie Corporation for their support of our work on the No Child Left Behind Act. The George Gund Foundation, The MacArthur Foundation, and the Phi Delta Kappa International Foundation also provide the Center with general support funding that assisted us in this endeavor. The statements made and views expressed are solely the responsibility of the Center.
© Center on Education Policy January 2007
Center on Education Policy. (2004). Rule changes could help more schools meet test score targets for the No Child Left Behind Act. Washington, DC: Author.
Center on Education Policy. (2005). States test limits of federal AYP flexibility. Washington, DC: Author.
Cole, W. (2006, November 29). Some children left behind [Electronic version]. Time.
Minnesota Department of Education. (2004). Minnesota consolidated state application accountability workbook. Amended November 2004. Retrieved on November 14, 2006, from http://www.pbs.org/newshour/bb/education/nclb/map/aypplan/mn.pdf#search='minnesota%20state%20accountability%20plan'.
Minnesota Department of Education. (2006). NCLB adequate yearly progress (AYP) business rules. Retrieved on January 3, 2007, from http://children.state.mn.us/mdeprod/groups/NCLB/documents/Manual/010590.pdf.
Spellings, M. (2005a, September 29). Letter to chief state school officers. Retrieved on November 15, 2006, from http://www.ed.gov/policy/elsec/guid/secletter/050929.html.
Spellings, M. (2005b, November 10). No Child Left Behind: A roadmap to state implementation. Retrieved on November 15, 2005, from http://www.ed.gov/admins/lead/account/roadmap/index.html.
No Child Left Behind at Five: A Review of Changes to State Accountability Plans. (2007). Washington, DC: Center on Education Policy.