The Proliferation of State Learning Outcomes Assessments in India
There’s a really bad dad joke about European technical standards that goes something like “We love standards; that’s why we create so many of them.” (I told you it was bad.) I feel like that is where ed assessments in India are heading. Ideally, there would be a single trusted national sample assessment that provides accurate data on how well states and districts are doing compared to each other and over time. Instead, I fear that we are going to get a patchwork of different state and central assessments.
According to a recent report by MSDF, CSF, EI, CGI, and CSSL, in 2016 27 states and UTs conducted state-level assessments, some sample-based and some census-based. The NEP recommended that all states conduct a census-based state assessment and that a national exam be introduced in grades 3, 5, and 8. The World Bank STARS project has allocated quite a bit of money to increase state assessment capacity in the six states where the project will be implemented. And India is still scheduled to participate in the 2021 PISA (though I have no idea if that is even happening). This is going to take up a mind-boggling amount of time and resources. Even the sample assessments tend to have massive sample sizes, and any assessment requires dedicated technical resources to come up with items, determine the sampling strategy, and analyze the data.
This would be OK if at least the data were high quality and usable. Unfortunately, I am skeptical that this will be the case. The first issue with state-level assessments is that they make it really difficult to compare states. While comparing state performance in a single year isn’t all that helpful (we don’t need an assessment to tell us that HP does better than UP), comparing changes in state performance, especially if the states implement very different policies, can be really helpful. Second, state policymakers often try to use assessment data to rank schools, blocks, and districts, which all but guarantees that the data will be unreliable. The MSDF et al. report cites the example of Saksham Ghoshna in Haryana, which rewarded blocks that achieved 80% student competency on a state-level assessment. Unsurprisingly, in the four years of the project the share of students who had achieved the competency level went from 40% to 80%. (Ironically, BCG made similarly bold claims about the effect of an ed reform project in Haryana which happened just prior to this one.)
A rigorous, well-executed national sample survey on learning outcomes would be far superior to this mishmash of state-level assessments. It would also be far cheaper and take up far less student, teacher, and ed official time. In addition to the reduced duplication of effort, there would likely also be big efficiency gains from a more precise and well thought-out design. For example, you could likely significantly reduce the sample size required just by judicious stratification of schools prior to initial random selection. Unfortunately, the National Achievement Survey (NAS) is not that assessment. I really don’t know what it would take from an institutional perspective for the NAS to improve in quality, but the stakes are high.
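To see why stratification can cut sample sizes so much, here is a toy simulation. All of the numbers are invented for illustration (three hypothetical strata of schools with different average scores, nothing drawn from NAS or any real assessment); the point is just that when schools differ a lot between strata but less within them, a stratified sample estimates the national mean with a much smaller standard error than a simple random sample of the same size.

```python
import random
import statistics

random.seed(0)

# Hypothetical population of school mean scores, split into three strata
# (e.g. urban / rural / remote). Means and sizes are made up for illustration.
strata = {
    "urban":  [random.gauss(70, 5) for _ in range(4000)],
    "rural":  [random.gauss(55, 5) for _ in range(5000)],
    "remote": [random.gauss(40, 5) for _ in range(1000)],
}
population = [score for group in strata.values() for score in group]
N = len(population)

def srs_estimate(n):
    """Mean of a simple random sample of n schools."""
    return statistics.mean(random.sample(population, n))

def stratified_estimate(n):
    """Mean from a proportionally allocated stratified sample of ~n schools."""
    est = 0.0
    for scores in strata.values():
        n_h = round(n * len(scores) / N)  # proportional allocation
        weight = len(scores) / N
        est += weight * statistics.mean(random.sample(scores, n_h))
    return est

def empirical_se(estimator, n, reps=500):
    """Standard error of an estimator, measured by repeated sampling."""
    draws = [estimator(n) for _ in range(reps)]
    return statistics.stdev(draws)

print("SRS SE:       ", round(empirical_se(srs_estimate, 300), 3))
print("Stratified SE:", round(empirical_se(stratified_estimate, 300), 3))
```

In this setup the stratified standard error comes out at roughly half the simple-random-sampling one, which (since standard errors shrink with the square root of sample size) means you could hit the same precision with roughly a quarter of the sample. Real survey design is more involved, but this is the basic mechanism behind the efficiency gain.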