What do you think of SchoolSparrow? Part 1

If you’ve followed the debate about GreatSchools.org ratings, you might have also heard about SchoolSparrow.com. Positioned as an equity-oriented alternative school rating site, SchoolSparrow started about ten years ago in Chicagoland, and it went national in 2021. On its website, you can search for your own school or the schools in your town, and you can read about SchoolSparrow’s suggestions for realtors and resources for public conversation about school quality.

As you’ll see in this two-post series, I have genuinely mixed feelings. This first post is my attempt to outline those thoughts and critiques; before the site went national, I shared a slightly longer version of this with SchoolSparrow’s founder, Tom Brown. The second post is Tom’s response to me, shared with permission and slightly edited.

My conversation with Tom is a snippet of an ongoing debate about how to address a big problem in contemporary education policy: the relationship between test-based school accountability and school/residential segregation. As I’ve written in an earlier post, this is my day job. I work for a consortium of Massachusetts public school districts that are piloting an alternative to test-based school measurement. Our school quality framework is designed to measure things that can’t be measured on tests. It relies on school-level administrative data (e.g., ratio of mental health counselors to students) as well as just hearing directly from actual people in each school, via student and teacher surveys.

As you’ll see across each post, Tom and I approach school quality measurement differently, both in the strictly technical aspect of the measurement itself and also in how we talk about the issue more generally. That said, I think we both recognize that the other is trying to do something that is new and difficult, and we both come to the work because we believe it has the potential to dislodge a major linchpin of inequity in public education. 

Especially from that perspective, I appreciate Tom’s dedication to this work. In addition to giving permission to share our conversation, he’s been very genuine about accepting feedback and improving SchoolSparrow. As part of the email conversations that led to these posts, Tom was very clear that his “ratings will continually evolve until we’ve moved away from test scores and one rating for every school,” and that he’s interested in feedback about how to get there. 

As the title says, I want to invite readers into this conversation as well. Tom and I both know we haven’t fully solved this issue, and that no measurement system – regardless of technical sophistication – is going to work if people aren’t bought into it. And, we’re both parents, too; so, we get it from that perspective. My oldest is going to start kindergarten next year. This is where I’m living right now, both professionally and personally. If it’s similar for you, let us know: feel free to use the comments here or reach out to me or Tom on Twitter.

OK, so, onto SchoolSparrow and our email conversation. Tom recently published a short piece in Poverty & Race, the newsletter for the Poverty & Race Research Action Council. (Side note: it’s a special edition of the newsletter, and it’s outstanding. It features leading thinkers and researchers on a wide variety of issues related to school and residential segregation, including several who have posted on this very blog in the past. Here’s the full PDF; I highly recommend it.) Tom’s piece has a good summary of the differences between GreatSchools and SchoolSparrow. Here are a few highlights:

  • In its most recent version, GreatSchools ratings are based on average test scores (weighted 30%), test score growth (30%) and an “equity score” (40%). The “equity score” is essentially a measure of the so-called achievement gap: it compares the average test scores of children from economically advantaged families with the scores of children from low-income families. These components are combined into a single score on a 1-10 scale.
    • If you’re not familiar, there’s a lot of great critique of GreatSchools. I highly recommend this piece from an Integrated Schools organizer, as well as this Integrated Schools podcast episode, and this Mother Jones article that also features MCIEA (where I work). Notably, this paper evaluated the impact of GreatSchools ratings over nearly ten years, and found that the availability of test-score data led to gaps in housing values and fueled racial segregation in schools. 
    • A very recent study, summarized in Chalkbeat, essentially confirmed the arguments of GreatSchools’ critics: it used an extensive dataset and sophisticated statistical measurement tools to demonstrate that “the fact that schools with more white students are highly rated reflects selection bias rather than educational quality.” In other words, ratings reflect demographics, not actual school quality.
  • As Tom describes, SchoolSparrow’s rating system “calculates the average expected score on the Reading Language Arts (RLA) section of the standardized test based on control variables such as the percentage of children considered economically disadvantaged (ECD) that took the test and the percentage of children classified as having a disability (CWD) that took the test.” So, it creates a predicted score based on economic status and disability status. If a school overperforms its predicted score, then it gets a higher rating on the SchoolSparrow site. You can read more here about the algorithm, which has been endorsed by external review.
  • Like its competitor, SchoolSparrow rates schools according to a single score on a 1-10 scale, though, as noted above, Tom plans to move away from that system as he improves his formula. In an email to me, he also noted that he’s considering only publishing ratings for schools that score above the 60th percentile. In this approach, which has not yet been implemented, schools below the 60th percentile would be listed as <=7, to avoid calling out any schools as “bad” schools. Then, “schools that are in the 60-73% percentile are an 8, 74%-87% are 9/10 and 88%+ are 10/10.”
  • Along those lines, SchoolSparrow may include non-test-based forms of measurement in the future, such as parent satisfaction surveys. As Tom says in Poverty & Race, “School Sparrow aims to build a data set and allow users to create their own customized rating system based on what they believe is important when selecting a school. One parent might weigh teacher quality measures at 50% and parent satisfaction at 50%, and we’ll show school ratings in that context.”
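To make the approach in the bullets above concrete, here is a minimal sketch of a SchoolSparrow-style rating in Python. To be clear, this is my own illustration, not SchoolSparrow’s code: the coefficient values in `expected_rla` are made up (in practice they would come from a regression across all schools), and the percentile cutoffs implement Tom’s proposed, not-yet-implemented “only publish above the 60th percentile” scheme.

```python
def expected_rla(pct_ecd, pct_cwd):
    """Predicted average RLA score from the control variables Tom describes:
    % economically disadvantaged (ECD) and % children with disabilities (CWD)
    among tested students. Coefficients here are invented for illustration."""
    return 80.0 - 0.3 * pct_ecd - 0.2 * pct_cwd

def sparrow_style_ratings(schools):
    """schools: list of (name, pct_ecd, pct_cwd, actual_rla) tuples.

    Each school is rated by how far it over- or under-performs its
    demographic-adjusted prediction (the residual), and residual
    percentiles are binned into the tiers described above. Everything
    below the 60th percentile is reported only as '<= 7'."""
    residuals = {name: actual - expected_rla(ecd, cwd)
                 for name, ecd, cwd, actual in schools}
    ordered = sorted(residuals, key=residuals.get)  # worst to best residual
    n = len(ordered)
    ratings = {}
    for rank, name in enumerate(ordered):
        pctile = 100.0 * rank / (n - 1)
        if pctile >= 88:
            ratings[name] = "10"
        elif pctile >= 74:
            ratings[name] = "9"
        elif pctile >= 60:
            ratings[name] = "8"
        else:
            ratings[name] = "<= 7"
    return ratings
```

The point of the design is visible in toy data: a high-poverty school with modest raw scores can out-rate a wealthy school with higher raw scores, because each school is compared against its own prediction rather than against a raw average.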

Jack Schneider, co-founder of MCIEA, also has an article in the PRRAC newsletter, and there’s overlap in our critique. For example, he argues that because “school quality…is a multidimensional construct,” the use of test score data “does not align with what it purports to describe.” Basically: tests can’t capture all the wonderful, complicated things schools do. 

SchoolSparrow is undoubtedly an improvement on GreatSchools. My main concern is that, as long as it relies on test scores and boils schools down to a single rating, it runs the risk of replicating the problems associated with test-based school rating systems of all kinds. Here’s what I said to Tom, back in May of 2021:

My main point of dissonance with SchoolSparrow is that it is still a rating system. As a result, it plays into a popular narrative that some schools are “good” and some are “bad.” There absolutely is value in pushing public understanding on this, and separating school quality from out-of-school factors. However, the underlying assumption is still in place: there’s a scarcity of “good” schools, therefore parents should move to the right neighborhoods, pull the right strings, etc., to “get” those schools for their kids. By contrast, our underlying principle at MCIEA is that all schools can and should be “good” schools. For example, we don’t rate schools against each other, but instead rate schools against an external measure (often based on recommendations from research or professional associations) of what makes a “good” school.

There are a few related issues that flow out from the critique above:

  • White parent colonization – I worry that the overarching setup of your site might still play into some of the problems associated with the “hidden gem” concept: white parents find out about a school that others believe is a “bad” school, several families enroll their kids, then quickly take over the culture of the school in colonizing ways (e.g., without forming meaningful partnerships with the families who have been there longer). Of course, the Nice White Parents podcast is a good illustration of this kind of thing. And, conversely, the Integrated Schools org is deeply thoughtful about discussing how white parents can partner in ways that are not colonizing. ICYMI, they just did a great webinar on this very topic.
  • Reliance on test scores – Even if it’s a much better use of test scores, I still have trouble with a rating system that is based on test scores. Of course, tests measure only a narrow range of academic skills and they can’t tell us anything about the early grades (PK-3), when so much important learning is happening. I worry that some schools rated high on your system might still be missing learning experiences that are important but aren’t easily captured in tests, like whether students have opportunities to work in groups with peers or whether their school builds skills around civic participation. A few of my colleagues explored this in a recent paper: they looked at what would happen if 25% of a school’s rating was based on school climate and found that schools serving students of color would see their ratings increase (in some cases) by 9 or more percentile points (see table on page 9).
  • Lack of attention to race/racism – I think that, in our highly segregated society, race matters to parents in a way that is separate from school quality. So, I’m cautious about a system that aims to shift the ground under racial segregation without explicitly talking about race. This study, for example, found that for every 1% increase in the black student population at a school, the likelihood of white student enrollment decreased by 1.7%. Overall, they argue that “racial bias is in fact responsive to the racial composition of schools rather than to other school characteristics” (p. 110). This tells me that there’s something going on here that isn’t even connected to school ratings, and that fixing the overarching issue requires direct confrontation of this ugly underlying issue. 

Later this week, I’ll post Tom’s reply to me. In it, he says that “the academic research community seems to let the perfect be the enemy of the good.” I hear this critique, though I might be guilty of it myself. I want to believe that we can improve the current system by building improvements on top of it, eventually getting to something different. What I really believe, though, is that we just need to start something new entirely, as big and maybe impractical as that sounds.

What do you think?

2 thoughts on “What do you think of SchoolSparrow? Part 1”

  1. this is so nicely laid out! Glad you are continuing to engage with him.

    On Sat, Jan 29, 2022 at 10:22 AM School Diversity Notebook wrote:

    > Peter, Center for Education and Civil Rights at Penn State posted: “If you’ve followed the debate about GreatSchools.org ratings, you might have also heard about SchoolSparrow.com…”

  2. Pingback: What do you think of SchoolSparrow? Part 2 | School Diversity Notebook
