Spelling Test Grader [UPDATE]

One in six children who are not reading proficiently in third grade fail to graduate from high school on time, four times the rate for children with proficient third-grade reading skills. – "Double Jeopardy", Annie E. Casey Foundation

Since I rolled out my spelling test grader, Catch Up and Read (CAR) has asked for more grading automation! Here’s a quick update on the new grader: the Phonics Survey.

What is a Phonics Survey?

Throughout the CAR program, students need to be benchmarked. The phonics survey determines which consonant-vowel blend types a student struggles with. A student reads words aloud while the teacher marks the words they miss, then tallies the number of incorrect words per section. Teachers use this data to plan lessons around the word types their students have trouble with.

phonics survey example section

Phonics surveys were historically administered through paper tests: a teacher would manually tally incorrect words, identify problem sections, and enter student information into a spreadsheet. Phonics surveys aren’t particularly labor-intensive, but as CAR began supporting more schools, it became obvious that this solution wouldn’t scale. Hence, automation!

Getting Online

The biggest annoyance with the first iteration of the auto-grader was that it was a locally-saved HTML file. This worked fine from an implementation standpoint, but many users complained. To non-tech people, the concept of a local file also being an interactive webpage is a bit obtuse, so I decided to bite the bullet and roll it out to the World Wide Web. Netlify made this easier than it had any right to be: whenever changes are pushed to the main branch of the project’s GitHub repository, Netlify deploys them to the production website. Their free tier offers more than enough bandwidth (100 GB!) for this project’s use case.

Phonics Survey Implementation

I wish I could make this glorified matching form sound sexy, but I can’t. There’s no clever algorithm like the previous test had. It just tallies the words that the user clicks, does a little formatting magic, and prints the page as a PDF. Regardless, there are still some interesting details I’d like to point out.
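
To make the tallying concrete, here’s a minimal sketch in TypeScript. The markup and names (`.word`, `data-section`, the `tally-` IDs) are illustrative assumptions, not the real implementation:

```typescript
// Each word is assumed to be an element like:
//   <span class="word" data-section="short-vowels">cat</span>
// Clicking toggles its "incorrect" state and updates that section's tally.
document.querySelectorAll<HTMLElement>(".word").forEach((word) => {
  word.addEventListener("click", () => {
    word.classList.toggle("incorrect");
    const section = word.dataset.section ?? "";
    const missed = document.querySelectorAll(
      `.word.incorrect[data-section="${section}"]`
    ).length;
    // Assumed per-section counter, e.g. <span id="tally-short-vowels">
    const tally = document.getElementById(`tally-${section}`);
    if (tally) tally.textContent = String(missed);
  });
});
```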

website example

  • A PDF can’t be saved unless the student’s name, grade, and school are entered. These values go into the filename for easy retrieval later.
  • By pulling the current date, the program automatically marks each test as [Beginning, Middle, or End of Year]. This is also referenced in the filename, easing recordkeeping (see the sketch below).
  • The Save button automatically saves the webpage as a PDF with the highlight colors preserved.
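
Here’s roughly what that naming logic looks like, sketched in TypeScript. The school-year cutoffs and field names are my assumptions, not necessarily the production values:

```typescript
// Assumed cutoffs: Aug–Nov = Beginning, Dec–Mar = Middle, Apr–Jul = End.
function termForDate(d: Date): "Beginning" | "Middle" | "End" {
  const month = d.getMonth(); // 0 = January
  if (month >= 7 && month <= 10) return "Beginning";
  if (month === 11 || month <= 2) return "Middle";
  return "End";
}

// The Save button refuses to run until all three fields are filled in.
function buildFilename(name: string, grade: string, school: string): string {
  if (!name || !grade || !school) {
    throw new Error("Name, grade, and school are all required");
  }
  return `${name}_Grade${grade}_${school}_${termForDate(new Date())}OfYear.pdf`;
}
```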
generated PDF example

So What?

Look, this isn’t rocket surgery. In truth, even posting something this trivial feels a bit juvenile, especially compared to how much cool stuff goes down on Hackaday. But instead of focusing on the technology, I’d like to focus on the application and outcome.

Engineers love efficient solutions. But in pursuit of this efficiency, usability can be the first thing to go by the wayside. If a user doesn’t use your tool because they can’t figure it out, then it’s not a good tool. My first version of this program was, on paper, the better solution: it used less memory, its data integrated with Excel, and it had a planned login system that would store student information in a MongoDB backend. But when I showed it to its users, all of them time-crunched and technically-challenged teachers, it was obvious that over-optimization was holding back widespread adoption.

While working on this project, I read a lot of posts from The Mobile Spoon, a user-experience blog, which led to an epiphany about user-centered design. So I rebuilt the form, creating what amounted to an online version of the existing paper tests. I nixed the integrations, opting instead to generate a PDF that is saved locally. I made the PDF generation idiot-proof: hitting “Save” names the PDF with the student’s information and date, skips all dialog boxes, and automatically downloads it to the Downloads folder.
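
The post doesn’t pin down how the PDF gets generated, but a client-side library like jsPDF is one way to get a dialog-free, automatically-named download. A sketch, reusing the hypothetical termForDate helper from the earlier snippet:

```typescript
import { jsPDF } from "jspdf"; // jsPDF's .html() renderer also needs html2canvas

function saveSurveyPdf(name: string, grade: string, school: string): void {
  const filename = `${name}_Grade${grade}_${school}_${termForDate(new Date())}OfYear.pdf`;
  const doc = new jsPDF();
  // Render the current page into the document, then trigger the download.
  // doc.save() downloads the file directly, skipping the print dialog entirely.
  doc.html(document.body, {
    callback: (pdf) => pdf.save(filename),
  });
}
```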

Outcomes

This updated program immediately began pulling its weight: tests that used to take 30 minutes to grade were cut down to 5. Since CAR just hit 1,000 students this year, this program (conservatively!) saved over 400 man-hours of manual test grading: 1,000 tests at 25 minutes saved apiece comes out to roughly 417 hours. Nearly every literacy coach organically mentioned this tool as a massive improvement in their end-of-year reflections. With more consistent data in hand, teachers were able to better identify student gaps, plan data-driven lessons, and push students to read on grade level.

Takeaways

  1. Keep It Simple, Stupid! (KISS) This mantra has continued to ring true throughout every project I’ve undertaken. Complexity has its place, but it’s better to have something bulletproof and simple than something flashy but unreliable.
  2. Think of the user: Users won’t necessarily appreciate that your program has O(n) space complexity. But they do appreciate being able to use your tools without an instruction manual or a degree in compsci.