Spelling Test Auto-Grader for Catch Up and Read
“Automation is good, so long as you know exactly where to put the machine.”
~ Eliyahu Goldratt
One perk of being a programmer is the ability to automate boring, repetitive tasks to ease my workload. But what’s even more rewarding is the opportunity to help non-programmers with their data analysis drudgery. Such is the case with this project.
The Problem
My mom works for Catch Up and Read, a Dallas non-profit that helps below-reading-level students catch up before third grade. Their problem was grading third-grade spelling tests quickly and easily. By analyzing patterns in how a student misspells words, a smart teacher can determine which concepts that student struggles with and modify their lesson plan accordingly. This analysis is tedious, requires a fair amount of calculation, and if you’ve made it this far you know where this is going. Automation!!
Constraints
This program had an interesting soup of additional constraints that made it especially fun to write:
- The program must be intuitive - Any teacher should be able to walk up and use it without training. This means the program needs to be extremely robust and have a very pretty user interface.
- The program must be easily distributable - It can’t require installing a program or programming language onto the user’s computer.
- The program has to be accessible through public school wifi - no easy feat with archaic whitelist-based firewalls.
Given these constraints, I decided to commit a cardinal sin of web development - combining CSS, JavaScript, and HTML in a single file. Doing so would allow me to distribute an HTML file containing the program to teachers, bypassing the firewall issue entirely.
Implementation
Jumping right in: each word on the test can be split up into chunks, all of which are seen above. Teachers normally go through each word the child has misspelled and determine which chunks have been missed. They then tally up each of these incorrect chunks and highlight the most missed chunks on the sheet seen above.
In order to mimic this behavior, my program splits each word into “runs” of vowels and consonants. English also has more complex chunks built from vowels and consonants, so the program uses regex to handle these special cases. For the purposes of this program (and the English language, for that matter), these special-case chunks are treated as vowels.
An example
Let’s consider the word fright:
- The first match is fr, a run of 2 consonants
- Next, it matches igh, a special case chunk called a Long Vowel
- Finally, the program matches t, a single consonant
So, this word chunked up is ['fr', 'igh', 't']
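To make that concrete, here’s a minimal sketch of how the chunking step could look in JavaScript. The function name chunkWord and the short special-case list are illustrative placeholders, not the program’s actual patterns, which cover far more cases (word endings, other long vowels, and so on).

```javascript
// A minimal sketch of the chunking step. The special-case list is a small,
// illustrative sample; the real program recognizes many more patterns.
const SPECIAL_VOWEL_CHUNKS = /^(igh|ee|ea|ai|oa)/; // treated as vowels
const VOWEL_RUN = /^[aeiou]+/;
const CONSONANT_RUN = /^[^aeiou]+/;

function chunkWord(word) {
  const chunks = [];
  let rest = word.toLowerCase();
  while (rest.length > 0) {
    // Try the special-case patterns first, then fall back to plain runs.
    const match =
      rest.match(SPECIAL_VOWEL_CHUNKS) ||
      rest.match(VOWEL_RUN) ||
      rest.match(CONSONANT_RUN);
    chunks.push(match[0]);
    rest = rest.slice(match[0].length);
  }
  return chunks;
}

chunkWord('fright'); // ['fr', 'igh', 't']
```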
Below is an example of how the program grades a student who entered ‘freit’ instead of fright.
In order to grade the incorrect word, we need to chunk-ify the correct word and compare the two.
We first (1) determine the vowel-consonant pattern of the valid word: for ‘fright’, this pattern is [consonant, vowel, consonant]. Then, we (2) split each word’s chunks into two queues: one for vowels and the other for consonants. We then use this pattern to (3) dequeue the vowels and consonants in the correct order and determine whether they match. Finally, we (4) count up every incorrect chunk and display it back to the user.
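Sketched in JavaScript, those four steps might look something like this. It builds on the hypothetical chunkWord above and simplifies the real grader rather than reproducing it.

```javascript
// Sketch of grading steps (1)-(4). A chunk counts as a vowel if it
// contains a vowel letter, which covers special cases like 'igh'.
const isVowelChunk = (chunk) => /[aeiou]/.test(chunk);

function gradeAttempt(correctWord, attempt) {
  const correctChunks = chunkWord(correctWord);
  const attemptChunks = chunkWord(attempt);

  // (1) The vowel-consonant pattern of the correct word.
  const pattern = correctChunks.map(isVowelChunk);

  // (2) Split each word's chunks into a vowel queue and a consonant queue.
  const toQueues = (chunks) => ({
    vowels: chunks.filter(isVowelChunk),
    consonants: chunks.filter((c) => !isVowelChunk(c)),
  });
  const correct = toQueues(correctChunks);
  const attempted = toQueues(attemptChunks);

  // (3) Walk the pattern, dequeueing from the matching queue of each word,
  // and (4) record every chunk of the correct word that doesn't line up.
  const missedChunks = [];
  for (const isVowel of pattern) {
    const expected = (isVowel ? correct.vowels : correct.consonants).shift();
    const actual = (isVowel ? attempted.vowels : attempted.consonants).shift();
    if (actual !== expected) missedChunks.push(expected); // missing chunks count too
  }
  return missedChunks;
}

gradeAttempt('fright', 'freit'); // ['igh'] (the long-vowel chunk was missed)
```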
Word Analysis
- “blayde” marks both ‘a’ and ‘e’ incorrect because together they form an incorrect long vowel.
- When the program doesn’t have enough chunks to work with, as is the case when ‘kat’ is entered for the word ‘camped’, it marks the unmatched chunks of the correct word as incorrect.
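Using the grading sketch above (whose simplified chunking may differ from the real program’s handling of word endings like ‘ed’), that length-mismatch case plays out roughly like this:

```javascript
// Illustrative output from the sketch, not from the real program.
gradeAttempt('camped', 'kat');
// ['c', 'mp', 'e', 'd']: the unmatched 'e' and 'd' are counted as misses too.
```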
Chunk Analysis
- The analysis section displays which chunks a student has had repeated issues with.
- If the student’s answer differs greatly from the vowel-consonant pattern of the correct word, the program can mark chunks as wrong even if they were correct in their own context. Clicking on a wrongly-marked chunk factors it out of the calculations.
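One way that tally could be sketched is with a hypothetical tallyMissedChunks helper. In the real UI the teacher dismisses individual marks by clicking; the sketch approximates that with a set of ignored chunks.

```javascript
// Tally how often each chunk was missed across all graded words, skipping
// any chunks the teacher has clicked to factor out of the calculations.
function tallyMissedChunks(gradedWords, ignored = new Set()) {
  const counts = new Map();
  for (const { missedChunks } of gradedWords) {
    for (const chunk of missedChunks) {
      if (ignored.has(chunk)) continue; // dismissed by the teacher
      counts.set(chunk, (counts.get(chunk) || 0) + 1);
    }
  }
  // Most-missed chunks first, ready to display back to the user.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

tallyMissedChunks([
  { missedChunks: ['igh'] },
  { missedChunks: ['c', 'mp', 'e', 'd'] },
  { missedChunks: ['igh', 'd'] },
]); // [['igh', 2], ['d', 2], ['c', 1], ['mp', 1], ['e', 1]]
```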
Takeaways
Outside of the core program, the difficulty of this project lay in the sheer number of edge cases.
Word endings, long vowels, and mismatched vowel-consonant patterns all had to be handled gracefully.
While it would have been great to deploy this to a website to show it off, saving the file locally just made more sense.
The webapp is now up and running, hosted on Netlify! See the update here.
In the future, I’d love to expand this program to work with the Spanish-language tests as well. Doing so would be a slog (mo chunks, mo problems). If anyone wants to take a crack at decoding Spanish, here’s the GitHub repo.