At first, I
thought it would be especially difficult to ensure the quality of a
cookbook. Instead of a few application-sized examples illustrating a
few coherent topics (say, database access in Java), we had to test 350
separate pieces of code on 350 wide-ranging topics. As with a software
project, we had deadlines to meet.
Worse, due to the structure of our contract and the scarcity
of proofreader time, this book was essentially a Waterfall project. Up
to the very end, our focus was on getting everything written, with some
time allocated afterward to edit the text and code. This isn't quite as
crazy as it sounds, because bad text is easier to whip into shape than
bad code, but it meant I didn't have a lot of time to spend on building
test infrastructure.
Fortunately, despite my early misgivings, the Cookbook format
actually made the individual recipes easy to test. I was able to turn
pre-existing features of the recipes (the worked examples) into unit
tests. I ran the tests in an instrumented irb
session and generated a report that flagged any failures.
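Most recipes end with a worked example whose "# => " comments record what irb would display for each expression, and those comments are precisely the assertions a test harness needs. A hypothetical recipe excerpt might look like this:

```ruby
# Excerpt from a hypothetical recipe: each "# => " comment records the
# value irb would display for the preceding expression, so the worked
# example doubles as a set of unit-test assertions.
require 'set'
s = Set.new([1, 2, 3])
s.include?(2)     # => true
(s << 4).size     # => 4
s.include?(5)     # => false
```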
Thanks to the test framework, on a good day I could
proofread, debug, and verify the correctness of 30 recipes. I worked
faster and with greater confidence than I could have by hand. I was
also able to incorporate the test results into the general
"confidence score" I calculated for each recipe.
In this article, I present a simplified, cleaned-up version
of my testing script. It parses recipe text into a set of code chunks
and assertions. It then runs the code chunks in an instrumented irb
session, and compares the assertions to reality.
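As a rough sketch of that pipeline (assuming, as above, that assertions take the form of "# => " comments), the following evals each line of a recipe in a single shared binding, a stand-in for a real instrumented irb session, and compares each asserted value against the inspect of the actual result:

```ruby
# Rough sketch, not the article's actual script: assertions are "# => "
# comments; each line is evaled in one shared binding (standing in for
# an instrumented irb session) and the result's inspect string is
# compared to the asserted text.
def check_recipe(text)
  failures = []
  bind = binding                       # state persists across the recipe
  text.each_line.with_index(1) do |line, num|
    code, expected = line.split(/#\s*=>\s*/, 2)
    next if code.strip.empty?
    actual = begin
               eval(code, bind).inspect
             rescue StandardError => e
               "raised #{e.class}"
             end
    if expected && actual != expected.strip
      failures << "line #{num}: expected #{expected.strip}, got #{actual}"
    end
  end
  failures
end

recipe = <<'RECIPE'
a = [3, 1, 2]
a.sort        # => [1, 2, 3]
a.max         # => 4
RECIPE
puts check_recipe(recipe)   # reports the deliberate failure on line 3
```

Evaluating every line in one binding matters: a variable assigned early in a recipe stays visible to the assertions that follow, just as it would across prompts in a single irb session.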