Results Verification
So you've figured out the right way to
drive your program, and you have this great test case. But after you've
told your program to do its thing, you need a way to know whether it did
the right thing. This is the verification step in your automation, and
every automated script needs one.
You have three options. You can fake it, do it yourself, or use some kind of visual comparison tool.
Faking
verification is great. We don't mean that you just make up the answers;
we're talking about making some assumptions about the functionality of
your program and the specific functionality this automated test is
checking (once again, having a well-defined scope is critical). For
example, when we were writing automation for the spelling engine in Visio,
we wrote a test that typed some misspelled text into a shape: “teh ”.
This should get autocorrected to “the ”. But it's hard to
programmatically check whether “the ” was correctly rendered to the screen.
Instead, we asked the shape for the text inside it and just
did a string compare against our expected result.
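
To make the pattern concrete, here is a minimal sketch in Python. The FakeShape stand-in and the verify_autocorrect helper are illustrative names of ours, not Visio's API; the real test queried the shape's actual text through Visio's automation model after simulating the keystrokes. The point is the assumption being faked: if the object model reports the right string, we trust that it was rendered correctly.

```python
class FakeShape:
    """Stand-in for a Visio shape so this sketch runs without Visio.
    A real shape would hold whatever text autocorrect produced."""
    def __init__(self, text: str) -> None:
        self.Text = text

def verify_autocorrect(shape: FakeShape, typed: str, expected: str) -> None:
    # The "fake": instead of checking which pixels ended up on screen,
    # ask the shape for its text and do a plain string compare.
    actual = shape.Text
    assert actual == expected, (
        f"typed {typed!r}: expected {expected!r}, got {actual!r}"
    )

# After the test driver types "teh " into the shape, autocorrect should
# have rewritten it, so the shape now reports "the ".
verify_autocorrect(FakeShape("the "), typed="teh ", expected="the ")
```

The tradeoff is explicit: the test assumes the program renders whatever text the shape holds, which keeps the verification cheap and reliable but leaves rendering bugs for some other test to catch.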