
White-box vs. black-box testing

Printed From: One Stop Testing
Category: Types Of Software Testing @ OneStopTesting
Forum Name: Manual Testing @ OneStopTesting
Forum Description: Discuss all that needs to be known about Manual Software Testing and its tools.
URL: http://forum.onestoptesting.com/forum_posts.asp?TID=67
Printed Date: 17Aug2024 at 12:29pm


Topic: White-box vs. black-box testing
Posted By: Riya
Subject: White-box vs. black-box testing
Date Posted: 15Feb2007 at 5:14pm



White-box vs. black-box testing

As I mentioned in my previous post, there's an ongoing discussion on the agile-testing mailing list on the merits of white-box vs. black-box testing. I had a lively exchange of opinions on this theme with Ron Jeffries. If you read my "Quick black-box testing example" post, you'll see the example of an application under test posted by Ron, as well as a list of black-box test activities and scenarios that I posted in reply. Ron questioned most of these black-box test scenarios, on the grounds that they provide little value to the overall testing process. In fact, I came away with the conclusion that Ron values black-box testing very little. He is of the opinion that white-box testing in the form of TDD is pretty much sufficient for the application to be rock-solid and as bug-free as any piece of software can hope to be.

I never had the chance to work on an agile team, so I can't really tell whether Ron's assertion is true. But my personal opinion is that there is no way developers doing TDD can catch several classes of bugs that lie outside their code-only realm. I'm thinking most of all about the various quality criteria categories, also known as the 'ilities', popularized by James Bach. Here are some of them: usability, installability, compatibility, supportability, maintainability, portability, localizability. All of these are qualities that are very hard to test in a white-box way, because they all involve interactions with the operating system, the hardware, and the other applications running on the machine hosting the AUT (application under test). To this list I would add performance/stress/load testing, security testing, and error-recoverability testing. I don't see how you can properly test all these things unless you do black-box testing in addition to white-box testing.
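To give one concrete (and simplified) illustration of what such a black-box test could look like, here is a sketch that drives a hypothetical AUT purely through its command-line interface to check error recoverability, with no knowledge of its internals. The command name "aut-import" and the expected behaviour are assumptions made up for this example, not something from the mailing-list discussion.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.TimeUnit;

// Sketch of a black-box error-recoverability check: the AUT is exercised only
// through its external command-line interface, never through its internals.
public class RecoverabilityCheck {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hand the AUT a deliberately corrupt input file.
        Path corrupt = Files.createTempFile("corrupt", ".dat");
        Files.write(corrupt, new byte[] {0x00, (byte) 0xFF, 0x13, 0x37});

        // "aut-import" is a made-up command standing in for the real AUT.
        Process p = new ProcessBuilder("aut-import", corrupt.toString())
                .redirectErrorStream(true)   // merge stderr into stdout
                .start();

        // Black-box expectations: no hang, a controlled exit, and a readable
        // error message rather than a raw stack trace.
        boolean finished = p.waitFor(30, TimeUnit.SECONDS);
        if (!finished) {
            p.destroyForcibly();             // hanging on bad input is a bug
        }
        String output = new String(p.getInputStream().readAllBytes());

        System.out.println("finished in time:   " + finished);
        if (finished) {
            System.out.println("exit code:          " + p.exitValue());
        }
        System.out.println("stack trace leaked: " + output.contains("Exception"));
    }
}

None of this is visible from inside the unit tests that drove the code's design; it only shows up when you treat the whole installed application as a black box.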

In fact, there's an important psychological distinction between developers doing TDD and 'traditional' testers doing mostly black-box testing. A developer thinks, "This is my code. It works fine. In fact, I'm really proud of it," while a tester is more likely to think, "This code has some really nasty bugs. Let me discover them before our customer does." These two approaches are complementary: you can't pursue just one at the expense of the other, or your overall code quality will suffer. You need to build code with pride before you try to break it in various devious ways.

Here's one more argument from Ron as to why white-box testing is more valuable than black-box testing:

To try to simplify: the search method in question has been augmented with an integer "hint" that is used to say where in the large table we should start our search. The idea is that by giving a hint, it might speed up the search, but the search must always work even if the hint is bad.

The question I was asking was how we would test the hinting aspect.

I expect questions to arise such as those Michael Bolton would suggest, including perhaps:

What if the hint is negative?
What if the hint is after the match?
What if the hint is bigger than the size of the table?
What if integers are actually made of cheese?
What if there are more records in the table than a 32-bit int?

Then, I propose to display the code, which will include, at the front, some lines like this:

if (hint < 1) hint = 0;
if (hint > table.size) hint = 0;

Then, I propose to point out that if we know that code is there, there are a couple of tests we can save. Therefore white-box testing can help make testing more efficient, QED.
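To make Ron's example concrete, here is a minimal sketch (my own illustration, not Ron's actual code) of how such a hinted search and its guard clauses might look, together with black-box checks for the hint questions listed above. The class name HintedSearch, the find method, and the sorted integer table are assumptions made up for this example.

import java.util.Arrays;

// Minimal sketch of a table search that accepts a positional "hint".
public class HintedSearch {

    private final int[] table;   // the "large table"; kept sorted for this sketch

    public HintedSearch(int[] values) {
        this.table = values.clone();
        Arrays.sort(this.table);
    }

    // Returns the index of 'key', or -1 if it is absent. 'hint' says where
    // to start looking; a bad hint must never make the search wrong.
    public int find(int key, int hint) {
        // Guard clauses in the spirit of Ron's example: out-of-range hints
        // are normalized to the start of the table.
        if (hint < 1) hint = 0;
        if (hint > table.length) hint = 0;

        // First pass: from the hint to the end of the table.
        for (int i = hint; i < table.length; i++) {
            if (table[i] == key) return i;
        }
        // Second pass: everything before the hint, so a hint placed after
        // the match still finds it.
        for (int i = 0; i < hint; i++) {
            if (table[i] == key) return i;
        }
        return -1;
    }

    private static void check(boolean ok, String scenario) {
        System.out.println((ok ? "PASS: " : "FAIL: ") + scenario);
    }

    public static void main(String[] args) {
        HintedSearch s = new HintedSearch(new int[] {2, 4, 6, 8, 10});
        check(s.find(6, -5) == 2, "negative hint");
        check(s.find(2, 4) == 0,  "hint after the match");
        check(s.find(8, 99) == 3, "hint bigger than the table");
        check(s.find(7, 1) == -1, "absent key is still reported absent");
    }
}

Ron's point carries over to this sketch: once you have seen the two guard clauses, the negative-hint and oversized-hint scenarios collapse into a single representative test, whereas a purely black-box tester would have to run them all.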



