Quality assurance isn't a step in the development cycle

by Gene Michael Stover

created Sunday, 2007-09-09 T 19:00:00Z
updated Sunday, 2011-01-09 T 20:00:00Z



Testing isn't a one-pass operation

Most software developers know that "How long will it take to test this?" is the wrong question to ask because it implies that we do the development, then hand it to the QA department to verify that the product is correct, & then we're done.

If we're so sure that we did the development correctly, so sure that testing will reveal at most a few problems that require a trivial amount of time to fix, then why test it at all? Anyone in the software industry knows that all development efforts come with tons & tons of bugs; that's why we test.

Testing is not a one-pass operation. It's not even an operation that requires a few passes. It requires many passes [1], & they often aren't distinct. It's a continuous & iterative process.

Testing is a dialog

How testing actually works is that the programmers hand the product to the testers, who give it a workout & report their findings. This back-&-forth process repeats until the product is shippable.

Testing isn't a single pass; it's a dialog. [2] It's a conversation between the programmers & the testers. The programmer is asking for feedback about the product. Another way to view it is that, with each build, the programmer is proposing a solution that fills the requirements, & the testers are critiquing that solution, as if the two groups were engaged in debate.

Notice that I'm not saying it should be this way. I'm saying that it is this way. It always works this way.

A better way to account for testing

Given that testing is a dialog, here's a better way to account for testing: Deliver the product to the QA department as soon as possible. Almost right away.

When is it ready for the testers?

The proper time to get your software to your QA department is as soon as you have a program that tries to run. It should be a null program: one that does nothing other than start up & shut down.

Your programmers should create a null app as soon as possible because it's the simplest program that tries to do something, so your QA department can work on it. Then the programmers should make sure the QA department gets a new build drop at frequent, reasonable intervals. A build drop might happen, for example, whenever a feature is completed, whenever an important bug is fixed, or simply on a fixed schedule such as a nightly build.
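What does a null program look like? Here's a minimal sketch. Python & the file name null_app.py are choices I've made for illustration, nothing more; your language & build system will differ:

    #!/usr/bin/env python3
    # null_app.py -- a null program: it starts up, does nothing, &
    # shuts down.  Even this much is testable: does the build
    # install?  Does it launch?  Does it exit cleanly?

    import sys

    def main() -> int:
        # No features yet.  The point is only to have a build the
        # QA department can receive, install, & run.
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Trivial as it is, a build drop of this program already exercises your build, packaging, & installation paths.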

Exercise for the reader: Consider how this proposal resembles or differs from Test-Driven Development.

Example: stand-alone GUI program

For example, assume you are developing a traditional, stand-alone, GUI program, such as a word processor that runs on Windows.

In this case, your programmers will first create an empty application, one that just has a main window & allows the user to click a close box.

Send that to your testers. Yep, I've left you gasping in disbelief, but I'm deadly serious. Send your do-nothing, empty, piece of crap, haven't-even-typed-any-code (if you used an IDE & a wizard), worthless, empty program to your testers, & keep sending them builds regularly.
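If "empty application" sounds abstract, here's roughly what I mean. It's sketched in Python with Tkinter so it fits in a few lines; a real Windows word processor would start from whatever your IDE & its wizard generate:

    # null_gui.py -- an empty GUI application: a main window & a
    # working close box, nothing else.

    import tkinter as tk

    def main():
        root = tk.Tk()
        root.title("Untitled - Word Processor")  # placeholder title
        root.geometry("640x480")
        # The close box already works because Tk provides it by
        # default; that is the entire feature set of this build.
        root.mainloop()

    if __name__ == "__main__":
        main()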

Example: web service

If you are creating a web service or RPC server, give it a null function that just returns some unique value (like 29383 or something -- make it an integer or a string to be simple), & let your QA department write client programs that connect to it.
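Here's a sketch of such a null service, using only Python's standard library; the port & the URL layout are arbitrary choices:

    # null_service.py -- a web service with one null function that
    # returns a unique value, so QA can start writing client
    # programs.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class NullHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"29383"  # the unique value
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Port 8080 is an arbitrary choice.
        HTTPServer(("", 8080), NullHandler).serve_forever()

A tester's first client might be nothing more than a few lines that fetch http://localhost:8080/ & assert that the body is 29383, & even that little client forces QA to solve connectivity & environment problems early.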

Example: application you don't know how to build

Let's assume you want to create a program that you don't know how to create. [6] Maybe it relies on a technology you haven't used or a trick that you know can work in principle but you aren't sure how to make work in practice.

Development projects like that require the herculean hacking effort of one, two, or three programmers. On this type of project, that's where half the effort goes.

In these cases, it's not as important to get a null program to your QA department. It can wait until you have something that tries to do what you want, but that isn't a license to delay build drops for QA. It means your hackers can work until they think they've figured out a small part of the primary technique or trick they are researching. At that point, they should deliver something to the QA department. It might not be a true null app, but it needn't do anything other than maybe link with the special library.
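For example, the first deliverable might be nothing more than a probe that proves the unfamiliar library loads at all. In this sketch, speciallib is a stand-in name for whatever technology your project actually depends on:

    # probe.py -- not even a true null app; it just proves the
    # unfamiliar library loads at all.

    import sys

    def main() -> int:
        try:
            import speciallib  # stand-in for the real dependency
        except ImportError as err:
            print("cannot load library:", err, file=sys.stderr)
            return 1
        print("library loaded:",
              getattr(speciallib, "__version__", "unknown version"))
        return 0

    if __name__ == "__main__":
        sys.exit(main())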

So yes, if you have a project that requires research or herculean hacking, you needn't deliver a null app before doing the research. However, you should still deliver something to your QA department as soon as possible, & definitely before finishing even a single feature.

Specific benefits & problems this solves

Use all of your resources more of the time

Because testing is a conversation in which your QA department provides valuable feedback to your programmers, the earlier you involve your QA department, the more benefit you get from them. If you make them wait until (someone believes that) the product is ready for test, you are wasting your QA department.

Yes, while they wait they could be writing test plans & test scaffolding, or creating simulators & emulators. But with early build drops, they could be doing all of that & testing the actual product, with something like the smoke test sketched below. (They'll still have to write the test plans & the scaffolding either way.)
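Here's the sort of smoke test QA can write against a null app; it assumes the build drop contains a null_app.py like the one sketched earlier:

    # test_smoke.py -- the sort of scaffolding QA can build against
    # a null app.

    import subprocess
    import sys
    import unittest

    class SmokeTest(unittest.TestCase):
        def test_starts_and_exits_cleanly(self):
            # The entire contract of a null app: it runs & it
            # exits with status 0.
            result = subprocess.run([sys.executable, "null_app.py"],
                                    timeout=30)
            self.assertEqual(result.returncode, 0)

    if __name__ == "__main__":
        unittest.main()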

Basically, you are wasting your resources when you make your QA department wait to do their job.

When to schedule a bug fix

Let's say that you use scrum, you assign a feature to a programmer in iteration N, & then you give the build containing that feature to your QA department in iteration N+1. A tester tries the feature & finds a bug that's so bad he can't continue testing until it's fixed.

Now we have a problem. The tester is blocked until the bug is fixed, so he wants it fixed soon, but the programmer is already working on other scheduled work, so she'll be randomized if the tester brings the bug to her attention & asks her to fix it. And whichever action you take, you're injecting unplanned work into the scrum iteration, thereby increasing risk. [4]

On the other hand, if your programmers have been sending build drops to the QA department regularly & almost from the beginning, then the QA department is testing changes that result from work on the current scrum iteration. Of course it's appropriate to schedule the bug fixes immediately because they apply to work that is officially in progress right now! The "do I pester the programmer now" problem is solved.

Integration problems are up-front

Just plain getting a program into its test environment can be a huge pain. [5] These integration problems can ruin a schedule.

The sooner the QA department can begin testing, even if it's a null app, the sooner we can find these problems. Integration into the test environment may still be costly & risky, but at least we'll learn about it earlier instead of a week before the product is supposed to release.

Which brings me to...

Amortize costs

By beginning the testing early, you can begin the bug reports early, & with that, you can begin the bug fixing early. All three processes can be amortized over your project instead of being wedged into an unrealistic "two weeks" at the end, when you are scrambling to finish.

History of this idea (for me)

I've always noticed how development initially requires herculean, undesigned, unpredictable hacking effort on the part of one, two, or three programmers to get something that tries to run. It won't fill the requirements, & if you look at it from the point of view of your ultimate users, you'll say it sucks. But it's something that tries to run; people say "You can tell it's supposed to be a ...". Once you get it into that state, it's all downhill because you can run it, determine differences between the actual behaviour & the desired behaviour, & then change the code. You can show it to other people, & you all can point at it while you discuss it. Repeat until done.

Your testers are (among other things) professionals who observe & record those aforementioned differences. No one except the herculean hackers can use the software until they get it to the point where it tries to run, but as soon as they do, you should get it to your testers.

Notes

[1] It requires many passes, & I sure can't estimate how many. Lots. If you can accurately estimate how many passes (& I don't doubt that someone, somewhere can), you're a bad-ass.

[2] I hate this "dialog" analogy, but it's the best I have.

[3] In fact, moving to a test phase because the schedule says so is such a bad idea that anyone believing it shouldn't be allowed by law to create software.

[4] You can try to leave some room for unplanned work like this in your scrum iteration, but in my opinion, that's sloppy & just plain never works.

[5] I've heard these called integration costs & problems.

[6] It happens to me all the time.

End.