Tips for test planning

A frequent question from students: "How many test cases should we write?" Answer: "As many as you want." (Huh?)

There is no need to write a single test case if you are sure that your system works perfectly. Coming back to reality, it is not unusual to have more testing code (i.e., test cases) than functional code. On the other hand, the number of test cases by itself does not guarantee a good product. It is the quality of the test cases that matters.

More importantly, you should adjust your level of testing based on ...

The type of product you are building: If you are building a proof-of-concept prototype, concentrate testing functions related to the concept you are trying to prove. If you are building a product that will be used by a client, you need much more testing, including testing for invalid inputs and improper use.
How it will be evaluated: If the product is evaluated based on a demo and you will be in control of the demo, test the functionality you will use during the demo. If you may not be in control of the demo (that is, the evaluator will direct the flow of the demo), also test alternate paths of the use cases you will demo.
How much the product quality will affect the grade: Which aspects of the product you should test (performance, accuracy, error handling, usability, etc.), and how much to test each, depends on what % of your grade rides on them. Even if concrete % figures are not given, try to find out which aspects the evaluator values most.
Testing, coupled with subsequent debugging and bug fixing, will take the biggest bite out of your project time. However, if you check your project plan right now, you will probably realize you gave testing a much smaller share of resources than development. Most student teams underestimate the testing effort.

When correctness is essential, at least ~25-35% of the schedule should be allocated to system-level testing. That is not counting developer testing.

Another good rule of thumb is that unit test code should be roughly the same size as the production code [UML Distilled].

"We test everything" is not a test plan. "We test from 13th to 19th" is not detailed enough. A test plan is not a list of test cases either.

A test plan includes what will be tested in what order by whom, when will you start/finish testing, and what techniques and tools will be used.

Testing is so important that you should put someone in charge of it even if you are not following the guru concept. However, this does not mean only that person will do testing.

If you have to move 100 bricks from point A to B within 10 hours, which method do you prefer: carry all 100 bricks and run from A to B during the last 10 minutes, or walk 10 times from A to B carrying 10 bricks at a time? If you prefer the latter, insist that the team follows an iterative development process; it will make the testing feel like walking with 10 bricks rather than running with 100 bricks.

Code Complete (page 504) says "... test-first programming is one of the most beneficial software practices to emerge during the past decade and is a good general approach". Test-Driven Development (TDD) advocates writing test cases before writing the code. While TDD has its share of detractors, it is considered an exciting way to develop code, and its benefits outweigh the drawbacks (if any). It is certainly suitable for student projects. It might feel a bit counter-intuitive at first, but it feels quite natural once you get used to it.
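To make the test-first order concrete, here is a minimal sketch in Python. The function name deduct() and its behavior are hypothetical, invented purely for illustration; the point is only the order: the test is written (and fails) before the code it tests exists.

```python
# Hypothetical TDD example: this test function is written FIRST,
# before deduct() exists. Running it at that point fails ("red").
def test_deduct():
    assert deduct(100, 30) == 70           # normal case
    try:
        deduct(10, 30)                     # overdraft must be rejected
        assert False, "expected ValueError"
    except ValueError:
        pass

# Only AFTER the test exists do we write just enough code to pass ("green").
def deduct(balance, amount):
    if amount > balance:
        raise ValueError("insufficient balance")
    return balance - amount

test_deduct()  # now passes; next you refactor, with the test as a safety net
```

Defining the test above the function works because Python resolves the name deduct only when the test runs; in practice the test would simply fail with a NameError until the function is written.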

Decide what testing policies you are going to apply for your project (e.g., how to name test classes, the level of testing expected from each member, etc.). This can be done by the testing guru.

Here are some reasonable test policies you could adopt (examples only):

For large modules, module users should write black-box tests in TDD fashion. For smaller modules, the module developer should write white-box tests (TDD optional).
Standalone utilities likely to be reused should be tested more than one-time-use modules. We use assertions to protect them against misuse.
An API exposed to other team members should be tested more thoroughly than one written for our own use.
When unit-testing a higher-level component H that depends on a lower-level component L: if L is "trusted" (i.e., already unit-tested), we do not replace L with a stub when unit testing H. However, if H is a highly critical component, we insist on unit-testing it in 100% isolation from L.

Supervisors often get the question "Isn't it enough just to do system testing - if the whole works, then the parts must surely work, right?"

Bugs start like small fish in a small pool (pool = the module in which the bug resides). You should catch them with the little nets of unit testing. If you let them loose in the ocean (i.e., integrate the buggy module into the system), they multiply and hide, and become very hard to catch.
If you catch your own bug using unit testing, you can exterminate it without anybody else knowing. If you wait until system testing, the bug will be counted against your name - oh, the shame!
In general, the later you encounter a bug, the costlier it will be to find/fix.

Cross-testing means you let a teammate test a module you developed. This does not mean you do not test it yourself; cross-testing is done in addition to your own testing. Cross-testing is additional work, delays the project, and is against the spirit of "being responsible for the quality of your own work". You should use it only when there is a question of "low quality work" by a team member or when the module in question is a critical component. Any bug found during cross-testing should go on the record, and should be counted against the author of the code.

Everyone must unit-test their own code, and do a share of other types of testing as well (i.e., integration/system testing). If your course allows choosing dedicated testers, choose someone competent. You may or may not choose your best resource as the tester. However, testing is too important to entrust to the weakest member of your team.