Each test stage has a different purpose.
It is easier to manage testing in stages.
We can run different tests in different environments.
The more stages we have, the better the testing.
User acceptance testing
A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
A minimal test set that achieves 100% statement coverage will generally detect more faults than one that achieves 100% branch coverage.
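The subsumption claims above can be illustrated with a minimal sketch (the function and inputs are hypothetical): a single test can execute every statement of a function yet still miss a branch, which is why branch (and path) coverage subsumes statement coverage but not the reverse.

```python
def abs_diff(a, b):
    """Return the absolute difference of a and b."""
    result = a - b
    if result < 0:        # branch point: condition may be True or False
        result = -result  # only reached on the True branch
    return result

# One test executes every statement (100% statement coverage),
# because the if-body is entered:
assert abs_diff(1, 5) == 4

# But the False outcome of the condition was never exercised.
# Branch coverage additionally requires a test where it is False:
assert abs_diff(5, 1) == 4

# A test set achieving 100% path coverage must traverse every feasible
# path, so it necessarily executes every statement as well.
```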
The system shall be user friendly.
The safety-critical parts of the system shall contain 0 faults.
The response time shall be less than one second for the specified design load.
The system shall be built to be portable.
Supplements formal test design techniques.
Can only be used in component, integration and system testing.
Is only performed in user acceptance testing.
Is not repeatable and should not be used.
Test coverage criteria can be measured in terms of items exercised by a test suite.
A measure of test coverage criteria is the percentage of user requirements covered.
A measure of test coverage criteria is the percentage of faults found.
Test coverage criteria are often used when specifying test completion criteria.
Find as many faults as possible.
Test high risk areas.
Obtain good test coverage.
Test whatever is easiest to test.
System tests are often performed by independent teams.
Functional testing is used more than structural testing.
Faults found during system tests can be very expensive to fix.
End-users should be involved in system tests.
Incidents should always be fixed.
An incident occurs when expected and actual results differ.
Incidents can be analyzed to assist in test process improvement.
An incident can be raised against documentation.
Time runs out.
The required level of confidence has been achieved.
No more faults are found.
The users won’t find any serious faults.
Incident resolution is the responsibility of the author of the software under test.
Incidents may be raised against user requirements.
Incidents require investigation and/or correction.
Incidents are raised when expected and actual results differ.
Modified condition/decision coverage
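As a hedged sketch of what modified condition/decision coverage demands (the decision shown is hypothetical): each condition must be shown to independently affect the decision's outcome, which for a decision with n conditions needs as few as n + 1 tests.

```python
def grant_access(authenticated, authorized):
    """Decision with two conditions: `authenticated and authorized`."""
    return authenticated and authorized

# Minimal MC/DC test set: three tests for two conditions.
assert grant_access(True, True) is True    # baseline: decision is True
assert grant_access(False, True) is False  # flipping `authenticated` alone flips the outcome
assert grant_access(True, False) is False  # flipping `authorized` alone flips the outcome
```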
In a system two different failures may have different severities.
A system is necessarily more reliable after a fault has been removed by debugging.
A fault need not affect the reliability of a system.
Undetected errors may lead to faults and eventually to incorrect behavior.
They are used to support multi-user testing.
They are used to capture and animate user requirements.
They are the most frequently purchased types of CAST tool.
They capture aspects of user behavior.
Metrics from previous similar projects
Discussions with the development team
Time allocated for regression testing
A & B
It states that modules are tested against user requirements.
It only models the testing phase.
It specifies the test techniques to be used.
It includes the verification of designs.
Is that there is some existing system against which test output may be checked.
Is that the tester can routinely identify the correct outcome of a test.
Is that the tester knows everything about the software under test.
Is that the tests are reviewed by experienced testers.
They are cheapest to find in the early development phases and the most expensive to fix in the latest test phases.
They are easiest to find during system testing but the most expensive to fix then.
Faults are cheapest to find in the early development phases but the most expensive to fix then.
Although faults are most expensive to find during early development phases, they are cheapest to fix then.
To find faults in the software.
To assess whether the software is ready for release.
To demonstrate that the software doesn’t work.
To prove that the software is correct.
Boundary value analysis
Features to be tested
Data flow testing
State transition testing
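Of the techniques listed above, boundary value analysis is the easiest to sketch in code. A minimal illustration (the validator and its valid range are assumed for the example): test inputs are chosen at and immediately adjacent to the edges of an equivalence partition, where faults cluster.

```python
def accept_age(age):
    """Accept ages in the (assumed) valid partition 18..65 inclusive."""
    return 18 <= age <= 65

# Boundary values for the partition [18, 65]: each boundary plus its
# nearest invalid neighbour.
for age, expected in [(17, False), (18, True),    # lower boundary
                      (65, True), (66, False)]:   # upper boundary
    assert accept_age(age) is expected
```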
Possible communications bottlenecks in a program.
The rate of change of data values as a program executes.
The use of data on paths through the code.
The intrinsic complexity of the code.
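Data flow testing examines the use of data on paths through the code: definitions and uses of each variable are traced along feasible paths. A minimal hypothetical example of the kind of anomaly it can reveal:

```python
def summarize(values):
    """Sum a list; return None for an empty list."""
    total = 0            # definition of `total`
    if not values:
        total = None     # redefinition before any use: a define-define anomaly
        return total
    for v in values:
        total += v       # use followed by redefinition on each iteration
    return total
```

On the empty-list path, the first definition of `total` is never used before being overwritten, which a data flow analysis would flag even though the code behaves correctly.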
Enable the code to be tested before the execution environment is ready.
Can be performed by the person who wrote the code.
Can be performed by inexperienced staff.
Are cheap to perform.
An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
An inspection has a trained leader, whilst a walkthrough has no leader.
Authors are not present during inspections, whilst they are during walkthroughs.
A walkthrough is led by the author, whilst an inspection is led by a trained moderator.
It allows the identification of changes in user requirements.
It facilitates timely set up of the test environment.
It reduces defect multiplication.
It allows testers to become involved early in the project.
Tests the individual components that have been developed.
Tests interactions between modules or subsystems.
Only uses components that form part of the live system.
Tests interfaces to other systems.
The analysis of batch programs.
The reviewing of test plans.
The analysis of program code.
The use of black box testing.
Post-release testing by end user representatives at the developer’s site.
The first testing that is performed.
Pre-release testing by end user representatives at the developer’s site.
Pre-release testing by end user representatives at their sites.
Found in the software; the result of an error.
Departure from specified behavior.
An incorrect step, process or data definition in a computer program.
A human action that produces an incorrect result.
£4800; £14000; £28000
£5200; £5500; £28000
£28001; £32000; £35000
£5800; £28000; £32000
Makes test preparation easier.
Means inspections are not required.
Can prevent fault multiplication.
Will find all faults.
Reviews cannot be performed on user requirements specifications.
Reviews are the least effective way of testing code.
Reviews are unlikely to find faults in test plans.
Reviews should be performed on specifications, code, and test plans.
Linkage of customer requirements to version numbers.
Facilities to compare test results with expected results.
The precise differences in versions of software component source code.
Restricted access to the source code library.