Automated tests are better than manual tests for avoiding exhaustive testing.
Exhaustive testing is, with sufficient effort and tool support, feasible for all software
It is normally impossible to test all input / output combinations for a software system
The purpose of testing is to demonstrate the absence of defects
Determine whether enough component testing was executed.
Cause as many failures as possible so that faults can be identified and corrected.
Prove that all faults are identified.
Prove that any remaining faults will not cause any failures.
Setting or defining test objectives
Reviewing the test basis because the users need to execute fewer tests
Creating test suites from test procedures
Requirement engineers building software models (e.g. state transition diagrams) which do not match the requirements
Analyzing lessons learned for process improvement
The product crashed when the user selected an option in a dialog box
One source code file included in the build was the wrong version
The computation algorithm used the wrong input variables
The developer misinterpreted the requirement for the algorithm
Testers and reviewers are not curious enough to find defects
Testers and reviewers are not qualified enough to find failures and faults
Testers and reviewers communicate defects as criticism against persons and not against the software product
Testers and reviewers expect that defects in the software product have already been found and fixed by the developers
B and C are true; A and D are false.
A and D are true; B and C are false.
A and C are true; B and D are false.
C and D are true; A and B are false.
Testing pinpoints (identifies the source of) the defects. Debugging analyzes the faults and proposes prevention activities.
Dynamic testing shows failures caused by defects. Debugging finds, analyzes, and removes the causes of failures in the software.
Testing removes faults. Debugging identifies the causes of failures.
Dynamic testing prevents causes of failures. Debugging removes the failures.
The process of testing an integrated system to verify that it meets specified requirements
The process of testing to determine the compliance of a system to coding standards
Testing without reference to the internal structure of a system
Testing system attributes, such as usability, reliability or maintainability
To adapt the models to the context of project and product characteristics
To choose the waterfall model because it is the first and best proven model
To start with the V-model and then move to either iterative or incremental models
To only change the organization to fit the model and not vice versa
Acceptance testing is always the final test level to be applied.
All test levels are planned and completed for each developed feature.
Testers are involved as soon as the first piece of code can be executed.
For every development activity there is a corresponding testing activity.
Correction of defects during the development phase.
Planned enhancements to an existing operational system.
Complaints about system quality during user acceptance testing.
Integrating functions during the development of a new system.
A, C, D and E are true; B is false.
A, C and E are true; B and D are false.
C and D are true; A, B and E are false.
B and E are true; A, C and D are false.
Component testing verifies the functioning of software modules, program objects, and classes that are separately testable, whereas system testing verifies interfaces between components and interactions with different parts of the system.
Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases.
Component testing focuses on functional characteristics, whereas system testing focuses on functional and non-functional characteristics.
Component testing is the responsibility of the technical testers, whereas system testing typically is the responsibility of the users of the system.
Initiation, status, preparation, review meeting, rework, follow up.
Planning, preparation, review meeting, rework, closure, follow up.
Planning, kick off, individual preparation, review meeting, rework, follow up.
Preparation, review meeting, rework, closure, follow up, root cause analysis.
Informal review and Technical Review
Management review and Inspection
Inspection and Technical Review
Walkthrough and Inspection
Static analysis can be used as a preventive measure with appropriate process in place.
Static analysis can find defects that are not easily found by dynamic testing.
Static analysis can result in cost savings by finding defects early.
Static analysis is a good way to force failures into the software.
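Static analysis, as the options above describe, examines code without executing it. A minimal sketch of the idea in Python, using the standard `ast` module to flag a literal division by zero in a hypothetical snippet (both the source and the check are illustrative, not a real tool):

```python
import ast

# Hypothetical code under analysis: never executed, only parsed.
source = """
def ratio(x):
    return x / 0
"""

# Walk the syntax tree and flag any division whose right operand is
# the literal 0 -- a defect found without running the program.
findings = []
for node in ast.walk(ast.parse(source)):
    if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
            and isinstance(node.right, ast.Constant) and node.right.value == 0):
        findings.append(f"line {node.lineno}: division by zero")

print(findings)  # ['line 3: division by zero']
```

Because no test input ever needs to reach the faulty line, the defect is caught early, which is exactly the cost-saving argument made above.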
Decision D has not been tested completely.
100% decision coverage has been achieved.
Decision E has not been tested completely.
Decision F has not been tested completely.
A, B and D.
A and C.
A, B and C.
A, C and D.
The state table can be used to derive both valid and invalid transitions.
The state table represents all possible single transitions.
The state table represents only some of all possible single transitions.
The state table represents sequential pairs of transitions.
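The options above concern state tables, which pair every state with every event. A small sketch in Python, using a hypothetical ON/OFF switch, shows how both valid and invalid single transitions can be read off such a table:

```python
# Hypothetical state table for a simple ON/OFF switch.
# Keys are (state, event) pairs; a missing pair is an invalid transition.
states = ["OFF", "ON"]
events = ["press", "unplug"]
table = {
    ("OFF", "press"): "ON",
    ("ON", "press"): "OFF",
    ("ON", "unplug"): "OFF",
    # ("OFF", "unplug") is deliberately absent: an invalid transition.
}

# Enumerating every (state, event) pair yields all possible single
# transitions, so valid and invalid ones can both be derived.
valid = [(s, e) for s in states for e in events if (s, e) in table]
invalid = [(s, e) for s in states for e in events if (s, e) not in table]

print(valid)    # [('OFF', 'press'), ('ON', 'press'), ('ON', 'unplug')]
print(invalid)  # [('OFF', 'unplug')]
```

Test cases for the invalid pairs check that the system rejects (or safely ignores) events that should not be possible in a given state.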
A, B and E are true; C and D are false.
A, C and D are true; B and E are false.
A and E are true; B, C and D are false.
A and B are true; C, D and E are false.
Equivalence partitioning, decision tables, state transition and boundary value AND equivalence partitioning, decision tables, use case.
Equivalence partitioning, decision tables, checklist based, statement coverage, use case AND equivalence partitioning, decision tables, use case.
Equivalence partitioning, cause-effect graph, checklist based, decision coverage, use case AND equivalence partitioning, decision tables, use case.
Equivalence partitioning, cause-effect graph, checklist based, decision coverage and boundary value AND equivalence partitioning, decision tables, use case.
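Equivalence partitioning, named in every option above, splits the input domain into partitions whose members are expected to behave alike, so one representative value per partition suffices. A minimal sketch, assuming a hypothetical discount rule (the rule and the chosen representatives are illustrative):

```python
def discount(age):
    # Hypothetical business rule: children (<18) get 50%,
    # seniors (>=65) get 30%, everyone else gets 0%.
    if age < 18:
        return 50
    if age >= 65:
        return 30
    return 0

# Equivalence partitioning: one representative value per partition
# stands in for every other value in that partition.
partitions = {"child": 10, "adult": 40, "senior": 70}
results = {name: discount(age) for name, age in partitions.items()}
print(results)  # {'child': 50, 'adult': 0, 'senior': 30}
```

Three test cases cover the whole input domain here; testing more ages inside the same partition would not exercise any new behaviour.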
A and D are true; B and C are false.
A is true; B, C and D are false.
A and B are true; C and D are false.
C is true; A, B and D are false.
Experience, defect and failure data, knowledge about software failures.
Risk analysis performed at the beginning of the project.
Use Cases derived from the business flows by domain experts.
Expected results from comparison with an existing system.
Use case testing.
Boundary value analysis.
State transition testing.
Boundary value analysis
State transition testing
Decision table testing
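Boundary value analysis, one of the techniques listed above, tests at the edges of equivalence partitions, where off-by-one defects cluster. A minimal sketch, assuming a hypothetical validator that accepts integers from 1 to 100:

```python
def accepts(value):
    # Hypothetical validator: accepts integers in the range 1..100.
    return 1 <= value <= 100

# Two-value boundary value analysis: each boundary plus its nearest
# neighbour just outside the valid partition.
boundary_tests = [0, 1, 100, 101]
outcomes = [(v, accepts(v)) for v in boundary_tests]
print(outcomes)  # [(0, False), (1, True), (100, True), (101, False)]
```

A defect such as writing `1 < value` instead of `1 <= value` would be caught by the test at 1 but missed by a mid-range value like 50, which is why the boundaries are tested explicitly.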
Only A is true; B, C and D are false.
Only B is true; A, C and D are false.
A and D are true; B and C are false.
Only C is true; A, B and D are false.
The test manager plans testing activities and chooses the standards to be followed, while the tester chooses the tools and controls to be used.
The test manager plans, organizes and controls the testing activities, while the tester specifies, automates and executes tests.
The test manager plans, monitors and controls the testing activities, while the tester designs tests.
The test manager plans and organizes the testing and specifies the test cases, while the tester prioritizes and executes the tests.
Low quality of requirements, design, code and tests.
Political problems and delays in especially complex areas in the product.
Error-prone areas, potential harm to the user, poor product characteristics.
Problems in defining the right requirements, potential failure areas in the software or system.
Thoroughness measures, reliability measures, test cost, schedule, state of defect correction and residual risks.
Thoroughness measures, reliability measures, degree of tester independence and product completeness.
Thoroughness measures, reliability measures, test cost, time to market and product completeness, availability of testable code.
Time to market, residual defects, tester qualification, degree of tester independence, thoroughness measures and test cost.
R4 > R5 > R1 > R2 > R3 > R7 > R8 > R6 > R9.
R1 > R2 > R3 > R4 > R5 > R7 > R8 > R6 > R9.
R1 > R2 > R4 > R5 > R3 > R7 > R8 > R6 > R9.
R1 > R2 > R3 > R7 > R8 > R4 > R5 > R6 > R9.
More work gets done because testers do not disturb the developers all the time.
Independent testers tend to be unbiased and find different defects than the developers.
Independent testers do not need extra education and training.
Independent testers reduce the bottleneck in the incident management process.
Skill and staff shortages.
Poor software characteristics.
Failure-prone software delivered.
Possible reliability defect (bug).
The number of test cases using Black Box techniques
A summary of the major testing activities and events, and their status with respect to meeting goals
Overall evaluation of each development work item
Training taken by members of the test team to support the test effort
Impact, incident description, date and time, your name.
Unique id for the report, special requirements needed.
Transmitted items, your name and your feelings about the defect source.
Incident description, environment, expected results.
1, 2, 3, 5.
1, 4, 6, 7.
2, 3, 4, 7.
3, 4, 5, 6.
A table with test input data, action words and expected results controls the execution of the system under test.
Actions of testers recorded in a script that is rerun several times.
Actions of testers recorded in a script that is run with several sets of test input data.
The ability to log test results and compare them against the expected results, stored in a text file.
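The first option above describes keyword-driven testing, in which a table of action words, input data and expected results drives execution. A minimal sketch in Python, with every action word and function name hypothetical:

```python
# Hypothetical keyword implementations: each action word maps to a
# function that manipulates the state of the system under test.
def enter(state, value):
    state["value"] = value
    return state

def double(state, _):
    state["value"] *= 2
    return state

actions = {"enter": enter, "double": double}

# The test table: (action word, input data, expected value after step).
# Testers extend coverage by adding rows, not by writing new scripts.
table = [
    ("enter", 5, 5),
    ("double", None, 10),
]

state = {"value": 0}
for word, arg, expected in table:
    state = actions[word](state, arg)
    assert state["value"] == expected, (word, arg)
print("table executed:", state)  # table executed: {'value': 10}
```

The point of the technique is the separation of concerns: automation engineers maintain the keyword functions, while domain testers author the table.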
To evaluate how the tool fits with existing processes and practices.
To determine use, management, storage, and maintenance of the tool and test assets.
To assess whether the benefits will be achieved at reasonable cost.
To reduce the defect rate in the Pilot Project.
To build traceability between requirements, tests, and bugs.
To optimize the ability of tests to identify failures.
To resolve defects faster.
To automate selection of test cases for execution.