Following is a summary of the various tests that are performed on new and revised hardware and software. Testing is a critical part of the development of internal information systems as well as commercial software, yet it is often not given the attention it deserves.
Software vendors that provide critical infrastructure such as operating systems have managed to offload a great deal of testing onto thousands of their users, who, eager for improvements in the next version of the products they depend on, are more than willing to try out buggy software and report problems.
Following is a summary in alphabetical order of the types of testing that are performed.
Acceptance Test
The test performed by users of a new or changed system in order to approve the system and go live. See user acceptance test.
Active Test
Introducing test data and analyzing the results. Contrast with "passive test" (below).
Ad Hoc Test
Informal testing without a test case.
Age Test (aging)
Evaluating a system's ability to perform in the future. To perform these tests, the system clock and/or test data are set to a future date.
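As a sketch of aging, a date-dependent check can be exercised with the clock advanced to future dates; the `license_valid` function and its dates are hypothetical:

```python
import datetime

def license_valid(today, expiry):
    # Hypothetical date-dependent logic under test.
    return today <= expiry

expiry = datetime.date(2030, 1, 1)

# Aging: rerun the same check with "today" pushed into the future.
assert license_valid(datetime.date(2025, 6, 1), expiry)
assert not license_valid(datetime.date(2031, 1, 1), expiry)
```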
Alpha Test
The first testing of a product in the lab. Then comes beta testing. See alpha test.
Automated Test
Using software to test software. Automated tests may still require human intervention to monitor stages for analysis or errors.
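A minimal sketch of software testing software, using Python's standard unittest framework; the `add` function under test is hypothetical:

```python
import unittest

def add(a, b):
    # Hypothetical function under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite programmatically; a CI system would invoke this
# with no human at the keyboard.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```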
Beta Test
Testing by end users. Follows alpha testing. See beta test.
Black Box Test
Testing software based on output only without any knowledge of its internal code or logic. Contrast with "white box test" and "gray box test."
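A sketch of the black box approach: the test touches `sorted_unique` only through its inputs and observed outputs. The function name and its implementation here are hypothetical stand-ins:

```python
def sorted_unique(items):
    # The tester has no knowledge of this body; only the
    # input/output behavior below is checked.
    return sorted(set(items))

def test_black_box():
    # Observed behavior only: output is sorted and duplicate-free.
    assert sorted_unique([3, 1, 2, 3]) == [1, 2, 3]
    assert sorted_unique([]) == []

test_black_box()
```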
Dirty Test
Same as "negative test."
Environment Test
A test of new software that determines whether all transactions flow properly between input, output and storage devices. See environment test.
Functional Test
Testing functional requirements of software, such as menus and key commands. See functional test.
Fuzz Test
Testing software for bugs by feeding it randomly generated data. See fuzz testing.
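A minimal fuzzing sketch: random strings are thrown at a hypothetical `parse_age` function, and the only claim checked is that it never crashes and never returns an out-of-range value:

```python
import random
import string

def parse_age(text):
    # Hypothetical function under test: returns an int age or None.
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

# Fuzzing: feed randomly generated input and verify the function
# survives it with a sane result.
rng = random.Random(42)
for _ in range(1000):
    junk = "".join(rng.choice(string.printable)
                   for _ in range(rng.randint(0, 10)))
    result = parse_age(junk)
    assert result is None or 0 <= result <= 150
```

Real fuzzers also mutate valid inputs and track code coverage; this sketch shows only the core idea of random input generation.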
Gray Box Test
Testing software with some knowledge of its internal code or logic. Contrast with "white box test" and "black box test."
Negative Test
Using invalid input to test a program's error handling.
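A sketch of negative testing: deliberately invalid input should produce a clean, expected error rather than a crash or silent bad result. The `withdraw` function and the `assert_raises` helper are hypothetical:

```python
def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def assert_raises(exc_type, fn, *args):
    # Passes only if fn(*args) raises the expected exception.
    try:
        fn(*args)
    except exc_type:
        return
    raise AssertionError(f"{exc_type.__name__} not raised")

assert_raises(ValueError, withdraw, 100, -5)   # negative amount
assert_raises(ValueError, withdraw, 100, 500)  # overdraft
assert withdraw(100, 40) == 60                 # valid path still works
```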
Passive Test
Monitoring the results of a running system without introducing any special test data. Contrast with "active test" (above).
Recovery Test
Testing a system's ability to recover from a hardware or software failure.
Regression Test
Testing revised software to see whether previously working functions were impacted. See regression testing.
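A regression suite can be sketched as a table of input/expected pairs captured while the old version was known to be correct; a revision that breaks any of them fails immediately. The `slugify` function is a hypothetical example of "revised software":

```python
def slugify(title):
    # Hypothetical function that was just revised.
    return "-".join(title.lower().split())

# Behavior captured from the previously working version.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Already   spaced  ", "already-spaced"),
    ("single", "single"),
]

for text, expected in REGRESSION_CASES:
    assert slugify(text) == expected, f"regression on {text!r}"
```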
Smoke Test
Turn it on and see what happens. See smoke test.
System Test
Overall testing in the lab and in the user environment. See alpha test and beta test.
Test Case
A set of test data, test programs and expected results. See test case.
Test Scenario
A set of test cases. See test scenario.
Test Suite
A collection of test cases and/or test scenarios. See test suite.
Unit Test
A test of one component of the system. Contrast with "system test."
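A sketch of testing one component in isolation: the component's dependency is replaced with a stub so nothing else in the system is exercised. All names here are hypothetical:

```python
def price_with_tax(net, rate_lookup):
    # Component under test; rate_lookup is its only dependency.
    return round(net * (1 + rate_lookup()), 2)

def stub_rate():
    # Stub: the real lookup might query a database or service.
    return 0.20

assert price_with_tax(10.00, stub_rate) == 12.00
assert price_with_tax(0.0, stub_rate) == 0.0
```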
User Acceptance Test (UAT)
See "acceptance test" above.
White Box Test
Testing software with complete knowledge of its internal code and logic. Contrast with "black box test" and "gray box test."
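A sketch of the white box approach: because the tester can read `classify`'s internals, one input is chosen per internal branch so every path is exercised. The function is a hypothetical example:

```python
def classify(n):
    # The tester reads this body and targets each branch.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# One input per branch, for full branch coverage:
assert classify(-3) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```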