Black-box testing
Black-box testing, sometimes referred to as specification-based testing,[1] is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of testing can be applied to virtually every level of software testing: unit, integration, system and acceptance. It is also used as a method in penetration testing, where an ethical hacker simulates an external hacking or cyber warfare attack with no knowledge of the system being attacked.
Test procedures
Specification-based testing aims to test the functionality of software according to the applicable requirements.[2] This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case.
Specific knowledge of the application's code or internal structure, and programming knowledge in general, is not required.[3] The tester is aware of what the software is supposed to do but not of how it does it. For instance, the tester knows that a particular input returns a certain, invariable output but does not know how the software produces that output in the first place.[4]
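For illustration, a minimal sketch of such a test in Python (pytest style), assuming a hypothetical `calculate_shipping` function whose specification says that orders of 100.00 or more ship free and all other orders pay a flat 5.00; the module and function names are invented for the example, and the tester checks only specified inputs against specified outputs, never the implementation:

```python
# Minimal sketch of a specification-based (black-box) test for pytest.
# `calculate_shipping` is a hypothetical function under test; the tester
# knows only its specification, not how the result is computed.
from shop import calculate_shipping  # assumed module; internals unknown to the tester


def test_order_below_threshold_pays_flat_fee():
    assert calculate_shipping(99.99) == 5.00   # specified: input 99.99 -> output 5.00


def test_order_at_threshold_ships_free():
    assert calculate_shipping(100.00) == 0.00  # specified: input 100.00 -> output 0.00
```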
Test cases
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements and design parameters. Although the tests used are primarily functional in nature, non-functional tests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of a test oracle or a previous result that is known to be good, without any knowledge of the test object's internal structure.
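As a sketch of this idea (the names are illustrative, not from the article), the following pytest cases assume a hypothetical `parse_date` function and use a small table of previously verified results as a test oracle, plus one deliberately invalid input chosen by the test designer:

```python
# Black-box test cases derived from requirements, using a simple test oracle:
# a table of previously verified input/output pairs plus one invalid input.
# `parse_date` is hypothetical; only its specification ("accepts ISO 8601
# dates, rejects anything else") is assumed here.
import pytest

from dates import parse_date  # assumed module under test

# Oracle: known-good results recorded from an earlier, trusted run or from the spec.
KNOWN_GOOD = {
    "2024-01-31": (2024, 1, 31),
    "1999-12-01": (1999, 12, 1),
}


@pytest.mark.parametrize("text,expected", list(KNOWN_GOOD.items()))
def test_valid_inputs_match_oracle(text, expected):
    assert parse_date(text) == expected


def test_invalid_input_is_rejected():
    # Invalid input selected by the test designer; the spec requires an error.
    with pytest.raises(ValueError):
        parse_date("31/01/2024")
```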
Test design techniques
Typical black-box test design techniques include decision table testing, all-pairs testing, equivalence partitioning, boundary value analysis, cause–effect graph, error guessing, state transition testing, use case testing, user story testing, domain analysis, and syntax testing.[5][6]
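As a brief, hypothetical illustration of two of these techniques, the pytest sketch below applies equivalence partitioning and boundary value analysis to an assumed `grade(score)` function specified as returning "fail" for 0–49, "pass" for 50–100, and raising an error for anything else:

```python
# Equivalence partitioning and boundary value analysis for a hypothetical
# `grade(score)` function. The module, function, and specification are
# illustrative assumptions, not taken from the article.
import pytest

from grading import grade  # assumed module under test


# Equivalence partitions: one representative value per valid partition.
@pytest.mark.parametrize("score,expected", [
    (25, "fail"),   # valid partition: 0-49
    (75, "pass"),   # valid partition: 50-100
])
def test_equivalence_partitions(score, expected):
    assert grade(score) == expected


# Boundary values: the edges of each partition, where defects tend to cluster.
@pytest.mark.parametrize("score,expected", [
    (0, "fail"), (49, "fail"),     # boundaries of the "fail" partition
    (50, "pass"), (100, "pass"),   # boundaries of the "pass" partition
])
def test_boundary_values(score, expected):
    assert grade(score) == expected


# Invalid partitions: values outside the specified range must be rejected.
@pytest.mark.parametrize("score", [-1, 101])
def test_invalid_partition_is_rejected(score):
    with pytest.raises(ValueError):
        grade(score)
```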
Test coverage
Test coverage refers to the percentage of software requirements that are tested by black-box testing for a system or application.[7] This is in contrast with code coverage, which examines the inner workings of a program and measures the degree to which the source code of a program is executed when a test suite is run.[8] Measuring test coverage makes it possible to quickly detect and eliminate defects, to create a more comprehensive test suite, and to remove tests that are not relevant for the given requirements.[8][9]
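As a minimal sketch of the requirements-based calculation (the requirement IDs and the mapping are invented for the example), coverage can be computed from a traceability mapping of requirements to the test cases that exercise them:

```python
# Requirements-based test coverage: the fraction of requirements that have
# at least one black-box test mapped to them. All IDs are illustrative.

# Traceability mapping: requirement ID -> test case IDs that exercise it.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],          # no test exercises this requirement yet
}

covered = sum(1 for tests in traceability.values() if tests)
coverage = 100.0 * covered / len(traceability)
print(f"Requirements coverage: {coverage:.0f}%")  # prints: Requirements coverage: 67%
```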
Effectiveness
Black-box testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[10] An advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[11] Because testers do not examine the source code, there are situations in which a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.
References
1. Jerry Gao; H.-S. J. Tsao; Ye Wu (2003). Testing and Quality Assurance for Component-based Software. Artech House. pp. 170–. ISBN 978-1-58053-735-3.
2. Laycock, Gilbert T. (1993). The Theory and Practice of Specification Based Software Testing (PDF) (dissertation thesis). Department of Computer Science, University of Sheffield. Retrieved January 2, 2018.
3. Milind G. Limaye (2009). Software Testing. Tata McGraw-Hill Education. p. 216. ISBN 978-0-07-013990-9.
4. Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN 978-0672327988.
5. Forgács, István; Kovács, Attila (2019). Practical Test Design: Selection of Traditional and Automated Test Design Techniques. ISBN 978-1780174723.
6. Black, R. (2011). Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional. John Wiley & Sons. pp. 44–46. ISBN 978-1-118-07938-6.
7. IEEE Standard Glossary of Software Engineering Terminology (Technical report). IEEE. 1990. 610.12-1990.
8. "Code Coverage vs Test Coverage". BrowserStack. Retrieved 2024-04-13.
9. Andrades, Geosley (2023-12-16). "Top 8 Test Coverage Techniques in Software Testing". ACCELQ Inc. Retrieved 2024-04-13.
10. Bach, James (June 1999). "Risk and Requirements-Based Testing" (PDF). Computer. 32 (6): 113–114. Retrieved August 19, 2008.
11. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7.
External links
- BCS SIGIST (British Computer Society Specialist Interest Group in Software Testing): Standard for Software Component Testing, Working Draft 3.4, 27 April 2001.