Testing is usually applied to different types of targets in different stages or levels of work effort. These levels are typically distinguished by which roles are best skilled to design and conduct the tests, and by which techniques are most appropriate for testing at each level. It is important to retain a balance of focus across these different work efforts.
Developer Testing denotes the aspects of test design and implementation that are most appropriately done by the team of developers that designed and implemented the software. It stands in contrast with Independent Testing. In most cases, test execution will occur initially within the developer testing group that designed and implemented the test, but developers should create their tests in such a way that they are also available to the independent testing groups for execution.
Traditionally, developer testing has been thought of mainly in terms of unit testing, with varying levels of focus on integration testing (dependent largely on culture and other context issues) and with less focus on other aspects of testing. Following this traditional approach presents risks to software quality: important testing concerns that arise at the boundary of these distinctions are often ignored by the different groups assigned to focus on each “level”.
A better approach is to divide the work effort so that there is some planned overlap, with the exact nature of that overlap based on the needs of the individual project. We recommend fostering an environment in which developers and independent testers share a single vision of quality.
Independent and Stakeholder Testing
Independent Testing denotes the test design and implementation that are most appropriately done by someone independent from the team of developers. This distinction can be considered a superset that includes Independent Verification & Validation. In most cases, test execution will occur initially within the independent testing group that designed and implemented the test, but the independent testers should create their tests in such a way that they are also available to the developer testing groups for execution.
An alternative view of this independent testing is that it represents testing done based on the needs and concerns of various stakeholders, hence it is referred to as Stakeholder Testing. This is an important distinction: it helps to include a broader set of stakeholder concerns than might traditionally be considered, such as those of technical support staff, technical trainers, and sales staff, in addition to customers and end users.
As a final comment, XP's notion of customer tests relates to this categorization of independent testing in UPEDU.
A traditional distinction, unit testing, implemented early in the iteration, focuses on verifying the smallest testable elements of the software. Unit testing is typically applied to components in the implementation model to verify that control flows and data flows are covered and function as expected. These expectations are based on how the component participates in executing a use case, which can be found in the sequence diagrams for that use case. The Implementer performs unit tests as the unit is developed. The details of unit tests are described in the Implementation discipline.
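To make this concrete, the following is a minimal unit-test sketch using Python's standard unittest framework. The `Order` component and its methods are hypothetical, standing in for a component in the implementation model; the tests verify both the data flow (the computed total) and a control-flow branch (the error path):

```python
import unittest


class Order:
    """Hypothetical component under test: an order made of line items."""

    def __init__(self):
        self.items = []  # list of (quantity, unit_price) tuples

    def add_item(self, quantity, unit_price):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.items.append((quantity, unit_price))

    def total(self):
        # Data flow verified by the unit tests: sum of quantity * price.
        return sum(q * p for q, p in self.items)


class OrderTest(unittest.TestCase):
    def test_total_of_empty_order_is_zero(self):
        self.assertEqual(Order().total(), 0)

    def test_total_sums_line_items(self):
        order = Order()
        order.add_item(2, 10.0)
        order.add_item(1, 5.0)
        self.assertEqual(order.total(), 25.0)

    def test_rejects_non_positive_quantity(self):
        # Control-flow check: the error branch is exercised as well.
        with self.assertRaises(ValueError):
            Order().add_item(0, 10.0)


# Run the suite programmatically rather than via unittest.main(),
# so the module can also be imported without side effects.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderTest)
)
```

Because the Implementer writes these tests alongside the unit itself, the expected values come directly from how the component participates in its use case.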
A traditional distinction, integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case. The target-of-test is a package or a set of packages in the implementation model. Often the packages being combined come from different development organizations. Integration testing exposes incompleteness or mistakes in the packages' interface specifications.
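A sketch of this idea, continuing in Python's unittest framework: the `Inventory` and `Checkout` components below are hypothetical stand-ins for packages from different parts of the implementation model, and the test combines the real components (rather than stubbing either side) so that mismatched interface expectations surface:

```python
import unittest


class Inventory:
    """Hypothetical component from one package: tracks stock levels."""

    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, sku, quantity):
        if self.stock.get(sku, 0) < quantity:
            raise LookupError("insufficient stock for " + sku)
        self.stock[sku] -= quantity


class Checkout:
    """Hypothetical component from another package: drives the use case."""

    def __init__(self, inventory):
        self.inventory = inventory  # the interface between the two packages

    def place_order(self, sku, quantity):
        # The integration test exercises this cross-package call.
        self.inventory.reserve(sku, quantity)
        return {"sku": sku, "quantity": quantity, "status": "confirmed"}


class CheckoutIntegrationTest(unittest.TestCase):
    """Combines both real components to execute the use case end to end."""

    def test_placing_an_order_reserves_stock(self):
        inventory = Inventory({"ABC": 5})
        order = Checkout(inventory).place_order("ABC", 2)
        self.assertEqual(order["status"], "confirmed")
        self.assertEqual(inventory.stock["ABC"], 3)

    def test_interface_violations_surface_as_errors(self):
        # Incomplete interface specifications (e.g. over-reserving)
        # show up at this level rather than in either unit's own tests.
        with self.assertRaises(LookupError):
            Checkout(Inventory({"ABC": 1})).place_order("ABC", 2)


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutIntegrationTest)
)
```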
A traditional distinction, system testing is done when the software is functioning as a whole. An iterative lifecycle allows system testing to begin much earlier, as soon as well-formed subsets of the use-case behavior are implemented. The target is typically the end-to-end functioning elements of the system.
“User” acceptance testing is typically the final test action prior to deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by the end-users to perform those functions and tasks the software was built to do. See Concepts: Acceptance Testing for additional information.
There are other notions of “acceptance” testing, which are generally characterized by a hand-off from one group or team to another. For example, a build acceptance test is the testing done to accept the hand-over of a new software build from development into independent testing.