Roles and Activities > Tester > Design Test

Purpose
  • To identify a set of verifiable test cases for each build.
  • To identify test procedures that show how the test cases will be realized.
Steps
  • Identify and Describe Test Cases
  • Identify and Structure Test Procedures
  • Review and Assess Test Coverage
Input Artifacts:
  • Component
  • Implementation Model
  • Supplementary Specifications
  • Test Plan
  • Use-Case
Resulting Artifacts:
  • Test Cases
Role: Tester
Guidelines:
  • Guidelines: Test Case

Workflow Details:
  • Test
    • Plan and Design Test

Identify and Describe Test Cases

Purpose
  • To identify and describe the test conditions to be used for testing
  • To identify the specific data necessary for testing
  • To identify the expected results of the tests

For each requirement for test:

Analyze application workflows

The purpose of this step is to identify and describe the actions and/or steps the actor takes when interacting with the system. These test procedure descriptions are then used to identify and describe the test cases necessary to test the application.

Note: These early test procedure descriptions should be high-level; that is, the actions should be described as generically as possible, without specific references to actual components or objects.

For each use case or requirement:

  • review the use case flow of events, or
  • walk through and describe the actions / steps the actor takes when interacting with the system
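
For illustration only, such a high-level, described test procedure might be captured as simple structured data. In the sketch below the use case, actor, and steps are hypothetical, and the steps deliberately avoid naming concrete components or UI objects:

```python
# Hypothetical example: a high-level (described) test procedure for an
# imagined "Withdraw Cash" use case. The steps name actor intentions only,
# not concrete UI controls or components.
described_procedure = {
    "use_case": "Withdraw Cash",
    "actor": "Bank Customer",
    "steps": [
        "Actor identifies themselves to the system",
        "Actor requests a withdrawal and specifies an amount",
        "System verifies the amount against the account balance",
        "System dispenses the cash and records the transaction",
    ],
}

if __name__ == "__main__":
    for number, step in enumerate(described_procedure["steps"], start=1):
        print(f"{number}. {step}")
```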

Identify and describe test cases

The purpose of this step is to establish what test cases are appropriate for the testing of each requirement for test.

Note: If testing of a previous version has already been implemented, there will be existing test cases. Review these test cases for reuse as regression tests. Regression test cases should be included in the current iteration and combined with the new test cases that address new behavior.
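
As a minimal sketch of combining reused and new test cases (the record layout and field names below are assumptions, not prescribed by this activity):

```python
# Hypothetical test-case records; "regression" marks cases reviewed and kept
# from the previous version, while the remaining cases address new behavior.
existing_cases = [
    {"id": "TC-01", "condition": "Valid login", "regression": True},
    {"id": "TC-02", "condition": "Obsolete report format", "regression": False},
]
new_cases = [
    {"id": "TC-10", "condition": "Login with single sign-on", "regression": False},
]

# Current iteration: reviewed regression cases plus the new-behavior cases.
current_iteration_cases = [c for c in existing_cases if c["regression"]] + new_cases
print([c["id"] for c in current_iteration_cases])  # ['TC-01', 'TC-10']
```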

The primary inputs for identifying test cases are:

  • The use cases that, at some point, traverse your target-of-test (system, subsystem or component).
  • The design model.
  • Any technical or supplemental requirements.
  • Target-of-test application map (as generated by an automated test script generation tool).

Describe the test cases by stating:

  • The test condition (or object or application state) being tested.
  • The use case, use-case scenario, or technical or supplemental requirement the test case is derived from.
  • The expected result in terms of the output state, condition, or data value(s).

Note: It is not necessary for all use cases, use-case scenarios, and technical or supplemental requirements to be tested.

The result of this step is a test case matrix that identifies the test conditions; the objects, data, or influences on the system that create each condition being tested; and the expected results.

See Artifact: Test Case and Guidelines: Test Case for additional information.
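
One purely illustrative form for such a matrix is a simple table of records, each stating the condition, its source, and the expected result. The identifiers, conditions, and values below are hypothetical:

```python
# Hypothetical test case matrix: the condition tested, the use-case scenario
# or requirement it is derived from, and the expected result.
test_case_matrix = [
    {
        "id": "TC-01",
        "condition": "Withdrawal amount within the available balance",
        "derived_from": "Use case: Withdraw Cash, basic flow",
        "expected_result": "Cash dispensed; balance reduced by the amount",
    },
    {
        "id": "TC-02",
        "condition": "Withdrawal amount exceeds the available balance",
        "derived_from": "Use case: Withdraw Cash, alternate flow",
        "expected_result": "Withdrawal refused; balance unchanged",
    },
]

for case in test_case_matrix:
    print(f'{case["id"]}: {case["condition"]} -> {case["expected_result"]}')
```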

Identify test case data

Using the matrix created above, review the test cases and identify the actual values that support the test cases. Data for three purposes will be identified during this step:

  • data values used as input
  • data values for the expected results
  • data needed to support the test case, but that is used neither as input nor as output for a specific test case

See Artifact: Test Case and Guidelines: Test Case for additional information.
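
A minimal, hypothetical sketch of the three kinds of data for one test case from the matrix above (all field names and values are invented for illustration):

```python
# Hypothetical data identified for test case TC-01.
test_data = {
    "test_case": "TC-01",
    # Data values used as input during the test.
    "input_values": {"account": "12-3456-7", "withdrawal_amount": 50.00},
    # Data values the expected results are checked against.
    "expected_values": {"dispensed": 50.00, "resulting_balance": 150.00},
    # Data needed to support the test case, but used neither as input nor as
    # output, e.g. the account record seeded in the test database.
    "supporting_data": {"seed_account_balance": 200.00, "account_status": "open"},
}

print(test_data["input_values"], "->", test_data["expected_values"])
```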

Identify and Structure Test Procedures

Purpose
  • To analyze use case workflows and test cases to identify test procedures
  • To identify the relationship(s) between test cases and test procedures, thereby creating the test model

Perform the following:

Review application workflows or application map

Review the application workflow(s) and the previously described test procedures to determine whether any changes have been made to the use case workflow that affect the identification and structuring of test procedures.

If utilizing an automated test script generation tool, review the generated application map (used to generate the test scripts) to ensure that the hierarchical list of UI objects representing the controls in the user interface of the target-of-test is correct and relevant to your test and/or the use cases being tested.

The reviews are done in a fashion similar to the earlier analysis:

  • review the use case flow of events, and
  • review the described test procedures, and
  • walk through the steps the actor takes when interacting with the system, and/or
  • review the application map

Develop the test model

The purpose of the test model is to communicate what will be tested, how it will be tested, and how the tests will be implemented. For each described test procedure (or application map and generated test scripts), the following is done to create the test model:

  • identify the relationship or sequence of the test procedure to other test procedures (or the generated test scripts to each other).
  • identify the start condition or state and the end condition or state for the test procedure
  • indicate the test cases to be executed by the test procedure (or generated test scripts).
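
For illustration only, a fragment of such a test model might record, for each structured test procedure, its start and end states, the test cases it executes, and its place in the sequence. The procedure and test case names below are hypothetical:

```python
# Hypothetical test model fragment linking test procedures to test cases.
test_model = [
    {
        "procedure": "TP-01 Log in",
        "start_state": "Application closed",
        "end_state": "Main window displayed, user authenticated",
        "executes_test_cases": ["TC-05"],
        "runs_after": None,  # first procedure in the sequence
    },
    {
        "procedure": "TP-02 Withdraw cash",
        "start_state": "Main window displayed, user authenticated",
        "end_state": "Receipt displayed, transaction logged",
        "executes_test_cases": ["TC-01", "TC-02"],
        "runs_after": "TP-01 Log in",
    },
]

for entry in test_model:
    print(f'{entry["procedure"]}: {entry["start_state"]} -> {entry["end_state"]}')
```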

The following should be considered while developing the test model:

  • Many test cases are variants of one another, which might mean that they can be satisfied by the same test procedure.
  • Many test cases may require overlapping behavior to be executed. To be able to reuse the implementation of such behavior, you can choose to structure your test procedures so that one test procedure can be used for several test cases.
  • Many test procedures may include actions or steps that are common to many test cases or other test procedures. In these instances, determine whether a separate structured test procedure should be created for the common steps, while the test-case-specific steps remain in their own structured test procedure.
  • When using an automated test script generation tool, review the application map and generated test scripts to ensure the following is reflected in the test model (a sketch of such an application map follows this list):
    • The appropriate/desired controls are included in the application map and test scripts.
    • The controls are exercised in the desired order.
    • Test cases are identified for those controls requiring test data.
    • The windows or dialog boxes in which the controls are displayed are identified.
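
A purely illustrative sketch of an application map, represented as a hierarchical list of windows and the controls they contain in the order they are exercised, with test cases identified for the controls that require test data (all names are hypothetical):

```python
# Hypothetical application map: windows, the controls they display (in the
# order they are exercised), and the test cases identified for the controls
# that require test data.
application_map = {
    "Login Window": [
        {"control": "UserName field", "test_cases": ["TC-05"]},
        {"control": "Password field", "test_cases": ["TC-05"]},
        {"control": "OK button", "test_cases": []},
    ],
    "Withdrawal Window": [
        {"control": "Amount field", "test_cases": ["TC-01", "TC-02"]},
        {"control": "Confirm button", "test_cases": []},
    ],
}

for window, controls in application_map.items():
    print(window)
    for item in controls:
        print(f'  {item["control"]} -> {item["test_cases"]}')
```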

Structure test procedures

The previously described test procedures are insufficient for the implementation and execution of testing. Proper structuring of the test procedures involves revising and modifying the described test procedures to include, at a minimum, the following information:

  • Set-up: how to create the condition(s) for the test case(s) that is (are) being tested and what data is needed (either as input or within the test database).
  • Starting condition, state, or action for the structured test procedure.
  • Instructions for execution: the detailed steps / actions taken by the tester to implement and execute the tests (to the degree of stating the object or component).
  • Data values entered (or referenced test case).
  • Expected result (condition or data, or referenced test case) for each action / step.
  • Evaluation of results: the method and steps used to analyze the actual results obtained, comparing them with the expected results.
  • Ending condition, state, or action for the structured test procedure.

Note: A described test procedure, when structured, may become several structured test procedures that must be executed in sequence. This is done to maximize reuse and minimize test procedure maintenance.

Test procedures can be manually executed or implemented as test scripts (for automated execution). When a test procedure is automated, the resulting computer-readable file is known as a test script.
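
As a minimal sketch, one structured test procedure implemented as an automated test script might look like the following. pytest is used only as an example framework, and the small Account class is a stand-in for the target-of-test, which this activity does not define:

```python
# Hypothetical automated test script for test cases TC-01 and TC-02.
import pytest


class Account:
    """Stand-in for the target-of-test."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount


@pytest.fixture
def account():
    # Set-up: create the starting condition and the supporting data
    # (here, a seeded account balance).
    return Account(balance=200.00)


def test_tc01_withdrawal_within_balance(account):
    # Instructions for execution: perform the action with the input data.
    dispensed = account.withdraw(50.00)
    # Evaluation of results: compare actual results with expected results.
    assert dispensed == 50.00
    assert account.balance == 150.00


def test_tc02_withdrawal_exceeds_balance(account):
    # Expected result: the withdrawal is refused and the balance is unchanged.
    with pytest.raises(ValueError):
        account.withdraw(500.00)
    assert account.balance == 200.00
```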

Review and Assess Test Coverage

Purpose
  • To identify and describe the test coverage measures that will be used to determine the completeness of testing

Perform the following:

Test coverage measures are used to identify how complete the testing is or will be.

Identify test coverage measures

There are two methods of determining test coverage:

  • Requirements-based coverage.
  • Code-based coverage.

Both identify the percentage of the total testable items that will be (or have been) tested, but they are collected or calculated differently.

  • Requirements-based coverage uses use cases, requirements, use-case flows, or test conditions as the measure of the total test items, and can be used during test design.
  • Code-based coverage uses the generated code as the total test item and measures a characteristic of the code that has been executed during testing (such as lines of code executed or the number of branches traversed). This type of coverage measurement can only be implemented after the code has been generated.

Identify the method to be used and state how the measurement will be collected, how the data should be interpreted, and how the metric will be used in the process.
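
A minimal sketch of how requirements-based coverage could be collected during test design, assuming the requirements and the test cases that trace to them are available as simple records (the identifiers are hypothetical):

```python
# Hypothetical requirements-based coverage: the percentage of requirements
# (or test conditions) that have at least one test case tracing to them.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
test_case_traces = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-3"],
}

covered = {req for traces in test_case_traces.values() for req in traces}
coverage = 100.0 * len(covered & set(requirements)) / len(requirements)
print(f"Requirements-based coverage: {coverage:.0f}%")  # 50%
```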

Generate and distribute test coverage reports

The test plan identifies the schedule for generating and distributing test coverage reports. These reports should be distributed to at least the following roles:

  • all test roles
  • developer representative
  • stakeholder representative