8.1 Black Box Testing Technique

Black box testing is a software testing method in which the software is tested without knowledge of the internal structure of the code or program.

In practice, this is the method that most testers actually perform and use for the majority of their work.

The software under test is treated as a “black box”: we test it without examining its internal structure. All testing is done from the customer’s point of view, and the tester is only aware of what the software is supposed to do, not of how it processes the requests. While testing, the tester knows the inputs and the expected outputs of the software, but not how the software actually processes the input requests to produce those outputs. The tester passes both valid and invalid inputs and determines the correct expected outputs. All test cases for this method are derived from the requirements and specification documents.

The main purpose of black box testing is to check whether the software works as expected per the requirements document and whether it meets the user’s expectations.

Different types of testing are used in industry, and each type has its own advantages and disadvantages. Some bugs cannot be found using black box testing alone, and others cannot be found using white box testing alone.

8.1.1 Boundary Value Analysis (BVA):

Boundary Value Analysis is the most commonly used test case design method for black box testing. As is well known, most errors occur at the boundaries of the input values. This technique finds errors at the boundaries of the input values rather than in the center of the input value range.

Boundary Value Analysis is the next step after Equivalence Class Partitioning: all test cases are designed at the boundaries of the equivalence classes.

Let us take an example to explain this:

Suppose an application accepts input values in a text box ranging from 1 to 1000. In this case we have invalid and valid inputs:

Invalid Input    Valid Input    Invalid Input
0 and below      1 – 1000       1001 and above

Here are the test cases for an input box accepting numbers, using boundary value analysis:

Min value – 1    0
Min value        1
Min value + 1    2
Normal value     any value between 1 and 1000
Max value – 1    999
Max value        1000
Max value + 1    1001
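The boundary-value test cases above can be sketched against a hypothetical validator (the function name and the 1–1000 range are assumptions for illustration):

```python
def accepts(value):
    """Hypothetical validator for a text box that accepts integers 1..1000."""
    return 1 <= value <= 1000

# Boundary-value test cases: min-1, min, min+1, a normal value, max-1, max, max+1.
expected = {0: False, 1: True, 2: True, 500: True, 999: True, 1000: True, 1001: False}
for value, result in expected.items():
    assert accepts(value) == result, value
```

A common implementation mistake such as writing `1 < value` instead of `1 <= value` would be caught only by the boundary values 1 and 0, which is why they are tested explicitly.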


A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following covers the MOST boundary values?

  1. 9,10,11,22
  2. 9,10,21,22
  3. 10,11,21,22
  4. 10,11,20,21


We have already come up with the classes as shown in question 5. The boundaries can be identified as 9, 10, 21, and 22. These four values are in option ‘b’, so the answer is ‘B’.

8.1.2 Equivalence Class Partitioning

Equivalence partitioning is a test case design technique that divides the input data of the software into equivalence classes, and test cases are designed for each equivalence class. The equivalence partitions are frequently derived from the requirements specification for input data that influence the processing of the test object. Using this method reduces the time necessary for testing, because fewer but more effective test cases are needed.

Equivalence Partitioning = Equivalence Class Partitioning = ECP

It can be used at any level of software testing and is a good technique to use first. In this technique, only one condition needs to be tested from each partition, because we assume that all the conditions in one partition are handled in the same manner by the software: if one condition in a partition works, the others will definitely work. Likewise, if one of the conditions does not work, we assume that none of the conditions in that partition will work.

Equivalence partitioning is a testing technique in which input values are grouped into classes for testing:

  • Valid Input Class = Keeps all valid inputs.
  • Invalid Input Class = Keeps all Invalid inputs.

Example of Equivalence Class Partitioning

  • A text field permits only numeric characters
  • Length must be 6-10 characters long

The partitions according to this requirement are: lengths 0–5 (invalid), 6–10 (valid), and 11–14 (invalid).

When evaluating equivalence partitions, all values within a partition are considered equivalent: that is why lengths 0–5 are equivalent, 6–10 are equivalent and 11–14 are equivalent.

At the time of testing, test lengths 4 and 12 as invalid values and 7 as a valid one.

It is easy to test an input range such as 6–10, but harder to test a range such as 2–600. Testing is easier with fewer test cases, but you should be careful: by choosing, say, length 7 as the valid input, you are trusting that the developer coded the correct valid range (6–10).
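The partitions above can be exercised with one representative each. The sketch below assumes a hypothetical validator for the stated requirement (numeric only, length 6–10):

```python
def is_valid(field):
    """Hypothetical check for the requirement above:
    numeric characters only, 6 to 10 characters long."""
    return field.isdigit() and 6 <= len(field) <= 10

# One representative per partition is enough under the ECP assumption:
assert not is_valid("1234")          # length 4  -> invalid partition (0-5)
assert is_valid("1234567")           # length 7  -> valid partition (6-10)
assert not is_valid("123456789012")  # length 12 -> invalid partition (11+)
```

Any other length-4 or length-12 string would serve equally well as the representative of its partition, which is exactly the assumption ECP makes.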



A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?

  1. 10,11,21
  2. 3,20,21
  3. 3,10,22
  4. 10,21,22


We have to select values that fall into all the equivalence classes (both valid and invalid). The classes are as follows:

Class I:   values <= 9   => invalid class
Class II:  10 to 21      => valid class
Class III: values >= 22  => invalid class

The values in option ‘c’ together fall into all three equivalence classes. So the answer is ‘C’.

8.1.3 Partitioning

There are two types of partitioning:

  1. Equivalence partitioning: Equivalence partitioning is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once.
  2. Boundary value analysis: Boundary value analysis is a software testing technique in which tests are designed to include representatives of the boundary values of a range. The idea is that if a system works correctly at the boundaries of a partition, it is likely to work for the values in between.

For the following example, we partition the input (a child’s age) into subsets, giving the classes below:

  1. Class 1: Children with age group 5 to 10 (age greater than 5 and up to 10)
  2. Class 2: Children with age less than or equal to 5
  3. Class 3: Children with age group 10 to 15 (age greater than 10 and up to 15)
  4. Class 4: Children with age greater than 15.

Which values from the classes should be tested?

The values picked for testing should be boundary values:

  1. Boundary values are representatives of the equivalence classes we sample them from. They are more likely to expose an error than other class members, so they are better representatives.
  2. Within a class, the best single representative for equivalence testing is a value in the middle of the range.

For the above example, the classes to be tested are as follows. For scenario #1:

1. Class 1: Children with age group 5 to 10 (Age > 5 and <= 10)

Boundary values:

  1. Values should be equal to or less than 10. Hence, age 10 should be included in this class.
  2. Values should be greater than 5. Hence, age 5 should not be included in this class.
  3. Values should be equal to or less than 10. Hence, age 11 should not be included in this class.
  4. Values should be greater than 5. Hence, age 6 should be included in this class.

Equivalence partition Values:

Equivalence partitioning means testing only one condition from each partition. We assume that if one condition in a partition works, then all the conditions in it will work. In the same way, if one condition in a partition does not work, we assume that none of the other conditions will work. For example,

(Age >5 and <=10)

As the values from 6 to 10 are valid, one value among 6, 7, 8, 9 and 10 has to be picked. Here age “8” is selected as a valid input for the age group (Age > 5 and <= 10). This sort of partition is referred to as an equivalence partition.

Scenario                   Boundary values to be taken    Equivalence partitioning value
Boy – Age >5 and <=10      5, 6, 10, 11                   8
Girl – Age >5 and <=10     5, 6, 10, 11                   8
Boy – Age >10 and <=15     10, 11, 15, 16                 13
Girl – Age >10 and <=15    10, 11, 15, 16                 13
Age <=5                    4, 5                           3
Age >15                    15, 16                         25
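The age classes and their boundary/equivalence test values can be sketched against a hypothetical classifier (the function and class labels are illustrative, not from the original specification):

```python
def age_class(age):
    """Hypothetical classifier for the age-group example above."""
    if age <= 5:
        return "too young"
    elif age <= 10:
        return "group 5-10"
    elif age <= 15:
        return "group 10-15"
    else:
        return "too old"

# Boundary values for the class Age > 5 and <= 10:
assert age_class(5) == "too young"     # 5 is outside (age must be > 5)
assert age_class(6) == "group 5-10"    # first value inside
assert age_class(10) == "group 5-10"   # last value inside
assert age_class(11) == "group 10-15"  # 11 is outside
# Equivalence partitioning value for the same class:
assert age_class(8) == "group 5-10"
```

An off-by-one mistake in either comparison (for example `age < 10` instead of `age <= 10`) would be caught by the boundary values 10 and 11, but not by the mid-range value 8.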

How do we determine whether the program passed or failed the test?

Whether the functionality passes does not depend only on the scenarios above. Comparing the given input against the expected output gives the result, and determining the expected output requires domain knowledge.

Determining the results of the example:

Hence, if all of the above test cases pass, the ticket-issuing functionality for the competition passes; if not, it fails.

8.1.4 Decision Table

Decision table testing is a technique used to test system behavior for different input combinations. It is a systematic approach in which the different input combinations and their corresponding system behavior (output) are captured in tabular form. That is why it is also called a Cause-Effect table: causes and effects are captured for better test coverage.

A Decision Table is a tabular representation of inputs versus rules/cases/test conditions. Let’s learn with an example.

Example 1: Decision Table for a Login Screen

Let’s create a decision table for a login screen.

The condition is simple: if the user provides the correct username and password, the user is redirected to the homepage; if either input is wrong, an error message is displayed.

Conditions        Rule 1    Rule 2    Rule 3    Rule 4
Username (T/F)    F         T         F         T
Password (T/F)    F         F         T         T
Output (E/H)      E         E         E         H


T – Correct username/password

F – Wrong username/password

E – Error message is displayed

H – Home screen is displayed


  • Case 1 – Username and password both were wrong. The user is shown an error message.
  • Case 2 – Username was correct, but the password was wrong. The user is shown an error message.
  • Case 3 – Username was wrong, but the password was correct. The user is shown an error message.
  • Case 4 – Username and password both were correct, and the user navigated to homepage

While converting this to test cases, we can create 2 scenarios:

  • Enter correct username and correct password and click on login; the expected result is that the user is navigated to the homepage

And one from the below scenarios:

  • Enter wrong username and wrong password and click on login; the expected result is that the user gets an error message
  • Enter correct username and wrong password and click on login; the expected result is that the user gets an error message
  • Enter wrong username and correct password and click on login; the expected result is that the user gets an error message

Only one of these is needed, as they essentially test the same rule.
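The four rules of the decision table can be checked against a minimal sketch of the login logic (the function and its T/F inputs are assumptions standing in for real credential validation):

```python
def login(username_ok, password_ok):
    """Hypothetical login outcome: 'H' = home screen, 'E' = error message."""
    return "H" if (username_ok and password_ok) else "E"

# The four rules from the decision table:
assert login(False, False) == "E"  # Rule 1
assert login(True, False) == "E"   # Rule 2
assert login(False, True) == "E"   # Rule 3
assert login(True, True) == "H"    # Rule 4
```

Rules 1–3 all exercise the same `else` outcome, which is why a single error-message scenario is considered sufficient in practice.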

8.1.5 Orthogonal Array

Orthogonal Array Testing strategy is one of the Test Case Optimization techniques.

Orthogonal array testing is a black box, test case optimization technique used when the system to be tested has a huge number of input combinations.

For example, when a train ticket has to be verified, factors such as the number of passengers, ticket number, seat number and train number have to be tested, which becomes difficult if the tester verifies each input one by one. It is more efficient to combine inputs and test them together; here, the orthogonal array testing method can be used.

This pairing or combining of inputs, testing the system with them to save time, is called pairwise testing. The OATS technique is used for pairwise testing.

Why OAT (Orthogonal Array Testing)

  • Systematic and Statistical way to test pairwise interactions
  • Interactions and Integration points are a major source of defects.
  • Execute a well-defined, concise set of test cases that are likely to uncover most (not all) bugs.
  • Orthogonal approach guarantees the pairwise coverage of all variables.

How OATs are represented

OAs are commonly represented as:

  • Runs (N) – Number of rows in the array, which translates into a number of test cases that will be generated.
  • Factors (K) – Number of columns in the array, which translates into a maximum number of variables that can be handled.
  • Levels (V) – Maximum number of values that can be taken on any single factor.

A single factor may have, say, 2 to 3 possible inputs to be tested; the maximum number of such inputs across the factors determines the Levels.

How to use this technique:

  • Identify the independent variable for the scenario.
  • Find the smallest array with the number of runs.
  • Map the factors to the array.
  • Choose the values for any “left over” levels.
  • Transcribe the Runs into test cases, adding any particularly suspicious combinations that aren’t generated.


A Web page has three distinct sections (Top, Middle, Bottom) that can be individually shown to or hidden from the user.

  • No of Factors = 3 (Top, Middle, Bottom)
  • No of Levels (Visibility) = 2 (Hidden or Shown)
  • Array Type = L4(2^3)

(4 is the number of runs arrived after creating the OAT array)

If we go for the conventional (one-factor-at-a-time) testing technique, we need 2 × 3 = 6 test cases:

Test Cases Scenarios Values to be tested
Test #1 HIDDEN Top
Test #2 SHOWN Top
Test #3 HIDDEN Bottom
Test #4 SHOWN Bottom
Test #5 HIDDEN Middle
Test #6 SHOWN Middle

If we go for OAT testing we need 4 test cases, as shown below:

Test Cases TOP Middle Bottom
Test #1 Hidden Hidden Hidden
Test #2 Hidden Visible Visible
Test #3 Visible Hidden Visible
Test #4 Visible Visible Hidden
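The defining property of the L4(2^3) array above is that every pair of factors sees all four value combinations. This can be verified mechanically (a minimal sketch using the runs from the table):

```python
from itertools import combinations, product

# The four L4(2^3) runs from the table above.
runs = [
    ("Hidden", "Hidden", "Hidden"),
    ("Hidden", "Visible", "Visible"),
    ("Visible", "Hidden", "Visible"),
    ("Visible", "Visible", "Hidden"),
]

# Pairwise coverage: every pair of the 3 factors sees all 2x2 value combinations.
for f1, f2 in combinations(range(3), 2):
    seen = {(run[f1], run[f2]) for run in runs}
    assert seen == set(product(["Hidden", "Visible"], repeat=2))
```

So 4 runs cover every pairwise interaction, whereas exhaustive testing of all triples would require 2^3 = 8 runs.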

8.1.6 State Transition Diagram

State transition testing is a black box testing technique in which outputs are triggered by changes to the input conditions or changes to the ‘state’ of the system. In other words, tests are designed to execute valid and invalid state transitions.

When to use?

  • When we have a sequence of events that occur and associated conditions that apply to those events
  • When the proper handling of a particular event depends on the events and conditions that occurred in the past
  • For real-time systems with various states and transitions involved

Deriving Test cases:

  • Understand the various states and transitions, and mark each state as valid or invalid
  • Define a sequence of events that leads to an allowed test-ending state
  • Note down each state visited and each transition traversed
  • Repeat steps 2 and 3 until all states have been visited and all transitions traversed
  • For test cases to have good coverage, actual input values and actual output values have to be generated


Advantages:

  • Allows testers to familiarise themselves with the software design and enables them to design tests effectively.
  • It also enables testers to cover unplanned or invalid states.


A system’s transitions are represented in the diagram below.

The tests are derived from these states and transitions; below are the possible scenarios that need to be tested.

Tests          Test 1       Test 2       Test 3
Start State    Off          On           On
Input          Switch On    Switch Off   Switch On
Output         Light On     Light Off    Fault
Finish State   On           Off          On
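A minimal sketch of this machine follows. The class and its fault rule are assumptions for illustration: since a deterministic machine cannot map the same state/input pair to two different outputs, the invalid transition here is taken to be pressing “Switch ON” while the light is already On.

```python
class Light:
    """Sketch of an on/off state machine. Any event not defined for the
    current state is an invalid transition and reports a Fault."""
    def __init__(self):
        self.state = "Off"

    def press(self, event):
        if self.state == "Off" and event == "Switch ON":
            self.state = "On"
            return "Light ON"
        if self.state == "On" and event == "Switch Off":
            self.state = "Off"
            return "Light Off"
        return "Fault"  # invalid transition; state unchanged

light = Light()
assert light.press("Switch ON") == "Light ON"    # Test 1: Off -> On
assert light.press("Switch ON") == "Fault"       # Test 3: invalid transition
assert light.state == "On"                       # state unchanged by the fault
assert light.press("Switch Off") == "Light Off"  # Test 2: On -> Off
```

The three assertions correspond to one valid transition per row of the table plus one invalid transition, which is the minimum coverage state transition testing asks for.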

8.1.7 Defect Age

Defect Age can be measured in terms of any of the following:

  • Time
  • Phases

Defect Age (In Time)


Defect Age (in Time) is the difference in time between the date a defect is detected and the current date (if the defect is still open) or the date the defect was fixed (if the defect is already fixed).


  • The ‘defects’ are confirmed and assigned (not just reported).
  • Dropped defects are not counted.
  • The difference in time can be calculated in hours or in days.
  • ‘fixed’ means that the defect is verified and closed; not just ‘completed’ by the developer.

Defect Age Formula

Defect Age in Time = Defect Fix Date (or Current Date) – Defect Detection Date

Normally, the average age of all defects is calculated.


If a defect was detected on 01/01/2009 10:00:00 AM and closed on 01/04/2009 12:00:00 PM, the Defect Age is 74 hours.


Use: for determining the responsiveness of the development/testing team. The lesser the age, the better the responsiveness.
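The time-based formula can be sketched directly (function name is illustrative; the dates reproduce the example above):

```python
from datetime import datetime

def defect_age_hours(detected, fixed=None):
    """Defect Age (in Time): fix date (or current date, if still open)
    minus detection date, expressed in hours."""
    end = fixed if fixed is not None else datetime.now()
    return (end - detected).total_seconds() / 3600

detected = datetime(2009, 1, 1, 10, 0)  # 01/01/2009 10:00:00 AM
closed   = datetime(2009, 1, 4, 12, 0)  # 01/04/2009 12:00:00 PM
assert defect_age_hours(detected, closed) == 74.0  # 3 days + 2 hours
```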



Defect Age (in Phases) is the difference in phases between the defect injection phase and the defect detection phase.


  • ‘defect injection phase’ is the phase in the software life cycle where the defect was introduced.
  • ‘defect detection phase’ is the phase in the software life cycle where the defect was identified.

Defect Age Formula

Defect Age in Phase = Defect Detection Phase – Defect Injection Phase

Normally, average of all defects is calculated.


Let’s say the software life cycle has the following phases:

  • Requirements Development
  • High-Level Design
  • Detail Design
  • Coding
  • Unit Testing
  • Integration Testing
  • System Testing
  • Acceptance Testing

If a defect is identified in System Testing and the defect was introduced in Requirements Development, the Defect Age is 6.


Use: for assessing the effectiveness of each phase and of any review/testing activities. The lesser the age, the better the effectiveness.
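Treating the life cycle as an ordered list makes the phase-based formula a simple index difference (a sketch; the phase names are from the list above):

```python
PHASES = [
    "Requirements Development", "High-Level Design", "Detail Design",
    "Coding", "Unit Testing", "Integration Testing",
    "System Testing", "Acceptance Testing",
]

def defect_age_phases(injected, detected):
    """Defect Age (in Phases): detection phase index minus injection phase index."""
    return PHASES.index(detected) - PHASES.index(injected)

# A defect injected in Requirements Development and found in System Testing:
assert defect_age_phases("Requirements Development", "System Testing") == 6
```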

8.1.8 Use Case Testing

In simple terms, a use case is a formal way of representing how a system interacts with its environment; it defines the activities that the users (actors) of a system perform to achieve certain goals.

Actors can be human users or external systems.

Hence, use case testing is defined as a black-box test design technique in which test cases are designed to execute scenarios of use cases.

In order to understand this in detail, let’s look into a familiar use case of login functionality.

[Figure: use case testing scenarios]

As represented in the figure, the login use case can have different behaviors. Generally, use cases have both a happy path (also called the main path or typical path) and alternate paths. These are also referred to as ‘sunny day scenarios’ and ‘rainy day scenarios’.

Happy path represents the typical sequence of user actions and system responses. (E.g. Successful login attempt)

An alternate path, by contrast, represents errors, exceptions and less typical usage paths (e.g. invalid username/password combinations).

When it comes to testing, the minimum coverage of a use case is one test case for the main path and one test case for each alternate path.

Coverage percentage = (number of paths tested) / (total number of main and alternate paths)

Applicable testing levels for Use Case Testing

  • System testing
  • Acceptance testing
  • Performance testing

The main advantage of use case testing is that it helps testers address the customer’s needs, because the basis for the testing is the use cases, which are nothing but the real transactions users perform. This can sometimes turn into the technique’s major weakness too: the documented use cases may not reflect the actual usage scenarios.

8.1.9 Error Guessing

Error guessing is a technique for guessing the errors that may exist in the code. It is basically an experience-based technique in which the test analyst uses his or her experience to guess the problematic areas of the application. If the analyst guesses that the login page is error-prone, the testers write more detailed test cases concentrating on the login page, thinking of a variety of data combinations to test it.

To design test cases based on the error guessing technique, the analyst can use past experience to identify the conditions. The technique can be used at any level of testing and for common mistakes like:

  • Divide by zero
  • Entering blank spaces in the text fields
  • Pressing submit button without entering values.
  • Uploading files exceeding maximum limits.

The error guessing technique requires a skilled and experienced tester. The following factors can be used to guess errors:

  • Lessons learnt from past releases
  • Historical learning
  • Previous defects
  • Production tickets
  • Review checklist
  • Application UI
  • Previous test results
  • Risk reports of the application
  • Variety of data used for testing.

Though error guessing is one of the key testing techniques, it does not provide full coverage of the application, nor can it guarantee that the software has reached the expected quality benchmark. It should be combined with other techniques to yield better results.

8.2 White Box Testing Technique

White Box Testing is the testing of a software solution’s internal coding and infrastructure. It focuses primarily on strengthening security, the flow of inputs and outputs through the application, and improving design and usability. White box testing is also known as Clear Box testing, Open Box testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box testing.
It is one of the two parts of the “box testing” approach to software testing. Its counterpart, black box testing, involves testing from an external or end-user perspective. White box testing, on the other hand, is based on the inner workings of an application and revolves around internal testing.
The term “white box” comes from the see-through box concept: the clear box or white box name symbolizes the ability to see through the software’s outer shell (or “box”) into its inner workings. Likewise, the “black box” in “black box testing” symbolizes not being able to see the inner workings of the software, so that only the end-user experience can be tested.

Steps to Perform WBT

  • Step #1 – Understand the functionality of an application through its source code. This means that the tester must be well versed in the programming language and the other tools and techniques used to develop the software.
  • Step #2– Create the tests and execute them.

3 Main White Box Testing Techniques:

  • Statement Coverage
  • Branch Coverage
  • Path Coverage

Note that statement, branch or path coverage does not identify bugs or defects that need to be fixed; it only identifies lines of code that are never executed or remain untouched. Further testing can then be focused on those areas.

Statement coverage:

In a programming language, a statement is nothing but a line of code or an instruction for the computer to understand and act on. A statement becomes executable when it is compiled into object code, and it performs its action when the program runs.

Hence “statement coverage”, as the name suggests, is the method of validating that each and every line of code is executed at least once.

Branch Coverage:

“Branch” in a programming language is like the “IF statements”. An IF statement has two branches: True and False.

So in Branch coverage (also called Decision coverage), we validate whether each branch is executed at least once.

In case of an “IF statement”, there will be two test conditions:

  • One to validate the true branch and,
  • Other to validate the false branch.

Hence, branch coverage is a testing method which, when satisfied, ensures that each and every branch from each decision point has been executed.

Path Coverage:

Path coverage tests all the paths of the program. It is a comprehensive technique which ensures that every path through the program is traversed at least once. Path coverage is even stronger than branch coverage, and it is useful for testing complex programs.

8.2.1 Basic Path Coverage

Basis path testing is a structured (white box) testing technique for designing test cases so that every linearly independent path of execution is examined at least once. Creating and executing tests for all independent paths results in 100% statement coverage and 100% branch coverage.


	Function fn_delete_element(int value, int array_size, int array[])
1	    int i;
	    location = array_size + 1;
2	    for i = 1 to array_size
3	        if (array[i] == value)
4	            location = i;
	        end if;
	    end for;
5	    for i = location to array_size
6	        array[i] = array[i+1];
	    end for;
7	    array_size--;

Steps to Calculate the independent paths

Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown below:

Step 2 : Determine the independent paths.

Path 1: 1 - 2 - 5 - 7
Path 2: 1 - 2 - 5 - 6 - 7
Path 3: 1 - 2 - 3 - 2 - 5 - 6 - 7
Path 4: 1 - 2 - 3 - 4 - 2 - 5 - 6 - 7
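A runnable sketch of the function above follows (a Python translation with zero-based indexing; names are adapted). Note that the original pseudocode decrements array_size even when the value is absent, and the sketch preserves that quirk:

```python
def delete_element(value, array):
    """Python sketch of fn_delete_element: find the last occurrence of
    value, shift later elements left, then drop the final slot."""
    location = len(array)                       # node 1: past-the-end sentinel
    for i in range(len(array)):                 # node 2
        if array[i] == value:                   # node 3
            location = i                        # node 4
    result = array[:]
    for i in range(location, len(result) - 1):  # node 5
        result[i] = result[i + 1]               # node 6
    if result:
        result.pop()                            # node 7: array_size--
    return result

assert delete_element(4, [1, 2, 4, 8]) == [1, 2, 8]
assert delete_element(4, [4, 4, 9]) == [4, 9]  # last occurrence is removed
assert delete_element(7, [1, 2]) == [1]        # value absent: last element still dropped
```

The three calls exercise the search loop with and without a match and the shift loop with zero and more iterations, touching every node of the flow graph.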

8.2.2 Cyclomatic Complexity

Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated from a control flow graph of the code and measures the number of linearly independent paths through a program module.
The lower a program’s cyclomatic complexity, the lower the risk in modifying it and the easier it is to understand. It can be computed using the formula:

Cyclomatic complexity = E - N + 2*P where,
E = number of edges in the flow graph.
N = number of nodes in the flow graph.
P = number of connected components (P = 1 for a single program or function).

Example :

IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    END IF
END IF
Print A
Print B
Print C


The cyclomatic complexity is calculated from the corresponding control flow diagram, which has seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is 8 - 7 + 2 = 3.


Read P
Read Q
IF P + Q > 100 THEN
    Print "Large"
END IF
IF P > 50 THEN
    Print "P Large"
END IF

Calculate statement coverage, branch coverage and path coverage.


The flow chart is-

Statement Coverage (SC):

To calculate statement coverage, find the smallest number of paths that covers all the nodes. Here, by traversing the path 1A-2C-3D-E-4G-5H, all the nodes are covered. Since travelling through a single path covers all the nodes 1, 2, 3, 4 and 5, the statement coverage in this case is 1.

Branch Coverage (BC):

To calculate branch coverage, find the minimum number of paths that ensures all the edges are covered. In this case there is no single path that covers all the edges in one go. Following the path 1A-2C-3D-E-4G-5H covers the maximum number of edges (A, C, D, E, G and H), but edges B and F are left; to cover them we can follow 1A-2B-E-4F. Combining these two paths ensures that all edges are travelled, hence the branch coverage is 2. The aim is to cover all possible true/false decisions.

Path Coverage (PC):

Path coverage ensures covering of all the paths from start to end. All possible paths are:

1A-2B-E-4F
1A-2B-E-4G-5H
1A-2C-3D-E-4F
1A-2C-3D-E-4G-5H

So the path coverage is 4.

Thus for the above example SC=1, BC=2 and PC=4.
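The coverage figures can be checked against a Python sketch of the fragment (returning the printed messages instead of printing them, to make the behaviour assertable):

```python
def classify(p, q):
    """Sketch of the Read P / Read Q fragment above."""
    messages = []
    if p + q > 100:        # decision 1 (node 2)
        messages.append("Large")
    if p > 50:             # decision 2 (node 4)
        messages.append("P Large")
    return messages

# SC = 1: one test case (both decisions true) executes every statement.
assert classify(60, 60) == ["Large", "P Large"]
# BC = 2: a second test case covers the remaining false branches.
assert classify(10, 10) == []
# PC = 4: path coverage needs all four true/false combinations.
assert classify(40, 70) == ["Large"]      # decision 1 true, decision 2 false
assert classify(60, 30) == ["P Large"]    # decision 1 false, decision 2 true
```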

Memorize these….

  • 100% LCSAJ coverage will imply 100% Branch/Decision coverage
  • 100% Path coverage will imply 100% Statement coverage
  • 100% Branch/Decision coverage will imply 100% Statement coverage
  • 100% Path coverage will imply 100% Branch/Decision coverage
  • Branch coverage and Decision coverage are the same

*LCSAJ = Linear Code Sequence and Jump.

8.2.3 Control Structure Coverage

Control structure testing is a group of white-box testing methods.

  • Branch Testing
  • Condition Testing
  • Data Flow Testing
  • Loop Testing

1.0 Branch Testing

Branch testing is also called decision testing.

Definition:- “For every decision, each branch needs to be executed at least once.”

Shortcoming:- it ignores implicit paths that result from compound conditionals.

Branch testing treats a compound conditional as a single statement. (We count each branch taken out of the decision, regardless of which condition led to the branch.)

This example has two branches to be executed:

IF (a equals b) THEN
    statement 1
    statement 2
END IF

This example also has just two branches to be executed, despite the compound conditional:

IF (a equals b AND c less than d) THEN
    statement 1
    statement 2
END IF

This example has four branches to be executed:

IF (a equals b) THEN
    statement 1
END IF
IF (c equals d) THEN
    statement 2
END IF
statement 3

Obvious decision statements are if, for, while and switch.

Subtle decisions are boolean-returning expressions, ternary expressions and try-catch blocks.
For this course you don’t need to write test cases for IOException and OutOfMemoryError.

1.1 Condition Testing

Condition testing is a test construction method that focuses on exercising the logical conditions in a program module.

Errors in conditions can be due to:

  • Boolean operator error
  • Boolean variable error
  • Boolean parenthesis error
  • Relational operator error
  • Arithmetic expression error

Definition:- “For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once.”

Multiple-condition testing requires that all true-false combinations of simple conditions be exercised at least once. Therefore, all statements, branches, and conditions are necessarily covered.

1.2 Data Flow Testing

Data flow testing selects test paths according to the locations of definitions and uses of variables. It is a somewhat sophisticated technique and is not practical for extensive use; its use should be targeted at modules with nested if and loop statements.

1.3 Loop Testing

Loops are fundamental to many algorithms and need thorough testing.

There are four different classes of loops: simple, concatenated, nested, and unstructured.


Create a set of tests that force the following situations:

  • Simple Loops, where n is the maximum number of allowable passes through the loop.
    • Skip loop entirely
    • Only one pass through loop
    • Two passes through loop
    • m passes through loop where m<n.
    • (n-1), n, and (n+1) passes through the loop.
  • Nested Loops
    • Start with inner loop. Set all other loops to minimum values.
    • Conduct simple loop testing on inner loop.
    • Work outwards
    • Continue until all loops tested.
  • Concatenated Loops
    • If independent loops, use simple loop testing.
    • If dependent, treat as nested loops.
  • Unstructured loops
    • Don’t test – redesign.
public class loopdemo {

    private int[] numbers = {5, -3, 8, -12, 4, 1, -20, 6, 2, 10};

    /** Compute total of numItems positive numbers in the array
     *  @param numItems how many items to total, maximum of 10.
     */
    public int findTotal(int numItems) {
        int total = 0;
        if (numItems <= 10) {
            for (int count = 0; count < numItems; count = count + 1) {
                if (numbers[count] > 0) {
                    total = total + numbers[count];
                }
            }
        }
        return total;
    }
}

public void testOne() {
    loopdemo app = new loopdemo();
    assertEquals(0, app.findTotal(0));
    assertEquals(5, app.findTotal(1));
    assertEquals(5, app.findTotal(2));
    assertEquals(17, app.findTotal(5));
    assertEquals(26, app.findTotal(9));
    assertEquals(36, app.findTotal(10));
    assertEquals(0, app.findTotal(11));
}

8.2.4 Program Technique Coverage

Test coverage is an important part of software testing and software maintenance; it measures the effectiveness of the testing by providing data on different items.

The amount of testing performed by a set of test cases is called test coverage: which parts of the application program are exercised when we run a test suite. In other words, test coverage is a technique that determines whether our test cases actually cover the application code, and how much code is exercised when we run them. When we can count certain items in an application and tell whether the test cases cover those items, we can say that we measure the coverage. Basically, coverage is the set of coverage items we have been able to count and check as covered by the test. Two executed test cases may achieve the same coverage, yet the input data of one may find a defect while the input data of the other does not. From this we understand that 100% coverage does not mean 100% tested.

Often the focus is on obtaining code coverage figures from code-based and requirement-based testing, without much stress on analysing that coverage by covering the maximum number of coverage items.

Basic coverage criteria

There are a number of coverage criteria, the main ones being:

  • Function coverage – Has each function (or subroutine) in the program been called?
  • Statement coverage – Has each statement in the program been executed?
  • Branch coverage – Has each branch (also called DD-path) of each control structure (such as if and case statements) been executed? For example, given an if statement, have both the true and false branches been executed? Another way of saying this is, has every edge in the program been executed?
  • Condition coverage (or predicate coverage) – Has each Boolean sub-expression evaluated both to true and false?

For example, consider the following C function:

int foo (int x, int y)
{
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}

Assume this function is a part of some bigger program and this program was run with some test suite.

  • If during this execution function ‘foo’ was called at least once, then function coverage for this function is satisfied.
  • Statement coverage for this function will be satisfied if it was called e.g. as foo(1,1), as in this case, every line in the function is executed including z = x;.
  • Tests calling foo(1,1) and foo(0,1) will satisfy branch coverage because, in the first case, both if conditions are met and z = x; is executed, while in the second case, the first condition (x>0) is not satisfied, which prevents executing z = x;.
  • Condition coverage can be satisfied with tests that call foo(1,0) and foo(0,1). These are necessary because in the first case (x>0) evaluates to true, while in the second it evaluates to false. At the same time, the first case makes (y>0) false, while the second makes it true.

Condition coverage does not necessarily imply branch coverage. For example, consider the following fragment of code:

if a and b then

Condition coverage can be satisfied by two tests:

  • a=true, b=false
  • a=false, b=true

However, this set of tests does not satisfy branch coverage since neither case will meet the if condition.
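This can be checked directly in code. The sketch below is a minimal Java rendering of the abstract `a and b` fragment; it uses the non-short-circuit `&` operator so both conditions are always evaluated, matching the abstract pseudocode.

```java
public class ConditionVsBranch {
    static boolean entered;

    // Minimal rendering of "if a and b then". The non-short-circuit '&'
    // guarantees both conditions are evaluated on every call.
    static void fragment(boolean a, boolean b) {
        entered = false;
        if (a & b) {
            entered = true;  // the true branch of the decision
        }
    }

    public static void main(String[] args) {
        // The two tests that achieve condition coverage:
        // each of a and b evaluates to both true and false.
        fragment(true, false);
        System.out.println(entered);  // false
        fragment(false, true);
        System.out.println(entered);  // false
        // The true branch is never taken, so branch coverage
        // is not satisfied by this pair of tests.
    }
}
```

Running both tests leaves the true branch unexecuted, confirming that condition coverage was met while branch coverage was not.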

Fault injection may be necessary to ensure that all conditions and branches of exception handling code have adequate coverage during testing.

Modified condition/decision coverage

A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and that every decision in the program has taken on all possible outcomes at least once. In this context a decision is a Boolean expression composed of conditions and zero or more Boolean operators. This definition is not the same as branch coverage; however, some do use the term decision coverage as a synonym for branch coverage.

Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (e.g., for avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently. For example, consider the following code:

if (a or b) and c then

The condition/decision criteria will be satisfied by the following set of tests:

  • a=true, b=true, c=true
  • a=false, b=false, c=false

However, the above tests set will not satisfy modified condition/decision coverage, since in the first test, the value of ‘b’ and in the second test the value of ‘c’ would not influence the output. So, the following test set is needed to satisfy MC/DC:

  • a=false, b=true, c=false
  • a=false, b=true, c=true
  • a=false, b=false, c=true
  • a=true, b=false, c=true
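The independence requirement can be verified by pairing up the four tests: within each pair exactly one condition changes, and the decision outcome flips. A small Java sketch of the decision from the text:

```java
public class McdcDemo {
    // The decision from the text: (a or b) and c.
    static boolean decision(boolean a, boolean b, boolean c) {
        return (a || b) && c;
    }

    public static void main(String[] args) {
        // MC/DC pairs: each pair differs in exactly one condition,
        // and the decision outcome flips, showing that condition
        // independently affects the outcome.
        // 'a' flips the outcome:
        System.out.println(decision(false, false, true)); // false
        System.out.println(decision(true,  false, true)); // true
        // 'b' flips the outcome:
        System.out.println(decision(false, false, true)); // false
        System.out.println(decision(false, true,  true)); // true
        // 'c' flips the outcome:
        System.out.println(decision(false, true,  false)); // false
        System.out.println(decision(false, true,  true));  // true
    }
}
```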

Multiple condition coverage

This criterion requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests:

  • a=false, b=false, c=false
  • a=false, b=false, c=true
  • a=false, b=true, c=false
  • a=false, b=true, c=true
  • a=true, b=false, c=false
  • a=true, b=false, c=true
  • a=true, b=true, c=false
  • a=true, b=true, c=true
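Because multiple condition coverage is simply the full truth table, the eight tests can be generated mechanically. A short sketch, reusing the same decision:

```java
public class MultipleConditionDemo {
    // The decision from the text: (a or b) and c.
    static boolean decision(boolean a, boolean b, boolean c) {
        return (a || b) && c;
    }

    public static void main(String[] args) {
        // Multiple condition coverage: exercise all 2^3 = 8
        // combinations of the three conditions.
        boolean[] values = {false, true};
        for (boolean a : values)
            for (boolean b : values)
                for (boolean c : values)
                    System.out.printf("a=%b b=%b c=%b -> %b%n",
                                      a, b, c, decision(a, b, c));
    }
}
```

For n conditions this grows as 2^n tests, which is why MC/DC, with its linear growth, is preferred for safety-critical software.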

Parameter value coverage

Parameter value coverage (PVC) requires that, in a method taking parameters, all the common values for those parameters be considered. The idea is that every common possible value of a parameter is tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each possible parameter value may leave a bug undetected. Testing only one of these could still result in 100% code coverage, since each line is covered, but as only one of the seven options is tested, there is only 14.2% PVC.
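A sketch of PVC-style tests for a string parameter. The method under test is hypothetical, invented here only to have something to exercise; the listed values come from the categories above.

```java
public class PvcDemo {
    // Hypothetical method under test: trims and lower-cases a name,
    // treating null as empty (an assumed specification for illustration).
    static String normalize(String s) {
        return s == null ? "" : s.trim().toLowerCase();
    }

    public static void main(String[] args) {
        // One test per common string parameter value category.
        System.out.println(normalize(null).equals(""));       // null
        System.out.println(normalize("").equals(""));         // empty
        System.out.println(normalize(" \t\n").equals(""));    // whitespace
        System.out.println(normalize(" Ada ").equals("ada")); // valid string
    }
}
```

Any single one of these calls would already give 100% line coverage of `normalize`, yet each exercises a genuinely different parameter value category.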

Other coverage criteria

There are further coverage criteria, which are used less often:

  • Linear Code Sequence and Jump (LCSAJ) coverage, a.k.a. JJ-path coverage – Has every LCSAJ/JJ-path been executed?
  • Path coverage – Has every possible route through a given part of the code been executed?
  • Entry/exit coverage – Has every possible call and return of the function been executed?
  • Loop coverage – Has every possible loop been executed zero times, once, and more than once?
  • State coverage – Has each state in a finite-state machine been reached and explored?
  • Data-flow coverage – Has each variable definition and its usage been reached and explored?
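Loop coverage from the list above can be demonstrated on a loop similar to the findTotal example earlier in the chapter. The helper below is a simplified, hypothetical variant written for this sketch.

```java
public class LoopCoverageDemo {
    // Sums the positive entries among the first n items of data
    // (a simplified stand-in for the findTotal loop).
    static int sumPositive(int[] data, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            if (data[i] > 0) total += data[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {5, -3, 7};
        // Loop coverage: the loop body runs zero times, once,
        // and more than once across these three calls.
        System.out.println(sumPositive(data, 0)); // 0  (zero iterations)
        System.out.println(sumPositive(data, 1)); // 5  (one iteration)
        System.out.println(sumPositive(data, 3)); // 12 (multiple iterations)
    }
}
```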

8.3 Grey Box Testing Technique

Grey Box Testing

Grey Box Testing is a technique to test the software product or application with partial knowledge of the internal workings of an application.

In this process, context-specific errors related to web systems are commonly identified. It increases testing coverage by concentrating on all of the layers of any complex system.

Grey Box Testing is a software testing method, which is a combination of both White Box Testing and Black Box Testing method.

  • In White Box testing internal structure (code) is known
  • In Black Box testing internal structure (code) is unknown
  • In Grey Box Testing internal structure (code) is partially known

Grey Box Testing gives the ability to test both sides of an application, presentation layer as well as the code part. It is primarily useful in Integration Testing and Penetration Testing.

Example of Grey Box Testing: While testing a website's features such as links or orphan links, if the tester encounters any problem with these links, he can make the changes straight away in the HTML code and check them in real time.

Why Grey Box Testing

Grey Box Testing is performed for the following reasons:

  • It provides the combined benefits of both black box testing and white box testing
  • It combines the input of developers as well as testers and improves overall product quality
  • It reduces the overhead of the long process of testing functional and non-functional types
  • It gives the developer enough free time to fix defects
  • Testing is done from the user's point of view rather than the designer's point of view

Grey Box Testing Strategy

To perform Grey box testing, it is not necessary that the tester has access to the source code. Tests are designed based on knowledge of algorithms, architectures, internal states, or other high-level descriptions of program behavior.

To perform Grey box Testing-

  • It applies the straightforward techniques of black box testing
  • It is based on requirement test case generation; as such, it presets all the conditions before the program is tested by the assertion method.

Techniques used for Grey box Testing are-

  • Matrix Testing: This testing technique involves defining all the variables that exist in the program.
  • Regression Testing: To check whether the change in the previous version has regressed other aspects of the program in the new version. It will be done by testing strategies like retest all, retest risky use cases, retest within firewall.
  • Orthogonal Array Testing or OAT: It provides maximum code coverage with minimum test cases.
  • Pattern Testing: This testing is performed on the historical data of the previous system defects. Unlike black box testing, grey box testing digs within the code and determines why the failure happened.

Usually, the Grey box methodology uses automated software testing tools to conduct the testing. Stubs and module drivers are created to relieve the tester from manually generating code.
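A minimal sketch of what a stub looks like in practice. The interface, stub, and module under test below are all hypothetical, invented for this illustration: the stub replaces a real backend service with canned answers so the module can be driven in isolation.

```java
// Hypothetical interface the module under test depends on.
interface InventoryService {
    int stockOf(String sku);
}

public class StubDemo {
    // A stub replaces the real service with canned answers so the
    // module can be tested without the real backend.
    static class InventoryStub implements InventoryService {
        public int stockOf(String sku) {
            return "SKU-1".equals(sku) ? 3 : 0;
        }
    }

    // Hypothetical module under test: decides whether an order ships.
    static boolean canShip(InventoryService inv, String sku, int qty) {
        return inv.stockOf(sku) >= qty;
    }

    public static void main(String[] args) {
        InventoryService stub = new InventoryStub();
        System.out.println(canShip(stub, "SKU-1", 2)); // true
        System.out.println(canShip(stub, "SKU-1", 5)); // false
    }
}
```

Here `main` plays the role of the module driver: it feeds inputs to `canShip` and checks the outputs, with the stub standing in for the real inventory backend.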

Steps to perform Grey box Testing are:

  • Step 1: Identify inputs
  • Step 2: Identify outputs
  • Step 3: Identify major paths
  • Step 4: Identify Subfunctions
  • Step 5: Develop inputs for Subfunctions
  • Step 6: Develop outputs for Subfunctions
  • Step 7: Execute test case for Subfunctions
  • Step 8: Verify correct result for Subfunctions
  • Step 9: Repeat steps 4 & 8 for other Subfunctions
  • Step 10: Repeat steps 7 & 8 for other Subfunctions

The test cases for grey box testing may include, GUI related, Security related, Database related, Browser related, Operational system related, etc.
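The steps above can be sketched on a small example. Everything here is hypothetical: the premise is that the tester knows, from design documents rather than source access, that prices are discounted per customer tier, and uses that partial internal knowledge to pick one test per internal path.

```java
public class GreyBoxSketch {
    // Hypothetical module under test. The tester knows (from design
    // docs, not the source) that discounts are applied per tier;
    // the tier names and rates here are assumptions.
    static int priceCents(int baseCents, int tier) {
        if (tier >= 2) return baseCents * 80 / 100; // internal path: bulk tier
        if (tier == 1) return baseCents * 90 / 100; // internal path: member tier
        return baseCents;                           // internal path: default
    }

    public static void main(String[] args) {
        // Steps 1-4: identify inputs (baseCents, tier), the output
        // (discounted price), the major paths (the three tiers), and
        // the subfunction under test.
        // Steps 5-8: develop inputs and expected outputs per path,
        // execute, and verify the result.
        System.out.println(priceCents(10000, 0)); // 10000 (default path)
        System.out.println(priceCents(10000, 1)); // 9000  (member path)
        System.out.println(priceCents(10000, 2)); // 8000  (bulk path)
    }
}
```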

8.4 Yellow Box Testing Technique

Yellow box testing is checking against the warning messages. It is also called Warning Messages Testing.

Yellow Box Testing includes the Acceptance Testing Techniques.

It is message-level testing. Yellow box testing is the validation of alert messages.

8.5 Green Box Testing Technique

Green box testing takes an external perspective of the test object to derive test cases, determining whether the system is environment friendly and free of adverse social implications, along with meeting the defined set of requirements.

Green Box Testing contains the Release Testing Techniques.


The following points summarize the topic above:

  • Black box testing is the Software testing method which is used to test the software without knowing the internal structure of code or program.
  • The main purpose of the Black Box is to check whether the software is working as per expected in requirement document & whether it is meeting the user expectations or not.
  • White Box Testing is the testing of a software solution’s internal coding and infrastructure.
  • Gray Box Testing is a technique to test the software product or application with partial knowledge of the internal workings of an application.
  • Gray Box Testing is a software testing method, which is a combination of both White Box Testing and Black Box Testing method.


Copyright 1999- Ducat Creative, All rights reserved.
