
    4.1 What is STLC?

    Just as developers follow the Software Development Life Cycle (SDLC), testers follow the Software Testing Life Cycle, which is called the STLC. It is the sequence of activities carried out by the testing team from the beginning of a project to its end.

    The Software Testing Life Cycle is a testing process executed as a sequence of activities in order to meet quality goals. It is not a single activity; it consists of many different activities which are executed to achieve a good-quality product.

    STLC Overview

    STLC stands for Software Testing Life Cycle. It is a sequence of activities performed by the testing team to ensure the quality of the software or product:

    • STLC is an integral part of the Software Development Life Cycle (SDLC), but it deals only with the testing phases.
    • STLC starts as soon as requirements are defined or SRD (Software Requirement Document) is shared by stakeholders.
    • STLC provides a step-by-step process to ensure quality software.
    • In the early stages of the STLC, while the software or product is still being developed, the testers can analyze and define the scope of testing, the entry and exit criteria, and the test cases. This helps reduce test-cycle time and improves quality.

    As soon as the development phase is over, the testers are ready with test cases and start execution. This helps to find bugs early. The STLC consists of the following phases:

    • Requirement analysis
    • Test Planning
    • Test case development
    • Environment Setup
    • Test Execution
    • Test Cycle Closure

    Each of the steps mentioned above has Entry Criteria (the minimum set of conditions that must be met before testing of that phase can start) as well as Exit Criteria (the minimum set of conditions that must be completed before the phase can be concluded). On this basis it is decided whether we can move to the next phase of the Testing Life Cycle or not.

    What is Entry and Exit Criteria?

    Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin.

    Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded.

    There are Entry and Exit Criteria for all levels of the Software Testing Life Cycle (STLC).
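    The gating role that entry and exit criteria play between phases can be sketched in a few lines of Python. The criterion names below are purely illustrative assumptions, not a standard checklist:

```python
def can_enter_phase(entry_criteria: dict) -> bool:
    """A phase may start only when every entry criterion is met."""
    return all(entry_criteria.values())

def can_exit_phase(exit_criteria: dict) -> bool:
    """A phase may be concluded only when every exit criterion is met."""
    return all(exit_criteria.values())

# Illustrative criteria for a Test Execution phase
entry = {"test cases reviewed": True, "environment ready": True}
exit_ = {"all planned tests run": True, "critical defects closed": False}

print(can_enter_phase(entry))   # execution may begin
print(can_exit_phase(exit_))    # but testing cannot yet be concluded
```

    In practice each STLC phase would carry its own criteria lists, and the decision to move on is made only when the current phase's exit check and the next phase's entry check both pass.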

    4.2 Phases of STLC

    STLC has the following phases, but it is not mandatory to follow all of them. The phases used depend on the nature of the software or product, the time and resources allocated for testing, and the SDLC model being followed.

    There are six major phases of STLC. Let's discuss the activities and deliverables involved in each step in detail.

    4.2.1 Requirement Analysis

    This is the very first phase of the Software Testing Life Cycle (STLC). In this phase the testing team goes through the requirements document, covering both functional and non-functional details, in order to identify the testable requirements.

    In case of any confusion, the QA team may set up a meeting with the clients and stakeholders (technical leads, business analysts, system architects, etc.) in order to clarify their doubts.

    Once the QA team is clear about the requirements, they document the acceptance criteria and get them approved by the customers.

    Activities to be done in Requirement analysis phase are given below:
    • Analyzing the System Requirement specifications from the testing point of view
    • Preparation of the Requirement Traceability Matrix (RTM)
    • Identifying the testing techniques and testing types
    • Prioritizing the features which need focused testing
    • Analyzing the Automation feasibility
    • Identifying the details about the testing environment where actual testing will be done
    Deliverables (Outcome) of Requirement analysis phase are:
    • Requirement Traceability Matrix (RTM)
    • Automation feasibility report
    4.2.2 Test Planning

    The Test Planning phase starts soon after the completion of the Requirement Analysis phase. In this phase the QA manager or QA lead prepares the Test Plan and Test Strategy documents. Based on these documents, they also come up with the testing effort estimations.

    Activities to be done in Test Planning phase are given below:
    • Estimation of testing effort
    • Selection of Testing Approach
    • Preparation of the Test Plan and Test Strategy documents
    • Resource planning and assigning roles and responsibilities
    • Selection of Testing tool
    Deliverables (Outcome) of Test Planning phase are:
    • Test Plan document
    • Test Strategy document
    • Best suited Testing Approach
    • Number of resources, skills required and their roles and responsibilities
    • Testing tool to be used
    4.2.3 Test Case Development

    A test case, in simple terms, is documentation specifying the inputs, pre-conditions, set of execution steps and expected result. A good test case is one that is effective at finding defects and also covers most of the scenarios/combinations on the system under test.

    Here is a step-by-step guide on how to develop test cases.

    Detailed study of the System under test
    • Before writing test cases, it is very important to have detailed knowledge about the system you are testing. It can be any application, website or software. Try to get as much information as possible through available documentation such as requirement specs, use cases, user guides and tutorials, or by getting hands-on with the software itself.
    • Gather all the possible positive scenarios as well as the odd cases which might break the system (destructive testing), such as stress testing, uncommon combinations of inputs, etc.
    Written in simple language
    • While writing test cases, it is highly recommended to use simple and understandable language.
    • It is equally important to keep your steps to the point and accurate.
    • Exact and consistent names (e.g. of the forms or fields under test) must be used to avoid ambiguity.
    Test case template

    A typical test case template contains the following parameters. Let us look at each parameter a good test case should include:

    i) Test Case ID: This field is defined by the type of system we are testing. Standard rules are as follows:

    • If we are making test case for a general application which doesn’t belong to any specific module then ID would start as TC001.
    • If we are making test cases for a module specific system then ID would start from MC001.
    • If a test case has more than one expected result, we version it, e.g. TC001.1, TC001.2, etc. All these test cases are sub-parts of TC001.

    In this way we can maintain all the test case IDs, and if in future any requirement is changed or added, we can simply add new test cases following the standard rules without changing the IDs of previously written test cases.
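    The ID rules above can be captured in a small validator. This is a sketch of the convention described in this section only; the prefixes and three-digit width are taken from the text, not from any wider standard:

```python
import re

# TC001-style IDs for a general application, MC001-style for a
# module-specific system, with an optional ".n" suffix when a test
# case has more than one expected result (e.g. TC001.2).
ID_PATTERN = re.compile(r"^(TC|MC)\d{3}(\.\d+)?$")

def is_valid_test_case_id(tc_id: str) -> bool:
    """Return True if the ID follows the naming convention."""
    return ID_PATTERN.fullmatch(tc_id) is not None

for tc_id in ["TC001", "MC001", "TC001.2", "TX001", "TC1"]:
    print(tc_id, is_valid_test_case_id(tc_id))
```

    Validating IDs mechanically like this keeps a growing test suite consistent when new cases are added over time.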

    ii) Test Case Name: This field can contain:

    • Name of the feature you are testing
    • Requirement number from the specifications
    • Name of a particular Button or input box
    • Requirement name as classified in client’s document

    The main advantage of maintaining this field is that if a requirement changes in the future, we can easily estimate how many test cases the change will affect and change/remove the corresponding test cases accordingly.

    iii) Description: This field summarizes what the respective test case is going to do. It explains what attribute is under test and under what condition. E.g. if a text box under test allows only numbers and alphabets, then for a negative scenario the description can be written as "Random special characters (@, #, %, $, ^, *) are entered".

    iv) Pre-Conditions: When the system needs to be in a particular base state for the function to be tested, these pre-conditions should be defined clearly.

    Pre-conditions could be:

    • A certain page that a user needs to be on
    • A certain data that should be in the system
    • A certain action to be performed before “execution steps” can be executed on that particular system.

    Pre-conditions should be satisfied before the test case execution starts.

    v) Execution Steps: These are the steps to be performed on the system under test to get the desired results. Steps must be defined clearly and accurately, and they are written and executed in numbered order.

    vi) Expected Results: These are the desired outcomes from the execution steps performed. Expected results should be clearly defined for each step. It specifies what the specification or client expects from that particular action.

    vii) Actual Result: This field holds the actual outcome after the execution steps were performed on the system under test. If the result matches the expected one, we can simply write "As expected"; otherwise we need to mention the exact result observed.

    viii) Status: Based on the actual result, this field can have one of the following values:

    • “Passed” – The expected and actual results match
    • “Failed”- The actual result and expected result do not match
    • “Not tested”- The test case has not been executed
    • “Not Applicable” – The test case no longer applies to the feature because the requirement was changed or removed
    • “Cannot be tested” – A precondition is not met; there may be a defect in one of the steps leading up to the function under test

    ix) Comments: This column is for additional information. For example, if the status is set to “Cannot be tested”, the tester can give the reason in this column.
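    The template above maps naturally onto a small data structure. The sketch below models the nine fields as a Python dataclass; the field names and the sample login scenario are illustrative assumptions, not part of any template standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of the test case template described above (fields i-ix)."""
    tc_id: str                 # i)   Test Case ID, e.g. TC001
    name: str                  # ii)  Test Case Name
    description: str           # iii) Description
    pre_conditions: list       # iv)  Pre-Conditions
    execution_steps: list      # v)   Execution Steps
    expected_result: str       # vi)  Expected Result
    actual_result: str = ""    # vii) Actual Result, filled at execution
    status: str = "Not tested" # viii) Status
    comments: str = ""         # ix)  Comments

    def record_result(self, actual: str) -> None:
        """Fill in the actual result and derive Passed/Failed status."""
        self.actual_result = actual
        self.status = "Passed" if actual == self.expected_result else "Failed"

# Hypothetical example: a login test case
tc = TestCase(
    tc_id="TC001",
    name="Login button - valid credentials",
    description="Valid username/password should open the dashboard",
    pre_conditions=["User account exists", "User is on the login page"],
    execution_steps=["Enter credentials", "Click Login"],
    expected_result="Dashboard is displayed",
)
tc.record_result("Dashboard is displayed")
print(tc.tc_id, tc.status)
```

    Deriving the status from the expected/actual comparison, rather than setting it by hand, is one way to keep the “Passed”/“Failed” values consistent with rule viii.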

    4.3 Software Test Management

    Software test management is the process of managing tests. It is often performed using tools that manage both types of tests, automated and manual, that have previously been specified by a test procedure.

    Software test management tools allow automatic generation of the Requirement Traceability Matrix (RTM), which is an indication of the functional coverage of the system under test (SUT). A test management tool often has multifunctional capabilities such as testware management, test scheduling, logging of results, test tracking, incident management and test reporting.
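    The RTM-to-coverage relationship mentioned above can be illustrated in a few lines. The requirement and test case IDs below are hypothetical, and this is only a sketch of the idea, not how any particular tool computes coverage:

```python
# Requirement Traceability Matrix: requirement -> test cases covering it.
rtm = {
    "REQ-001": ["TC001", "TC002"],
    "REQ-002": ["TC003"],
    "REQ-003": [],          # not yet covered by any test case
}

def functional_coverage(rtm: dict) -> float:
    """Percentage of requirements traced to at least one test case."""
    covered = sum(1 for tests in rtm.values() if tests)
    return 100.0 * covered / len(rtm)

print(f"{functional_coverage(rtm):.1f}% of requirements covered")
```

    A requirement with an empty test list is exactly the kind of gap the RTM is meant to expose before test execution starts.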

    Software Test Management Process:

    The Software Test Management Process is the set of activities from the start of testing to its end, and it gives discipline to testing. Following a test process gives us a plan at the outset and provides the means to plan and control the testing throughout the project cycle. It helps to track and monitor testing throughout the project, provides transparency of testing to stakeholders, and preserves the conducted tests for future reference. It affords a deep level of detail about the testing being carried out and gives all stakeholders a clear understanding of the testing activities before and after the project. Many tools (such as qTest, JIRA, Team Services and TestLink) are available to manage the test process. The test process can be defined and practiced differently according to testing needs. The typical activities in the test process are explained below.

    Test Plan:

    The test plan serves as an initial sketch for carrying out the testing; testing is tracked and monitored against it. It gives an upfront picture of the test challenges and aspects that will be addressed for the software, and maintaining a test plan lets us manage changes to it. When starting new projects, the test plan should be improved based on lessons learned from previous tests. A test plan covers an overview of the particular requirements to be tested, scope, functional and non-functional requirements, risks and mitigation, testing approaches, test schedule and deliverables, out-of-scope items and assumptions, the test team and allocations, the test environment, test activity mechanisms and any other special notes for testing.

    Test Design:

    Test design describes how to implement the testing: typically creating test cases with inputs and expected outputs of the system, and choosing which test cases are necessary for the execution of the test. The tester should have a clear understanding and appropriate knowledge to set the expected results; this defines the coverage of the testing so that the tester does not miss any scenario. There are two types of test design techniques: static testing and dynamic testing. Static testing tests without execution, mostly on artifacts such as documents, while dynamic testing tests by executing the system.

    Test Execution:

    Test execution is the act of running the tests and checking the actual system results against the expected results. It can be done manually or using an automation suite. During execution the tester needs to make sure that the user's needs are satisfied by the software. Test execution is conducted step by step by referring to the documents created during test design, and the tester needs to keep track of progress while executing the test cases.

    Exit Criteria:

    Exit criteria determine when to stop test execution. They are defined during the test planning phase and used in the test execution phase as a milestone. The tester needs to set the exit criteria at the beginning, though they may change during the project run. Factors such as client needs, system stability and fulfilled functionality decide the exit criteria. Once the exit criteria are reached, testing is stopped. Common exit criteria include all planned test cases having been executed, a target pass rate being reached, and no critical defects remaining open.
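    As a rough sketch, an exit-criteria check might combine a few measurable conditions. The specific thresholds and criteria below are illustrative assumptions; real projects define their own:

```python
def exit_criteria_met(executed_pct: float, pass_rate_pct: float,
                      open_critical_defects: int,
                      min_executed: float = 95.0,
                      min_pass_rate: float = 90.0) -> bool:
    """Illustrative exit check: enough tests executed, a high enough
    pass rate, and no critical defects left open."""
    return (executed_pct >= min_executed
            and pass_rate_pct >= min_pass_rate
            and open_critical_defects == 0)

print(exit_criteria_met(98.0, 95.0, 0))   # True  -> testing can stop
print(exit_criteria_met(98.0, 95.0, 2))   # False -> keep testing
```

    Encoding the criteria this way makes the milestone objective: everyone can see which condition is still unmet when testing cannot yet be concluded.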

    Test Reporting:

    Test reporting gives a picture of the test process and results for a particular testing cycle. The first thing to consider when defining the elements of a test report is who its audience is. For example, a project manager will want to see a high-level picture of the testing, intermediate people will wish to see more detail, and the client will expect the report broken down by criteria such as requirements or features. The test report is prepared and communicated periodically (daily, weekly, monthly, etc.) and needs to be sent at different stages and times. In future projects, the results of test reports need to be analysed and the lessons learned applied. A test report contains elements such as test execution status, completed percentage, planned vs. executed test cases, test environment, test execution by resource, risks and mitigation if any, defect summary, test scenarios and conditions, any assumptions, notes, etc.
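    A few of the report elements listed above (execution status, completed percentage, pass/fail counts) can be derived directly from per-test statuses. This is a minimal sketch with hypothetical test case IDs, not the format of any real reporting tool:

```python
from collections import Counter

def summarize(results: dict) -> dict:
    """Build a small periodic test report from per-test statuses."""
    counts = Counter(results.values())
    executed = counts["Passed"] + counts["Failed"]
    total = len(results)
    return {
        "total": total,
        "executed": executed,
        "completed_pct": round(100.0 * executed / total, 1),
        "passed": counts["Passed"],
        "failed": counts["Failed"],
    }

results = {"TC001": "Passed", "TC002": "Failed",
           "TC003": "Passed", "TC004": "Not tested"}
print(summarize(results))
```

    The same summary can then be sliced per requirement or per feature for client-facing reports, as the paragraph above suggests.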

    Software Test Management Responsibilities:
    • Software Test Management has a clear set of roles and responsibilities for improving the quality of the product.
    • Software test management helps with the development and maintenance of product metrics during the course of the project.
    • Software Test management enables developers to make sure that there are fewer design or coding faults.
    4.4 Software Test Strategies

    Software test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.

    Let’s survey the major types of Software test strategies that are commonly found:

    • Analytical: Let us take an example to understand this. The risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.
    • Model-based: Let us take an example to understand this. You can build mathematical models for loading and response for e commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working. Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
    • Methodical: Let us take an example to understand this. You might have a checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and they may have an early or late point of involvement for testing.
    • Process – or standard-compliant: Let us take an example to understand this. You might adopt the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little – if any – customization and may have an early or late point of involvement for testing.
    • Dynamic: Let us take an example to understand this. You might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as exploratory testing, have in common a concentration on finding as many defects as possible during test execution and adapting to the realities of the system under test as delivered; they typically emphasize the later stages of testing.
    • Consultative or directed: Let us take an example to understand this. You might ask the users or developers of the system to tell you what to test or even rely on them to do the testing. Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort and typically emphasize the later stages of testing simply due to the lack of recognition of the value of early testing.
    • Regression-averse: Let us take an example to understand this. You might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken. Regression-averse strategies have in common a set of procedures – usually automated – that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of post release test involvement.
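    The regression-averse strategy above boils down to: automate checks once, then re-run them all after every change. Here is that idea in miniature; `slugify` is a hypothetical stand-in for any function whose released behaviour must not regress:

```python
def slugify(title: str) -> str:
    """The function under test (hypothetical released behaviour)."""
    return "-".join(title.lower().split())

# Automated once, re-run after every change to the codebase.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("  Spaced   Out  ", "spaced-out"),
    ("already-a-slug", "already-a-slug"),
]

def run_regression_suite() -> list:
    """Return (input, expected, actual) for every check that now fails."""
    return [(inp, exp, slugify(inp))
            for inp, exp in REGRESSION_SUITE if slugify(inp) != exp]

print("regressions:", run_regression_suite())  # [] -> nothing has broken
```

    In a real project the suite would live in a test framework such as pytest or JUnit and run in CI; the point is only that an automated, exhaustive re-run is what makes the strategy regression-averse.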

    Some of these strategies are more preventive, others more reactive. For example, analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to test execution. This allows the early – and cheap – removal of defects. That is a strength of preventive approaches.

    Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until you have the actual system in front of you. That is a strength of reactive approaches.

    Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): There is no one best way. We suggest that you adopt whatever test approaches make the most sense in your particular situation, and feel free to borrow and blend.

    How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important:

    • Risks: Risk management is very important during testing, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
    • Skills: Consider which skills your testers possess and lack, because strategies must not only be chosen, they must also be executed. A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
    • Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.
    • Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a methodical test strategy that satisfies these regulators that you have met all their requirements.
    • Product: Some products, like weapons systems and contract-developed software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
    • Business: Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a model-based strategy.

    You must choose Software testing strategies with an eye towards the factors mentioned earlier, the schedule, budget, and feature constraints of the project and the realities of the organization and its politics.

    We mentioned above that a good team can sometimes triumph over a situation where materials, process and delaying factors are ranged against its success. However, talented execution of an unwise strategy is the equivalent of going very fast down a highway in the wrong direction.
    Therefore, you must make smart choices in terms of testing strategies.

    4.5 Software Test Planning

    A Software Test Plan is a document describing the testing scope and activities; it is the basis for formally testing any software/product in a project. It describes the scope, approach, resources and schedule of the intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

    This main document is often called the master test plan or project test plan, and it is usually developed during the early phase of the project.

    4.5.1 IEEE 829 test plan structure

    IEEE 829-2008, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in defined stages of software testing, each stage potentially producing its own separate type of document.

    Let’s take a look at the different parts of the IEEE 829.

    1. Test plan identifier: Unique identifying reference.
    2. Introduction: A brief introduction to the project and to the document.
    3. Test items: A test item is a software item that is the application under test.
    4. Features to be tested: A feature that needs to be tested on the testware.
    5. Features not to be tested: The features excluded from testing, and the reasons for not including them.
    6. Approach: Details about the overall approach to testing.
    7. Item pass/fail criteria: Documents whether a software item has passed or failed its test.
    8. Test deliverables: The deliverables produced as part of the testing process, such as test plans, test specifications and test summary reports.
    9. Testing tasks: All tasks for planning and executing the testing.
    10. Environmental needs: The environmental requirements such as hardware, software, OS, network configurations and tools required.
    11. Responsibilities: Lists the roles and responsibilities of the team members.
    12. Staffing and training needs: Captures the actual staffing requirements and any specific skills and training requirements.
    13. Schedule: States the important project delivery dates and key milestones.
    14. Risks and mitigation: High-level project risks and assumptions, and a mitigation plan for each identified risk.
    15. Approvals: Captures all approvers of the document, their titles and the sign-off date.
    4.5.2 Software Test Planning Activities:
    • Determining the scope and the risks, and identifying what is to be tested and what is NOT to be tested.
    • Documenting the test strategy.
    • Making sure that all the testing activities have been included.
    • Deciding the entry and exit criteria.
    • Evaluating the test estimate.
    • Planning when and how to test, deciding how the test results will be evaluated, and defining the test exit criteria.
    • Defining the test artefacts to be delivered as part of test execution.
    • Defining the management information, including the metrics required, defect resolution and risk issues.
    • Ensuring that the test documentation generates repeatable test assets.
    4.5.3 What things to keep in mind while planning tests?
    A good test plan is always kept short and focused. At a high level, you need to consider the purpose served by the testing work. Hence, it is very important to keep the following things in mind while planning tests:
    • What is in scope and what is out of scope for this testing effort?
    • What are the test objectives?
    • What are the important project and product risks? (details on risks will be discussed later).
    • What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
    • What is most critical for this product and project?
    • Which aspects of the product are more (or less) testable?
    • What should be the overall test execution schedule and how should we decide the order in which to run specific tests? (Product and planning risks, discussed later in this chapter, will influence the answers to these questions.)
    • How should the testing work be split into various levels (e.g., component, integration, system and acceptance)?
    • If that decision has already been made, you need to decide how to best fit your testing work in the level you are responsible for with the testing work done in those other test levels.
    • During the analysis and design of tests, you’ll want to reduce gaps and overlap between levels and, during test execution, you’ll want to coordinate between the levels. Such details dealing with inter-level coordination are often addressed in the master test plan.
    • In addition to integrating and coordinating between test levels, you should also plan to integrate and coordinate all the testing work to be done with the rest of the project. For example, what items must be acquired for the testing?
    • When will the programmers complete work on the system under test?
    • What operations support is required for the test environment?
    • What kind of information must be delivered to the maintenance team at the end of testing?
    • How many resources are required to carry out the work?
    4.5.4 What is the purpose and importance of test plans in software testing?

    The test plan is the project plan for the testing work to be done. It is not a test design specification, a collection of test cases or a set of test procedures; in fact, most test plans do not address that level of detail. Many people have different definitions for test plans.

    Why is it required to write test plans? There are three main reasons:

    First, writing a test plan guides our thinking. It forces us to confront the challenges that await us and focus our thinking on important topics.

    Second, the test planning process and the plan itself serve as the means of communication with other members of the project team, testers, peers, managers and other stakeholders.

    This communication allows the test plan to influence the project team and the project team to influence the test plan, especially in the areas of organization-wide testing policies and motivations; test scope, objectives and critical areas to test; project and product risks, resource considerations and constraints; and the testability of the item under test.

    Third, the test plan helps us to manage change. During early phases of the project, as we gather more information, we revise our plans. As the project evolves and situations change, we adapt our plans.

    Updating the plan at major milestones helps us keep testing aligned with project needs. As we run the tests, we make final adjustments to our plans based on the results.

    You might not have the time – or the energy – to update your test plans every time a change is made in the project, as some projects can be quite dynamic.

    In some situations it is better to write multiple test plans.

    4.5.5 Software Test Plan Types:

    One can have the following types of test plans:

    Master Test Plan: A single high-level test plan for a project/product that unifies all other test plans.

    Testing Level Specific Test Plans: Plans for each level of testing.

    • Unit Test Plan
    • Integration Test Plan
    • System Test Plan
    • Acceptance Test Plan

    Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan and Security Test Plan.

    4.6 SDLC vs. STLC

    SDLC stands for Software Development Life Cycle. It describes the various phases involved in the software development process. The different phases of Software Development Life Cycle are-

    • Requirement Gathering
    • Designing
    • Coding/Implementation
    • Testing
    • Deployment
    • Maintenance

    The Software Testing Life Cycle, or STLC, refers to all the activities performed during the testing of a software product. The different phases of the Software Testing Life Cycle are-

    • Requirement Analysis
    • Test Planning
    • Test Analysis and Design
    • Test Case Development
    • Test Environment Setup
    • Test Execution
    • Exit Criteria Evaluation and Reporting
    • Test Closure

    Let us consider the following points and thereby, compare STLC and SDLC.

    • STLC is part of SDLC. It can be said that STLC is a subset of the SDLC set.
    • STLC is limited to the testing phase, where the quality of the software or product is ensured. SDLC has a vast and vital role in the complete development of a software product.
    • However, STLC is a very important phase of SDLC and the final product or the software cannot be released without passing through the STLC process.
    • STLC is also part of the post-release/update cycle, i.e. the maintenance phase of SDLC, where known defects are fixed or new functionality is added to the software.

    As we know, development and testing are carried out in parallel. So now let's see the mapping between the phases of SDLC and STLC.

    The following comparison lists what happens in each phase on the SDLC side and on the STLC side.

    Requirement Gathering
    SDLC: The Business Analyst gathers the requirements and the development team analyzes them. After the high-level analysis, the development team starts analyzing from the architecture and design perspective.
    STLC: The testing team reviews and analyzes the SRD document, identifies the testing requirements (scope, verification and validation key points), and reviews the requirements for logical and functional relationships among the various modules, which helps identify gaps at an early stage.

    Design
    SDLC: The development team prepares the high-level and low-level design of the software based on the requirements. The Business Analyst works on the mockups of the UI design. Once the design is completed, it is signed off by the stakeholders.
    STLC: The Test Architect or a Test Lead usually plans the test strategy and identifies the testing points. Resource allocation and timelines are finalized here.

    Development
    SDLC: The development team starts developing the software and integrates it with different systems. Once all integration is done, a ready-to-test software or product is provided.
    STLC: The testing team writes the test scenarios to validate the quality of the product. Detailed test cases are written for all modules along with the expected behaviour. The prerequisites and the entry and exit criteria of a test module are identified here.

    Environment Setup
    SDLC: The development team sets up a test environment with the developed product to validate.
    STLC: The test team confirms the environment setup based on the prerequisites and performs smoke testing to make sure the environment is stable enough for the product to be tested.

    Testing
    SDLC: The actual testing is carried out in this phase, including unit testing, integration testing, system testing, defect retesting, regression testing, etc. The development team fixes any reported bug and sends it back to the tester for retesting. UAT is performed here after sign-off from SIT.
    STLC: System Integration Testing starts based on the test cases. Defects reported, if any, are fixed and retested. Regression testing is performed, and the product is signed off once it meets the exit criteria.

    Deployment/Product Release
    SDLC: Once sign-off is received from the various testing teams, the application is deployed in the production environment for real end users. Smoke and sanity testing in the production environment is completed as soon as the product is deployed.
    STLC: Test reports and metrics are prepared by the testing team to analyze the product.

    Maintenance
    SDLC: This covers post-deployment support, enhancements and updates, if any.
    STLC: In this phase, test cases, regression suites and automation scripts are maintained based on the enhancements and updates.
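The environment smoke check mentioned above can be sketched as a small script. This is a minimal illustration: the base URL and the `/health` endpoint are assumptions chosen for the example, not part of any particular product.

```python
# Minimal smoke-test sketch: verify the test environment is reachable
# before deeper testing begins. The /health endpoint is a hypothetical
# example; real environments expose their own readiness checks.
import urllib.request

def smoke_test(base_url, paths=("/health",)):
    """Return True only if every endpoint responds with HTTP 200."""
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                if resp.status != 200:
                    return False
        except OSError:
            # Connection refused, DNS failure, timeout, etc.
            return False
    return True
```

If the smoke test fails, the environment is not stable and test execution should not begin.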
    4.7 Maintenance Testing (Gray Box Testing)

    Most tests are conducted on software during its pre-release stage, but some are performed after the software has been released; one such form of testing is known as Maintenance Testing. Once a system is deployed, it is in service for years or even decades, and during this time the system and its operational environment are often corrected, changed or extended. Testing performed during this phase is called maintenance testing.

    Maintenance testing usually consists of two parts:

    • First, testing the changes that have been made, because the system was corrected, extended, or had additional features added to it.
    • Second, regression tests to prove that the rest of the system has not been affected by the maintenance work.
    Types of Maintenance Testing

    There are two basic types of Maintenance Testing. These are:

    • Confirmation Maintenance Testing: during this part of maintenance testing, the modifications and defect fixes are tested and retested until they execute flawlessly. While retesting, the original environment is maintained with the exact same data inputs, to make sure that no further errors occur and the validity of the modification or migration is confirmed beyond doubt.
    • Regression Maintenance Testing: once it has been confirmed that the modification no longer causes errors, it is time to verify that unintended defects have not spread elsewhere in the software. The testing conducted to find such accidental and incidental errors is known as regression testing.
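The two parts can be seen in a short example. The `discount` function and its defect are hypothetical, invented for illustration: the confirmation test re-runs the exact input from the defect report, and the regression tests re-check behaviour that was already correct before the fix.

```python
# Hypothetical module under maintenance: a defect in discount() has just
# been fixed (the reported bug was a wrong sign in the formula).

def discount(price, percent):
    return price - price * percent / 100

# Confirmation test: the exact input from the defect report must now pass.
assert discount(100, 10) == 90

# Regression tests: previously passing cases must still pass after the fix.
assert discount(200, 0) == 200
assert discount(50, 50) == 25

print("maintenance tests passed")
```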

    When confirmation testing and regression testing are conducted strictly as per the guidelines laid down, together they form the complete maintenance testing process.

    Gray Box Testing

    It is the combination of Black Box testing and White Box testing. In Black Box testing, the tester is not aware of the internal code, while in White Box testing the internal code and structure are known to the tester. In Gray Box testing, the tester has knowledge of some parts of the internal structure.

    This involves having access to internal data structures and algorithms for the purpose of designing the test cases. Based on this limited knowledge, the test cases are designed, and the tester then tests the application from outside, at the Black Box level. The Gray Box tester treats the program as a black box that must be analyzed from outside.

    Gray Box testing is considered non-intrusive and unbiased because it does not require the tester to have access to the internal code. The tester may know how the system components interact but does not have detailed knowledge of internal program functions and operations. A clear distinction exists between the developer and the tester, thereby minimizing the risk of personnel conflicts.

    Gray Box testing is beneficial because it combines the techniques of Black Box testing with the code-targeted focus of White Box testing. It is called Gray Box testing because, to the tester, the application is like a semi-transparent box: the tester can see inside it, but only partially.
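A short example of the idea, using a hypothetical `UserStore` class invented for illustration: the tester knows the internal detail that email keys are stored lowercased, and uses that knowledge to design a case-sensitivity test case, while still exercising only the public API.

```python
# Hypothetical class under test. The tester knows (gray-box knowledge)
# that emails are stored lowercased in an internal dict.

class UserStore:
    def __init__(self):
        self._by_email = {}          # internal detail: keys are lowercased

    def add(self, email, name):
        self._by_email[email.lower()] = name

    def find(self, email):
        return self._by_email.get(email.lower())

store = UserStore()
store.add("Alice@Example.com", "Alice")

# The call below is a black-box call through the public API, but the test
# case itself (mixed-case lookup) was derived from internal knowledge.
print(store.find("ALICE@EXAMPLE.COM"))   # Alice
```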

    Advantages of Gray Box Testing:
    • Offers combined benefits: as Gray Box testing is a combination of White Box and Black Box testing, it offers the advantages of both.
    • Non-intrusive: it is based on functional specifications and the architectural view rather than on source code or binaries, which makes it non-intrusive.
    • Unbiased testing: Gray Box testing maintains a boundary between tester and developer, avoiding conflicts.
    Drawbacks:
    • In Gray Box testing, complete White Box testing cannot be done because the source code/binaries are inaccessible.
    • It is difficult to associate defects with their causes when performing Gray Box testing on a distributed system.
    Gray-box testing Techniques:
    • Regression testing
    • Pattern Testing
    • Orthogonal array testing
    • Matrix testing
    Summary

    The following points summarize the topic above:

    • Software Testing Life Cycle is a testing process which is executed in a sequence, in order to meet the quality goals.
    • Software test management is the process of managing the testing activities.
    • Software test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates.
    • Software development life cycle (SDLC) and Software Testing Life cycle (STLC) go parallelly.
    • A Software Test Plan is a document describing the testing scope and activities.
    • Gray Box testing is beneficial because it combines the techniques of Black Box testing with the code-targeted focus of White Box testing.


    Copyright 1999- Ducat Creative, All rights reserved.