
    A level of software testing is a stage of the testing process at which a particular unit or component of a software system is tested. The main goal of testing at each level is to evaluate the system's compliance with the specified requirements.

    There are several testing levels that help check the behavior and performance of software. These levels are designed to identify missing areas and ensure consistency between the stages of the development lifecycle. SDLC models define phases such as requirement gathering, analysis, design, coding, testing, and deployment.

    All these phases go through the process of software testing levels. The four main testing levels are:

    • Unit Testing
    • Integration Testing
    • System Testing
    • Acceptance Testing
    Levels of Testing
    5.1 Unit Testing

    This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is different from the test data of the quality assurance team.

    The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.

    The advantage of detecting errors in the software early is that the team minimizes software development risks, as well as the time and money wasted in going back to undo fundamental problems in the program once it is nearly completed.

    Limitations of Unit Testing

    Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application.

    There is a limit to the number of scenarios and test data that a developer can use to verify source code. After having exhausted all the options, there is no choice but to stop unit testing and merge the code segment with other units.

    An example of unit testing is explained below.

    For example, if you are testing a function to check whether a loop or statement in a program is working properly, that is unit testing. A well-known example of a framework that allows automated unit testing is JUnit (a unit testing framework for Java). xUnit is a more general family of frameworks that supports other languages such as C#, ASP, C++, Delphi, and Python, to name a few.
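    The text above mentions JUnit for Java; an analogous automated unit test can be sketched in Python with the built-in unittest module. The discount function below is a hypothetical unit under test, not taken from the text.

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_boundary_values(self):
        # Boundary conditions: 0% and 100% are the edges of the valid range
        self.assertAlmostEqual(discount(80.0, 0), 80.0)
        self.assertAlmostEqual(discount(80.0, 100), 0.0)

    def test_invalid_input(self):
        # Error handling path: out-of-range input must raise
        with self.assertRaises(ValueError):
            discount(80.0, 150)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run
    unittest.main(argv=["discount-tests"], exit=False)
```

    Each test isolates one behaviour of the unit, which mirrors the goal stated above: show that the individual part is correct in terms of requirements and functionality.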

    Tests that are performed during the unit testing are explained below:
    • Module interface test: In the module interface test, it is checked whether information properly flows into the program unit (or module) and properly flows out of it.
    • Local data structures: These are tested to verify that the local data within the module is stored properly.
    • Boundary conditions: Software often fails at boundary-related conditions, so these are always tested to make sure that the program works properly at its boundary conditions.
    • Independent paths: All independent paths are tested to see that they properly execute their task and terminate at the end of the program.
    • Error handling paths: These are tested to verify that errors are handled properly by them.
    5.2 Integration Testing

    Integration testing is the level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.

    Why Integration Test?

    Integration testing is complex and requires some development and logical skill. That is true! Then what is the purpose of including this testing in our testing strategy?

    Here are some reasons:
    • In the real world, when applications are developed, they are broken down into smaller modules, and each developer is assigned one module. The logic implemented by one developer can be quite different from another's, so it becomes important to check whether the logic implemented by a developer meets expectations and renders the correct values in accordance with the prescribed standards.
    • Many a time, the format or structure of data changes when it travels from one module to another. Some values are appended or removed, which causes issues in the later modules.
    • Modules also interact with third-party tools or APIs, which also need to be tested to confirm that the data accepted by the API/tool is correct and that the response generated is as expected.
    • A very common problem in testing: frequent requirement changes! Many a time, developers deploy changes without unit testing them. Integration testing becomes important at that time.
    Advantages

    There are several advantages of this testing, and a few of them are listed below.

    • This testing makes sure that the integrated modules/components work properly.
    • Integration testing can be started once the modules to be tested are available. It does not require the other module to be completed for testing to be done, as Stubs and Drivers can be used for the same.
    • It detects the errors related to the interface.
    Challenges

    Listed below are few challenges that are involved in Integration Test.

    • Integration testing means testing two or more integrated systems in order to ensure that the combined system works properly. Not only should the integration links be tested, but exhaustive testing considering the environment should also be done to ensure that the integrated system works properly. There might be different paths and permutations that can be applied to test the integrated system.
    • Managing integration testing becomes complex because of the factors involved in it, such as the database, platform, environment, etc.
    • Integrating any new system with a legacy system requires a lot of changes and testing effort. The same applies while integrating any two legacy systems.
    • Integrating two different systems developed by two different companies is a big challenge, as it is not certain how one system will impact the other if changes are made to either of them.

    In order to minimize the impact while developing a system, few things should be taken into consideration such as possible integration with other systems, etc.

    Integration Testing Steps:
    • Prepare Integration Test Plan.
    • Prepare integration test scenarios & test cases.
    • Prepare test automation scripts.
    • Execute test cases.
    • Report the defects.
    • Track and re-test the defects.
    • Re-testing & testing goes on until integration testing is complete.
    Types of Integration Testing

    Given below are the types of integration testing along with their advantages and disadvantages.

    5.2.1 Big Bang Approach

    The big bang approach integrates all the modules in one go, i.e. it does not integrate the modules one by one. It verifies whether the system works as expected once integrated. If any issue is detected in the completely integrated system, it becomes difficult to find out which module has caused it.

    Finding the module that contains a defect is itself a time-consuming process in the big bang approach, and once the defect is detected, fixing it costs more because it is found at a later stage.

    Below is the pictorial representation of Big Bang Approach.

    Advantages of Big Bang approach:
    • It is a good approach for small systems.
    Disadvantages of Big Bang Approach:
    • It is difficult to detect the module which is causing an issue.
    • The big bang approach requires all the modules together for testing, which in turn leaves less time for testing, as design, development, and integration take most of the time.
    • Testing takes place all at once, which leaves no time for testing critical modules in isolation.
    5.2.2 Test Integration Approaches

    There are fundamentally 2 approaches for doing test integration:

    1. Bottom-up approach
    2. Top-down approach.

    Let’s consider the figure below to illustrate the approaches:

    Bottom-up approach:

    Bottom-up testing, as the name suggests, starts from the lowest or innermost unit of the application and gradually moves up. Integration testing starts from the lowest module and gradually progresses towards the upper modules of the application. This integration continues until all the modules are integrated and the entire application is tested as a single unit.

    In this case, modules B1C1, B1C2, B2C1, and B2C2 are the lowest modules, which are unit tested. Modules B1 and B2 are not yet developed. The functionality of modules B1 and B2 is to call the modules B1C1, B1C2, B2C1, and B2C2. Since B1 and B2 are not yet developed, we need some program, or a “simulator”, that will call the B1C1, B1C2, B2C1, and B2C2 modules. These simulator programs are called DRIVERS.

    In simple words, DRIVERS are dummy programs used to call the functions of the lowest module when the calling function does not exist. The bottom-up technique requires a module driver to feed test-case input to the interface of the module being tested.

    The advantage of this approach is that, if a major fault exists at the lowest unit of the program, it is easier to detect it, and corrective measures can be taken.

    The disadvantage is that the main program actually does not exist until the last module is integrated and tested. As a result, the higher level design flaws will be detected only at the end.

    Top-down approach

    This technique starts from the topmost module and gradually progresses towards the lower modules. Only the top module is unit tested in isolation. After this, the lower modules are integrated one by one. The process is repeated until all the modules are integrated and tested.

    In the context of our figure, testing starts from module A, and the lower modules B1 and B2 are integrated one by one. Here the lower modules B1 and B2 are not actually available for integration. So, in order to test the topmost module A, we develop “STUBS”.

    “Stubs” can be referred to as code snippets which accept the inputs/requests from the top module and return the results/response. This way, even though the lower modules do not exist, we are able to test the top module.

    In practical scenarios, the behavior of stubs is not as simple as it seems. In this era of complex modules and architecture, the called module most of the time involves complex business logic, like connecting to a database. As a result, creating stubs becomes as complex and time-consuming as writing the real module. In some cases, the stub module may turn out to be bigger than the simulated module.

    Both stubs and drivers are dummy pieces of code used for testing the “non-existing” modules. They trigger the functions/methods and return the response, which is compared against the expected behavior.

    Let’s conclude some difference between Stubs and Driver:

    Stubs                                        Drivers
    Used in the top-down approach                Used in the bottom-up approach
    The topmost module is tested first           The lowest modules are tested first
    Simulate the lower-level components          Simulate the higher-level components
    Dummy programs for lower-level components    Dummy programs for higher-level components
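    The stub/driver distinction above can be sketched in Python. The module names (A, B1, B1C1) follow the figure; all function bodies are hypothetical.

```python
# --- Top-down: module A is ready, lower module B1 is not, so a STUB stands in. ---
def b1_stub(order_id):
    """Stub: accepts the request from the top module and returns a canned response."""
    return {"order_id": order_id, "status": "shipped"}  # hard-coded dummy result

def module_a(order_id, lookup=b1_stub):
    """Top module under test; the real B1 can be injected once it exists."""
    record = lookup(order_id)
    return "Order {} is {}".format(record["order_id"], record["status"])

# --- Bottom-up: lowest module B1C1 exists, its caller does not, so a DRIVER calls it. ---
def b1c1(x, y):
    """Lowest-level module under test."""
    return x + y

def driver_for_b1c1():
    """Driver: feeds test-case input to the module's interface and checks the output."""
    return b1c1(2, 3) == 5

print(module_a(42))        # exercises the top module through the stub
print(driver_for_b1c1())   # exercises the lowest module through the driver
```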

    Change is the only constant in this world, so we have another approach, called the “hybrid approach”, which combines the features of both the top-down and bottom-up approaches.

    In Hybrid Integration Testing, we exploit the advantages of Top-down and Bottom-up approaches. As the name suggests, we make use of both the Integration techniques.

    Features of Hybrid Integration Testing
    • The system is viewed as three layers: the main target layer, a layer above the target layer, and a layer below the target layer.
    • Testing mainly focuses on the middle target layer, which is selected on the basis of system characteristics and the structure of the code.
    • Hybrid integration testing can be adopted if the customer wants a working version of the application as soon as possible, as it aims at producing a basic working system in the earlier stages of the development cycle.
    Integration Testing Example

    For example, testing the keyboard of a computer by itself is unit testing, but combining the keyboard and the mouse of a computer to see whether they work together is integration testing. It is therefore a prerequisite that a system be unit tested before integration testing is performed.

    Difference between Unit Testing and Integration Testing
    Unit Testing | Integration Testing
    1. It is not preceded or followed by any other testing level. | It occurs after unit testing and before system testing.
    2. It is not abbreviated by any name. | It is abbreviated as “I&T”, which is why it is sometimes also called Integration and Testing.
    3. It is not further divided into types. | It is further divided into top-down integration, bottom-up integration, and so on.
    4. It may not catch integration errors or other system-wide issues, because unit testing only tests the functionality of the units themselves. | Integration testing uncovers errors that arise when modules are integrated to build the overall system.
    5. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. | The goal of integration testing is to combine modules in the application and test them as a group to see that they are working fine.
    6. It does not follow any other testing level. | It follows unit testing and precedes system testing.
    7. It starts from the module specification. | It starts from the interface specification.
    8. Unit testing tests the visibility of code in detail. | Integration testing tests the visibility of the integration structure.
    9. It requires complex scaffolding (frames). | It requires some scaffolding (frames).
    10. It pays attention to the behavior of single modules. | It pays attention to the integration among modules.
    11. It is only a kind of white box testing. | It is a kind of both black box and white box testing.
    5.3 System Testing

    System Testing (ST) is a black box testing technique performed to evaluate the complete system's compliance with specified requirements. In system testing, the functionalities of the system are tested from an end-to-end perspective.

    System testing is usually carried out by a team that is independent of the development team in order to measure the quality of the system without bias. It includes both functional and non-functional testing.

    System testing is important because of the following reasons:
    • System testing is the first level of testing where the application is tested as a whole.
    • The application is tested thoroughly to verify that it meets the functional and technical specifications.
    • The application is tested in an environment that is very close to the production environment where the application will be deployed.
    • System testing enables us to test, verify, and validate both the business requirements as well as the application architecture.
    Types of System Testing:
    5.3.1 Usability Testing

    Usability testing is a type of software testing where a small set of target end-users of a software system “use” it to expose usability defects. This testing mainly focuses on the user's ease of using the application, flexibility in handling controls, and the ability of the system to meet its objectives. It is also called user experience testing.

    This testing is recommended during the initial design phase of the SDLC, which gives more visibility into the expectations of the users. Usability is difficult to evaluate and measure, but it can be assessed based on the parameters below:

    • Level of skill required to learn/use the software. It should maintain a balance for both novice and expert users.
    • Time required to get accustomed to using the software.
    • The measure of increase in user productivity if any.
    • Assessment of a user’s attitude towards using the software.
    Usability Testing Process

    Usability testing process consists of the following phases

    Planning: During this phase, the goals of the usability test are determined. Having volunteers sit in front of your application and recording their actions is not a goal. You need to determine the critical functionalities and objectives of the system, and assign tasks to your testers which exercise these critical functionalities. During this phase, the usability testing method, the number and demographics of usability testers, and the test report formats are also determined.

    Recruiting: During this phase, you recruit the desired number of testers as per your usability test plan. Finding testers who match your demographic (age, sex etc.) and professional (education, job etc.) profile can take time.

    Usability Testing: During this phase, usability tests are actually executed.

    Data Analysis: Data from usability tests is thoroughly analyzed to derive meaningful inferences and give actionable recommendations to improve overall usability of your product.

    Reporting: Findings of the usability test are shared with all concerned stakeholders, which can include the designer, developer, client, and CEO.

    User interface testing

    User interface testing is a technique used to identify the presence of defects in a product/software under test through its graphical user interface (GUI).

    Characteristics of GUI Testing:
    • The GUI is a hierarchical, graphical front end to the application that contains graphical objects with a set of properties.
    • During execution, the values of the properties of each object of the GUI define the GUI state.
    • GUI testing has capabilities to exercise GUI events like key presses and mouse clicks.
    • It is able to provide inputs to the GUI objects.
    • It checks the GUI representations to see if they are consistent with the expected ones.
    • It strongly depends on the technology used.
    Approaches of GUI Testing:
    • Manual Based: Based on the domain and application knowledge of the tester.
    • Capture and Replay: Based on capture and replay of user actions.
    • Model-based testing: Based on the execution of user sessions based on a GUI model. Various GUI models are briefly discussed below.
    Model Based Testing In Brief:
    • Event-based model: all events of the GUI need to be exercised at least once.
    • State-based model: “all states” of the GUI are to be exercised at least once.
    • Domain model: Based on the application domain and its functionality.
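    The event-based model above can be illustrated with a small sketch in Python. The login screen, its events, and their handlers are hypothetical.

```python
class LoginScreenModel:
    """Toy GUI model: a mapping from event names to handler functions."""
    def __init__(self):
        self.log = []
        self.events = {
            "type_username": lambda: self.log.append("username set"),
            "type_password": lambda: self.log.append("password set"),
            "click_login":   lambda: self.log.append("login clicked"),
        }

    def fire(self, event):
        self.events[event]()

def exercise_all_events(model):
    """Event-based coverage: fire every declared event at least once."""
    for event in model.events:
        model.fire(event)
    return model.log

model = LoginScreenModel()
log = exercise_all_events(model)
assert len(log) == len(model.events)  # every event was executed once
```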
    Manual support testing

    Manual support testing includes, first, the evaluation of the process (that is, whether the process is acceptable or not) and, second, the execution of the process. Manual support testing is the testing of the manual functions.

    Manual support testing is similar to an examination in which the user is asked to provide the answer based on the rules and procedures available to them.

    There are some objectives of manual support testing as follows:

    • Verify that the manual support procedures are documented and complete.
    • Determine that the responsibility of the manual support has been assigned.
    • Determine that the people working in manual support are adequately trained.
    • Determine that the manual support and the automated segment are properly interfaced.

    The following are the steps to be followed while performing the manual testing:

    • The manual tester should firstly understand the functionality of the program.
    • Then he should prepare the environment of testing.
    • Prepare the test cases and execute them manually.
    • Observe and verify the actual result.
    • Record the result as pass or fail.
    When to use manual support testing?

    Verification through manual support testing should be done throughout the project life cycle; it should not be deferred to the later stages of the life cycle. This testing is best done during the installation phase.

    5.3.2 Functional Testing

    Functional testing is a testing technique used to test the features/functionality of the system or software; it should cover all scenarios, including failure paths and boundary cases.

    Functional Testing Techniques:

    There are two major Functional Testing techniques as shown below:

    Functionality testing is performed to verify that a software application performs and functions correctly according to design specifications. During functionality testing we check the core application functions, text input, menu functions, installation and setup on localized machines, etc.

    The following is needed to be checked during the functionality testing:

    • Installation and setup on localized machines running localized operating systems and local code pages.
    • Text input, including the use of extended characters or non-Latin scripts.
    • Core application functions.
    • String handling, text, and data, especially when interfacing with non-Unicode applications or modules.
    • Regional settings defaults.
    • Text handling (such as copying, pasting, and editing) of extended characters, special fonts, and non-Latin scripts.
    • Accurate hot-key shortcuts without any duplication.

    Functionality testing verifies that an application is still fully functional after localization. Even applications which are professionally internationalized according to world-readiness guidelines require functionality testing.


    Functional Testing Process:

    In order to functionally test an application, the following steps must be observed.

    • Understand the Requirements
    • Identify test input (test data)
    • Compute the expected outcomes with the selected test input values
    • Execute test cases
    • Comparison of actual and computed expected result
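    The steps above can be sketched as a small script. The shipping_fee function and its fee schedule are hypothetical stand-ins for the application under test.

```python
def shipping_fee(weight_kg):
    """Hypothetical function under test: flat fee of 5.0 plus 2.0 per kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 + 2.0 * weight_kg

# Identify test inputs and compute the expected outcomes up front
cases = [
    (1.0, 7.0),
    (2.5, 10.0),
    (10.0, 25.0),
]

# Execute the test cases, then compare actual against expected results
results = []
for weight, expected in cases:
    actual = shipping_fee(weight)
    results.append((weight, expected, actual, actual == expected))

assert all(ok for (_, _, _, ok) in results)  # all comparisons passed
```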
    5.3.2.1 GUI/Behavioural Coverage

    GUI testing is a testing technique in which the application's user interface is tested to verify that the application performs as expected with respect to user interface behaviour.

    GUI testing covers the application's behaviour towards keyboard and mouse movements, and how different GUI objects such as toolbars, buttons, menu bars, dialog boxes, edit fields, and lists respond to user input.

    GUI Testing Checklist:

    • Check Screen Validations
    • Verify All Navigations
    • Check usability Conditions
    • Verify Data Integrity
    • Verify the object states
    • Verify the date Field and Numeric Field Formats
    GUI Automation Tools

    Following are some of the open source GUI automation tools in the market:

    Product Licensed Under URL
    AutoHotkey GPL http://www.autohotkey.com/
    Selenium Apache http://docs.seleniumhq.org/
    Sikuli MIT http://sikuli.org
    Robot Framework Apache www.robotframework.org
    Watir BSD http://www.watir.com/
    Dojo Toolkit BSD http://dojotoolkit.org/

    Following are some of the Commercial GUI automation tools in the market.

    Product Vendor URL
    AutoIT AutoIT http://www.autoitscript.com/site/autoit/
    EggPlant TestPlant www.testplant.com
    QTP HP http://www8.hp.com/us/en/software-solutions
    Rational Functional Tester IBM http://www-03.ibm.com/software/products/us/en/functional
    Infragistics Infragistics www.infragistics.com
    iMacros IOpus http://www.iopus.com/iMacros/
    CodedUI Microsoft http://www.microsoft.com/visualstudio/
    SilkTest Micro Focus International http://www.microfocus.com/
    5.3.2.2 Input Domain Coverage

    Domain testing is a type of functional testing which tests the application by giving inputs and evaluating its outputs. It is a software testing technique in which the output of a system is tested with a minimum number of inputs, so as to ensure that the system does not accept invalid and out-of-range input values.

    Domain testing is one of the most important functional testing methods. The main goal of domain testing is to check whether the system accepts input within the acceptable range and delivers the required output. It also verifies that the system does not accept inputs, conditions, and indices outside the specified or valid range.

    Domain testing might involve testing any one input variable or a combination of input variables.
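    A minimal domain-testing sketch, assuming a hypothetical validator whose valid input domain is 18 to 60 inclusive:

```python
def accept_age(age):
    """Hypothetical validator: the valid domain is 18..60 inclusive."""
    return 18 <= age <= 60

# Minimum number of inputs: the boundary values of the valid range,
# one interior value, and values just outside each boundary.
in_range     = [18, 35, 60]    # lower boundary, interior, upper boundary
out_of_range = [17, 61, -5]    # just below, just above, far outside

assert all(accept_age(a) for a in in_range)          # accepted: within the domain
assert not any(accept_age(a) for a in out_of_range)  # rejected: outside the domain
```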

    5.3.2.3 Error Handling Coverage

    Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. Specialized programs, called error handlers, are available for some applications. The best programs of this type forestall errors if possible, recover from them when they occur without terminating the application, or (if all else fails) gracefully terminate an affected application and save the error information to a log file.

    In programming, a development error is one that can be prevented. Such an error can occur in syntax or logic. Syntax errors, which are typographical mistakes or improper use of special characters, are handled by rigorous proofreading. Logic errors, also called bugs, occur when executed code does not produce the expected or desired result. Logic errors are best handled by meticulous program debugging. This can be an ongoing process that involves, in addition to the traditional debugging routine, beta testing prior to official release and customer feedback after official release.

    A run-time error takes place during the execution of a program, and usually happens because of adverse system parameters or invalid input data. An example is the lack of sufficient memory to run an application or a memory conflict with another program. On the Internet, run-time errors can result from electrical noise, various forms of malware or an exceptionally heavy demand on a server. Run-time errors can be resolved, or their impact minimized, by the use of error handler programs, by vigilance on the part of network and server administrators, and by reasonable security countermeasures on the part of Internet users.
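    An error handler of the kind described above, recovering from invalid input without terminating the application and saving the error information to a log file, might be sketched as follows (the parse_quantity function and the log file name are hypothetical):

```python
import logging

# Save error information to a log file, as described above
logging.basicConfig(filename="errors.log", level=logging.ERROR)

def parse_quantity(text):
    """Hypothetical error handler: recover from bad input instead of crashing."""
    try:
        value = int(text)
        if value < 0:
            raise ValueError("negative quantity: {}".format(value))
        return value
    except ValueError as exc:
        logging.error("invalid quantity %r: %s", text, exc)  # logged, not fatal
        return 0  # recover with a safe default; the application keeps running

print(parse_quantity("12"))   # valid input
print(parse_quantity("abc"))  # run-time error handled gracefully
```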

    Usage
    • It determines the ability of the application system to process incorrect transactions properly.
    • Errors encompass all unexpected conditions.
    • In some systems, approximately 50% of the programming effort is devoted to handling error conditions.
    Objective
    • Determine that the application system recognizes all expected error conditions.
    • Determine that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
    • Determine that reasonable control is maintained over errors during the correction process.
    How to Use?
    • A group of knowledgeable people is required to anticipate what can go wrong in the application system.
    • All the people knowledgeable about the application need to assemble to integrate their knowledge of the user area, auditing, and error tracking.
    • Logical test error conditions should then be created based on this assimilated information.
    When to Use?
    • Throughout SDLC.
    • The impact of errors should be identified and corrected to reduce errors to an acceptable level.
    • Used to assist in error management process of system development and maintenance.
    Example
    • Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems.
    • Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with errors which were not present in the system earlier.
    5.3.2.4 Manipulation Coverage

    Manipulation coverage concerns the correctness of outputs or outcomes, i.e. returning correct output values. In this testing it is checked whether, when the user enters input, the output provided to the user is correct. Manipulation coverage can also be referred to as calculation-related testing. It is a very important and useful kind of testing. Manipulation coverage is the process of evaluating the final product to check whether the software meets the customer's expectations and requirements. It reduces the chances of failures in the software application or product.

    When conducting manipulation coverage, you typically need to follow a process that looks something like this:

    • Use test data to identify inputs
    • Determine what the expected outcome should be based on those inputs
    • Run the test cases with the proper inputs
    • Compare the expected results to the actual results
    5.3.2.5 Order of Functionality

    Order of functionality checks the existence of functionality with respect to the customer requirements.

    5.3.2.6 Backend coverage

    Backend coverage is also known as database testing. Database testing means checking the schema, tables, triggers, etc. of the database under test. It may involve creating complex queries to load/stress test the database and check its responsiveness, and it checks data integrity and consistency.

    Database testing basically include the following:

    • Data validity testing: For data validity testing, one should be good at SQL queries.
    • Data integrity testing: For data integrity testing, one should know about referential integrity and the different constraints.
    • Performance related to the database: For performance-related testing, one should have an idea about the table structure and design.
    • Testing of procedures, triggers, and functions: For testing procedures, triggers, and functions, one should be able to understand them.
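    The validity and integrity checks above can be sketched against an in-memory SQLite database using Python's built-in sqlite3 module (the students/marks schema is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE marks (
    student_id INTEGER REFERENCES students(id),
    score INTEGER CHECK (score BETWEEN 0 AND 100))""")
conn.execute("INSERT INTO students VALUES (1, 'Asha')")
conn.execute("INSERT INTO marks VALUES (1, 85)")

# Data validity: verify the inserted data with a simple SQL query
count = conn.execute("SELECT COUNT(*) FROM marks WHERE score > 50").fetchone()[0]
assert count == 1

# Data integrity: a mark pointing at a missing student must be rejected
try:
    conn.execute("INSERT INTO marks VALUES (99, 70)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
assert violated  # the foreign-key constraint held
```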
    5.3.2.7 Smoke Testing

    Smoke testing is a type of software testing which ensures that the major functionalities of the application are working fine. This testing is also known as “build verification testing”. It is a non-exhaustive testing with very limited test cases to ensure that the important features are working fine and we are good to proceed with the detailed testing.

    The term “smoke testing” originated from hardware testing, where a device, when first switched on, is checked for smoke or fire from its components. This ensures that the hardware's basic components are working fine and no serious failures are found.

    Similarly, when we do smoke testing of an application then this means that we are trying to ensure that there should NOT be any major failures before giving the build for exhaustive testing.

    • The purpose of the smoke testing is to ensure that the critical functionalities of an application are working fine.
    • This is a non-exhaustive testing with very limited number of test cases.
    • It is also known as Build verification testing where the build is verified by testing the important features of the application and then declaring it as good to go for further detailed testing.
    • Smoke testing can be done by developers before releasing the build to the testers and post this it is also tested by the testing team to ensure that the build is stable enough to perform the detailed testing.
    • Usually smoke testing is performed with positive scenarios and with valid data.
    • It is a type of shallow and wide testing because it covers all the basic and important functionalities of an application.
    • Usually the smoke testing is documented.
    • Smoke testing is like a normal health check up of the build of an application.
    Examples for Smoke Testing

    Let us assume that there is an application like “Student Network” which has 15 modules. Among them, there are 4 important components: the login page and the adding, updating, and deleting of student details. As a part of smoke testing we will test the login page with valid input. After login we will test the addition, updating, and deletion of records. If all the 4 critical components work fine, then the build is stable enough to proceed with detailed testing. This is known as smoke testing.
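    The “Student Network” example can be sketched as a smoke-test script; the application class and its four critical operations are hypothetical stand-ins.

```python
class StudentNetwork:
    """Toy application with the four critical components named above."""
    def __init__(self):
        self.records = {}
    def login(self, user, password):
        return user == "admin" and password == "secret"
    def add(self, sid, name):
        self.records[sid] = name
        return True
    def update(self, sid, name):
        if sid not in self.records:
            return False
        self.records[sid] = name
        return True
    def delete(self, sid):
        return self.records.pop(sid, None) is not None

def smoke_test(app):
    """Shallow-and-wide check: one positive scenario per critical feature."""
    checks = [
        app.login("admin", "secret"),  # login page with valid input
        app.add(1, "Ravi"),            # add a record
        app.update(1, "Ravi K"),       # update it
        app.delete(1),                 # delete it
    ]
    return all(checks)  # True means the build can go on to detailed testing

print(smoke_test(StudentNetwork()))
```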

    How smoke testing works

    Quality assurance (QA) testers perform smoke testing after the developers deliver each new build of an application. If the code passes the smoke test, the build moves on to more rigorous tests, such as integration and system tests. If the smoke test fails, the testers have discovered a major flaw that halts all further testing, and QA asks the developers to send another build. This one broad initial test is a more effective strategy for improving software quality than conducting specific, rigorous tests this early in the development process.
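    The gating flow described above can be sketched as a simple function: run the smoke suite first and halt everything on the first failure. The test callables here are hypothetical stand-ins for real test cases.

```python
def qa_gate(smoke_tests, detailed_tests):
    """Run the smoke suite; only on full success run the detailed suites."""
    for test in smoke_tests:
        if not test():
            # a major flaw was found: halt all further testing
            return "build rejected: send another build"
    for test in detailed_tests:
        if not test():
            return "build accepted, detailed testing found failures"
    return "build accepted, all tests passed"
```

    A failing smoke test short-circuits the pipeline, so no effort is spent running the detailed suites against a broken build.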

    Smoke testing is also performed from the perspective of user experience (UX). This approach includes testing key functionalities, such as whether the build is accessible and whether the user interface (UI) and login mechanism function. Other checks confirm that a selected action produces the intended result. For example, if a user adds an item to a shopping cart in an e-commerce web application, does the item appear in the cart?

    When to use smoke testing
    Smoke testing is used in the following scenarios:
    • It is done by developers before giving the build to the testing team.
    • It is done by the testers before they start detailed testing.
    • It is done to ensure that the basic functionalities of the application are working fine.
    Advantages of Smoke testing
    • It helps in finding bugs at an early stage of testing.
    • It helps in finding issues introduced by the integration of components.
    • It helps in verifying that issues fixed in the previous build are NOT impacting the major functionalities of the application.
    • Only a very limited number of test cases is required.
    • Smoke testing can be carried out in a short time.
    Disadvantages of Smoke testing
    • Smoke testing does not cover detailed testing.
    • It is a non-exhaustive form of testing with a small number of test cases, because of which we are not able to find other critical issues.
    • Smoke testing is not performed with negative scenarios or invalid data.

    Features of Smoke Testing:
    • Identifying the business-critical functionalities that a product must satisfy.
    • Designing and executing test cases for the basic functionalities of the application.
    • Ensuring that every build passes the smoke test before testing proceeds.
    • Smoke tests uncover obvious errors, which saves the test team time and effort.
    • Smoke tests can be manual or automated.
    5.3.2.8 Sanity Testing

    Sanity testing is a software testing technique in which the test team runs some basic tests whenever a new build is received for testing. The terms Smoke Test, Build Verification Test, Basic Acceptance Test, and Sanity Test are often used interchangeably; however, each of them applies in a slightly different scenario.

    Sanity testing is a subset of regression testing and is performed when there is not enough time for thorough testing.

    Sanity testing is surface-level testing in which a QA engineer verifies that all the menus, functions, and commands available in the product are working fine.

    A sanity test is usually unscripted and helps to identify missing dependent functionalities. It is used to determine whether a section of the application still works after a minor change.

    Sanity testing is narrow and deep: it is a narrow regression test that focuses on one or a few areas of functionality. It is usually performed when a minor bug is fixed or when there is a small change in functionality, to ensure that the functionality works as expected.

    Example of Sanity Testing

    For example, in a project there are 5 modules: login page, home page, user’s details page, new user creation and task creation.

    Suppose we have a bug in the login page: the login page’s username field accepts usernames which are shorter than 6 alphanumeric characters, and this is against the requirements, as in the requirements it is specified that the username should be at least 6 alphanumeric characters.

    Now the bug is reported by the testing team to the development team to fix it. After the development team fixes the bug and passes the build back, the testing team also checks the other modules of the application to verify that the bug fix has not affected their functionality. But keep one point in mind: because of the short time available, the testing team checks only the surface functionality of the modules and does not test them in depth.
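    The username rule from this example can be written as a quick scripted check. This is a sketch; `is_valid_username` is a hypothetical helper standing in for the login page's validation logic.

```python
import re

def is_valid_username(username):
    """Requirement from the example: at least 6 alphanumeric characters."""
    return re.fullmatch(r"[A-Za-z0-9]{6,}", username) is not None

# Sanity checks around the bug fix: before the fix, names shorter than
# 6 characters were wrongly accepted.
assert not is_valid_username("abc12")    # 5 chars: must be rejected
assert is_valid_username("abc123")       # 6 chars (boundary): accepted
assert not is_valid_username("abc_123")  # non-alphanumeric: rejected
```

    A sanity pass verifies exactly this narrow behaviour plus a surface check of the other modules, rather than re-running the full regression suite.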

    Sanity testing is performed when the development team needs to know quickly the state of the product after changes in the code, or when a controlled code change is made to fix a critical issue and a stringent release time-frame does not allow complete regression testing.

    Few points about sanity testing:
    • It is surface-level testing that follows a narrow and deep approach, concentrating on detailed testing of a few limited features.
    • In sanity testing the testers verify the commands, functions, and menus in the product.
    • It is a subset of regression testing.
    • It is performed when there is not enough time for detailed testing.
    • Sanity testing is usually not scripted.
    • Sanity testing is brief, quick testing to ensure that the changes work as expected and as per the specification documents.
    • Sanity testing checks that minor bug fixes and functionality changes are working, and at the same time ensures that the related functionality is intact.
    Advantages of Sanity testing:
    • It saves a lot of time and effort because sanity testing focuses on one or a few areas of functionality.
    • No effort is put into documentation because it is usually unscripted.
    • It helps in identifying missing dependent objects.
    • It is used to verify that a small piece of functionality still works after a minor change.
    Disadvantages of Sanity testing:
    • Sanity testing focuses only on the commands and functions of the software.
    • It does not go down to the design-structure level, so it is difficult for developers to understand how to fix the issues found during sanity testing.
    • Only limited features are tested, so issues in other functionalities are difficult to catch.
    • Sanity testing is usually unscripted, so no record is available for future reference.
    Difference between Smoke testing and Sanity testing:
    • Objective: Smoke testing verifies the stability of the entire system, while sanity testing verifies the rationality of the system.
    • Scope: Smoke testing is executed to ensure that the basic functionalities work as expected, while sanity testing verifies that new functionality or bug fixes work as expected.
    • Approach: Smoke testing is wide and shallow, while sanity testing is narrow and deep.
    • Documentation: Smoke testing is usually scripted or documented, while sanity testing is usually unscripted.
    • Performed by: Smoke testing is performed by the testers and can also be performed by the developers, while sanity testing is usually performed by the testers.
    • Analogy: Smoke testing is like a general health check-up of the software, while sanity testing is like a specialized health check-up.
    • Timing: Smoke testing is performed first, while sanity testing is performed after smoke testing.
    5.3.3 Non-Functional Testing

    Non-functional testing is a software testing technique that verifies non-functional attributes of the system, such as performance, robustness, and the absence of memory leaks. Non-functional testing is performed at all test levels.

    Objectives of Non-functional testing
    • Non-functional testing should increase the usability, efficiency, maintainability, and portability of the product.
    • It helps to reduce the production risk and cost associated with non-functional aspects of the product.
    • It optimizes the way the product is installed, set up, executed, managed, and monitored.
    • It collects and produces measurements and metrics for internal research and development.
    • It improves and enhances knowledge of the product’s behavior and the technologies in use.
    Characteristics of Non-functional testing
    • Non-functional testing should be measurable, so there is no place for subjective characterization like good, better, best, etc.
    • Exact numbers are unlikely to be known at the start of the requirements process.
    • It is important to prioritize the requirements.
    • Ensure that quality attributes are identified correctly.
    Non-functional Testing Types
    • Performance Testing
    • Load Testing
    • Failover Testing
    • Security Testing
    • Compatibility Testing
    • Usability Testing
    • Stress Testing
    • Maintainability Testing
    • Scalability Testing
    • Volume Testing
    • Disaster Recovery Testing
    • Compliance Testing
    • Portability Testing
    • Efficiency Testing
    • Reliability Testing
    • Baseline Testing
    • Endurance Testing
    • Documentation Testing
    • Recovery Testing
    • Internationalization Testing
    • Localization Testing
    Let’s discuss some of the most common non-functional testing types below:
    Performance Testing

    Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. It can serve different purposes: it can demonstrate that the system meets performance criteria, compare two systems to find which performs better, or measure which part of the system or workload causes the system to perform badly.

    In other words, performance testing is often the first non-functional test carried out. To ensure that the response time of a system is acceptable, the system is tested under a considerable load and with a production-sized database, measuring the response times of several business-critical processes.
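    A minimal sketch of measuring response times under repeated execution. The `operation` callable is a stand-in for a business-critical process; real performance tests would use dedicated tooling rather than this simplified timer.

```python
import time
import statistics

def measure_response_times(operation, runs=50):
    """Time repeated executions of an operation and summarize the samples."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()                                  # the process under test
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "mean_s": statistics.mean(samples),
        "p95_s": samples[int(runs * 0.95) - 1],      # rough 95th percentile
    }
```

    The summary can then be compared against the agreed performance criteria, e.g. "95% of requests complete within 2 seconds".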

    Load Testing

    Load testing checks whether the system can sustain the pressure, or load, of many users accessing it at the same time.

    In other words, a load test is usually conducted to understand the behavior of the application under a specific expected load. Load testing determines a system’s behavior under both normal and peak conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element causes degradation, e.g. if the number of users is increased, how much CPU and memory are consumed and what the network and bandwidth response times are.
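    A load test of this kind can be sketched with a thread pool, one worker per simulated user. The `request_fn` callable is a hypothetical request that returns success or failure.

```python
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, concurrent_users=50):
    """Fire one request per simulated user, all at roughly the same time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: request_fn(),
                                range(concurrent_users)))
    succeeded = sum(1 for ok in results if ok)
    return {"users": concurrent_users,
            "succeeded": succeeded,
            "failed": concurrent_users - succeeded}
```

    Raising `concurrent_users` step by step while watching the failure count (and server-side CPU, memory, and response times) reveals the maximum operating capacity.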

    Failover Testing

    Failover testing is a technique that validates a system’s ability to allocate extra resources and move operations to back-up systems when a server fails for one reason or another. It determines whether a system is capable of bringing in extra resources, such as additional CPUs or servers, during critical failures or when the system reaches a performance threshold.

    Failover testing is critical for the following types of applications:

    • Banking Applications
    • Financial Applications
    • Telecom Applications
    • Trading Platforms
    Security Testing

    Security testing checks whether the application or product is secure: can anyone hack the system or log in to the application without authorization? It is a process to determine that an information system protects data and maintains functionality as intended.

    Compatibility Testing

    Compatibility testing tests the application or product against its computing environment. It checks whether the application or software product is compatible with the hardware, operating system, database, and other system software.

    Key points are:

    • Test each piece of hardware with its minimum and maximum configuration.
    • Test with different browsers.
    • The test cases are the same ones that were executed during functional testing.
    • If there are too many hardware and software combinations, orthogonal array testing (OATS) techniques can be used to derive a set of test cases with maximum coverage.
    Usability Testing

    Usability testing is a type of software testing in which a small set of target end users of a software system “use” it to expose usability defects. This testing mainly focuses on the user’s ease of use of the application, flexibility in handling controls, and the system’s ability to meet its objectives. It is also called user experience testing.

    This testing is recommended during the initial design phase of the SDLC, which gives more visibility into the expectations of the users. Usability is difficult to evaluate and measure, but it can be assessed based on the parameters below:

    • The level of skill required to learn and use the software. It should maintain a balance for both novice and expert users.
    • The time required to get used to the software.
    • The measure of any increase in user productivity.
    • The assessment of a user’s attitude towards using the software.
    Stress Testing

    Stress testing involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It is a form of testing used to determine the stability of a given system. It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. The goal of such tests may be to ensure the software does not crash under conditions of insufficient computational resources (such as memory or disk space).
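    The search for a breaking point can be sketched as a loop that raises the load until the system fails. The `toy_system` below is a made-up example that runs out of capacity above a fixed load; a real stress test would drive an actual system.

```python
def find_breaking_point(system_under_load, start=10, step=10, limit=10_000):
    """Increase the load until the system fails; return the failing level."""
    load = start
    while load <= limit:
        try:
            system_under_load(load)
        except Exception:
            return load          # breaking point found
        load += step
    return None                  # no failure observed up to the limit


def toy_system(load):
    """Illustrative system that fails above 70 concurrent requests."""
    if load > 70:
        raise MemoryError("insufficient resources")
```

    Knowing the breaking point tells the team how much headroom exists above normal operating conditions and how the system fails (crash, error, or graceful degradation).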
    Maintainability Testing

    The term maintainability corresponds to the ability to update or modify the system under test. This is a very important parameter as the system is subjected to changes throughout the software life cycle.

    To make maintainability testing more effective, testers should include static analysis and reviews, as maintainability issues are hard to spot during dynamic testing but are easily caught in code walkthroughs and inspections.

    Maintainability Testing Checklist:
    • Verify the development standards, such as structured programming, standards for the database approach, recognizable nomenclature, and standards for the user interfaces.
    • Verify whether the data processing is split up into subtransactions.
    • Verify whether the input, the processing, and the output have been implemented separately.
    • Verify whether the programs have been parameterized where necessary to promote reusability.
    • Verify whether the systems are distributed.
    • Verify whether the algorithms are optimized.
    Scalability Testing

    With changing requirements and growing need, the system must adapt and work accordingly. Scalability testing ensures that the system meets the growing need when there is any change in size or volume, so that the system, process, or network continues to function well. In simple words, scalability testing checks the capability of the system, processes, and databases to match the increasing need, and ensures that the application is able to handle changes in traffic, data, and transactions.

    Scalability testing is a kind of performance testing, where the focus remains on the application’s behaviour under excessive load. The main aim of performing scalability testing is to identify the point where the application stops adapting or responding to changes, and the reasons behind it. Some of the reasons for performing scalability testing are mentioned below.

    • It tells you about your application’s behaviour with the increasing load
    • Determines the limitation of the web application in terms of users
    • Determines the end user experience under load
    • Determines the robustness of the server
    Volume Testing

    Volume testing is non-functional performance testing in which the software is subjected to a huge volume of data. It is also referred to as flood testing.

    Volume testing is done to analyze the system performance by increasing the volume of data in the database.

    With the help of Volume testing, the impact on response time and system behavior can be studied when exposed to a high volume of data.

    For example, testing a music site’s behavior when there are millions of users downloading songs.

    Benefits of Volume Testing
    • By identifying load issues, a lot of money can be saved that would otherwise be spent on application maintenance.
    • It helps in a quicker start for scalability plans.
    • It allows early identification of bottlenecks.
    • It assures that the system is capable of real-world usage.
    Disaster Recovery testing

    A disaster recovery test (DR test) is the examination of each step in a disaster recovery plan as outlined in an organization’s business continuity/disaster recovery (BCDR) planning process.

    Disaster recovery testing helps ensure that an organization can really recover data, restore business critical applications and continue operations after an interruption of services. In many organizations, however, DR testing is neglected simply because creating a plan for disaster recovery can tie up resources and the plan itself, once completed, is seen as the solution.

    If an organization doesn’t invest time and resources into testing its disaster recovery plan, however, there’s a very real chance that the plan will fail to execute as expected when it’s really needed. Communications, data recovery, and application recovery are typically a focus of all disaster recovery testing. Other areas for testing vary, depending on the organization’s recovery point objective (RPO) and recovery time objective (RTO).

    Disaster recovery tests should be conducted on a regular basis throughout the year and be incorporated into all planned maintenance and staff training. Once a test has been completed, audit logs and other data should be analyzed to determine what worked as expected, what didn’t, what changes need to be made to the DR plan’s design, and what tasks need to be scheduled for re-testing.

    Compliance Testing

    Compliance testing, also known as conformance testing, is a non-functional testing technique which is done to validate whether the developed system meets the organization’s prescribed standards.

    Compliance testing belongs to the broader category of non-functional testing, which, as the name suggests, focuses on the non-functional features of the software. These features include (but are not limited to) the following:

    • Load testing
    • Stress Testing
    • Volume Testing
    • Compliance testing
    • Operations Testing
    • Documentation Testing
    Portability Testing

    Portability testing is the process of testing the ease with which a software product can be moved from one environment to another. It is measured in terms of the maximum effort required to transfer it from one system to another.

    Portability testing is performed regularly throughout the software development life cycle in an iterative and incremental manner.

    Following are the attributes of portability testing:

    • Adaptability
    • Installability
    • Replaceability
    • Co-existence
    Portability Testing Checklists:
    • Verify whether the application fulfils the portability requirements.
    • Determine the look and feel of the application in various browser types and versions.
    • Report defects to the development team so that they can be fixed.
    • Failures during portability testing can help to identify defects that were not detected during unit and integration testing.
    Efficiency testing

    Efficiency testing measures the amount of code and testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed per unit of time (generally per hour).

    It is an internal measure for the organization: how many resources were consumed and how much of those resources were actually utilized.

    Here are some formulas to calculate Software Test Efficiency (for different factors):

    Test efficiency = (total number of defects found in unit + integration + system testing) / (total number of defects found in unit + integration + system + user acceptance testing)

    Testing efficiency = (number of defects resolved / total number of defects submitted) × 100

    Software Test Effectiveness covers three aspects:
    • How much the customer’s requirements are satisfied by the system.
    • How well the customer specifications are achieved by the system.
    • How much effort is put in developing the system.
    Reliability Testing

    Reliability testing is a type of testing performed to verify that the software is capable of failure-free operation for a specified period of time in a specified environment.

    Reliability means “yielding the same”: the word “reliable” means that something is dependable and will give the same outcome every time.

    The same is true for reliability testing. Reliability testing in software assures that the product is fault-free and is reliable for its intended purpose.

    Example: the probability that a PC in a store is up and running for eight hours without crashing is 99%; this is referred to as reliability.
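    One common way to arrive at such a figure (not given in the text, so treat it as an assumption) is the exponential reliability model, R(t) = exp(−t / MTBF), where MTBF is the mean time between failures:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Exponential reliability model: R(t) = exp(-t / MTBF)."""
    return math.exp(-t_hours / mtbf_hours)
```

    With an MTBF of about 800 hours, an 8-hour session succeeds roughly 99% of the time, matching the store-PC example above.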

    Baseline testing
    • It is a type of non-functional testing.
    • It refers to the validation of the documents and specifications on which test cases are designed; validating the requirement specification is baseline testing.
    • Generally, a baseline is defined as a line that forms the base for any construction, measurement, comparison, or calculation.
    • Baseline testing helps a great deal: a majority of the problems discovered are solved through baseline testing.
    Endurance Testing
    • Endurance testing is a type of non-functional testing, also known as soak testing.
    • It involves testing a system with a significant load over a significant period of time, to discover how the system behaves under sustained use. For example, a system may behave exactly as expected when tested for 1 hour, but when tested for 3 hours, problems such as memory leaks cause it to fail or behave erratically.
    • The goal is to ensure that throughput and/or response times after a long period of sustained activity are as good as or better than at the beginning of the test.
    • It is commonly used to check for memory leaks.
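    A soak test for memory leaks can be sketched with Python's standard `tracemalloc` module: run the operation many times and flag unexpected memory growth. The growth threshold and the deliberately leaky example are illustrative assumptions.

```python
import tracemalloc

def soak_test(operation, iterations=10_000, growth_limit_kb=256):
    """Run the operation many times and flag unexpected memory growth."""
    tracemalloc.start()
    operation()                                  # warm-up allocation
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        operation()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    growth_kb = (current - baseline) / 1024
    return growth_kb <= growth_limit_kb          # True: no leak detected


# A deliberately leaky operation keeps appending to a module-level list,
# so memory grows on every call and the soak test fails.
leaked = []
def leaky_operation():
    leaked.append(list(range(50)))
```

    A healthy operation allocates and frees the same memory on every call, so traced memory stays flat over thousands of iterations; a leak shows up as steady growth.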
    Documentation Testing

    Documentation Testing involves testing of the documented artifacts that are usually developed before or during the testing of Software.

    Documentation for Software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing, etc. This section includes the description of some commonly used documented artifacts related to Software development and testing, such as:

    • Test Plan
    • Requirements
    • Test Cases
    • Traceability Matrix
    Recovery Testing
    • In software testing, recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.
    • Recovery testing is done in order to check how fast and how well the application can recover after any type of crash or hardware failure. It is the forced failure of the software in a variety of ways to verify that recovery is properly performed. For example, while an application is receiving data from a network, unplug the connecting cable; after some time, plug the cable back in and analyze the application’s ability to continue receiving data from the point at which the network connection disappeared. Or restart the system while a browser has a definite number of sessions open and check whether the browser is able to recover all of them.
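    The network-drop example can be sketched as follows. `FlakySource` is a hypothetical stand-in for a real connection that fails once mid-stream; the client reconnects and resumes from the last received offset.

```python
class FlakySource:
    """Data source that drops the connection once, part-way through."""
    def __init__(self, data, fail_at=3):
        self.data = data
        self.fail_at = fail_at
        self.failed_once = False

    def read_from(self, offset):
        for i in range(offset, len(self.data)):
            if not self.failed_once and i == self.fail_at:
                self.failed_once = True
                raise ConnectionError("cable unplugged")
            yield self.data[i]


def receive_with_recovery(source, max_retries=3):
    """Reconnect after a drop and resume from the last received item."""
    received, offset, retries = [], 0, 0
    while retries <= max_retries:
        try:
            for item in source.read_from(offset):
                received.append(item)
                offset += 1
            return received
        except ConnectionError:
            retries += 1          # simulate plugging the cable back in
    raise RuntimeError("could not recover within the retry budget")
```

    The recovery test verifies that no data is lost or duplicated across the failure: the received sequence must match the source exactly.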
    Internationalization Testing

    Internationalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes.

    In other words, internationalization testing is the process of verifying that the application under test works uniformly across multiple regions and cultures.

    The main purpose of internationalization testing is to check whether the code can handle all international support without breaking functionality, which might otherwise cause data loss or data-integrity issues. Globalization testing verifies that the product functions properly with any locale settings.

    Internationalization Checklists:

    • Testing to check whether the product works across locale settings.
    • Verifying the installation using various settings.
    • Verifying that the product works across language and currency settings.
    Localization Testing

    Localization is a process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text. Localization testing is the software testing process of checking the localized version of a product for a particular culture or locale. The areas affected by localization testing are the UI and content.

    Installation Testing

    Most software systems have installation procedures that must be completed before the system can be used for its main purpose. Testing these procedures to achieve an installed software system that can be used is known as installation testing. These procedures may involve full or partial upgrades and install/uninstall processes.

    Installation testing may look for errors that occur in the installation process that affect the user’s perception and capability to use the installed software. There are many events that may affect the software installation and installation testing may test for proper installation whilst checking for a number of associated activities and events. Some examples include the following:

    • A user must select a variety of options.
    • Dependent files and libraries must be allocated, loaded, or located.
    • Valid hardware configurations must be present.
    • Software systems may need connectivity to connect to other software systems.

    Installation testing may also be considered an activity-based approach to testing. For example: install the software in the various ways and on the various types of systems on which it can be installed; check which files are added or changed on disk; verify that the installed software works; and observe what happens when you uninstall.

    Configuration Testing

    Configuration testing is the method of testing an application against multiple combinations of software and hardware, to find the optimal configurations under which the system works without any flaws or bugs while still meeting its functional requirements.

    Configuration testing is usually a time-consuming process: the sets of software and hardware forming the system have many variables, which results in a large number of combinations, and installing and uninstalling software and hardware can itself take a long time. That is why planning is usually an essential phase of the configuration testing process.

    Inter System Testing

    Many times, an application is hosted across several locations while all data needs to be consolidated in a central location. The process of testing the integration points for a single application hosted at different locations, and ensuring correct data flow across each location, is known as inter-system testing.

    Data Volume Testing

    Data volume testing is non-functional testing performed as part of performance testing, in which the software is subjected to a huge volume of data; it is also referred to as flood testing. As described under Volume Testing above, it is done to analyze system performance as the volume of data in the database increases, and to study the impact on response time and system behavior, for example a music site’s behavior when millions of users are downloading songs.

    Other Types of System Testing

    System testing (ST) is a black-box testing technique performed to evaluate the complete system’s compliance with its specified requirements. In system testing, the functionalities of the system are tested from an end-to-end perspective.

    System testing is usually carried out by a team that is independent of the development team in order to measure the quality of the system without bias. It includes both functional and non-functional testing.

    • Availability Testing
    • Regression Testing
    • Mutation Testing
    • Progression Testing
    Let’s discuss some of the most common of these other types of system testing:
    Availability Testing

    Availability testing measures how often a given piece of software is actually on hand and accessible for use. In other words, it measures the probability that the software will run as required, when required.

    Availability testing makes for a more efficient program, not just in terms of run time but also in terms of recovery and repair time. Hence, software that has been through availability testing is bound to be more competent and effective in operation.
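    A common availability measure (an assumption here, not stated in the text) is steady-state availability, MTBF / (MTBF + MTTR), which ties uptime to both failure frequency and repair time:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR),
    where MTBF is mean time between failures and MTTR is mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)
```

    For example, a system that runs 99 hours between failures and takes 1 hour to repair has an availability of 0.99, i.e. 99%. Improving repair (recovery) time raises availability just as much as preventing failures does.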

    Regression Testing

    Regression testing is a black-box testing technique that consists of re-executing those tests that are impacted by code changes. These tests should be executed as often as possible throughout the software development life cycle.

    Regression testing is a type of software testing that ensures that previously developed and tested software still performs the same way after it is changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. During regression testing, new software bugs or regressions may be uncovered. Sometimes a software change-impact analysis is performed to determine which areas could be affected by the proposed changes. These areas may include functional and non-functional areas of the system.

    The purpose of regression testing is to ensure that changes such as those mentioned above have not introduced new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.

    Types of Regression Tests:
    • Final Regression Tests: “Final regression testing” is performed to validate a build that hasn’t changed for a period of time. This build is then deployed or shipped to customers.
    • Regression Tests: Normal regression testing is performed to verify that the build has NOT broken any other part of the application through recent code changes made for defect fixing or enhancement.
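    The change-impact analysis mentioned above can be sketched as a set intersection between the changed modules and each test's coverage. The coverage map below is made up for illustration.

```python
def select_regression_tests(changed_modules, coverage_map):
    """Pick only the tests whose covered modules intersect the change set."""
    return sorted(test
                  for test, modules in coverage_map.items()
                  if modules & changed_modules)

# Hypothetical map from test name to the modules that test exercises.
coverage = {
    "test_login":    {"auth", "ui"},
    "test_checkout": {"cart", "payments"},
    "test_search":   {"search"},
}
```

    A change touching only the `payments` module then selects just `test_checkout`, keeping the regression run proportional to the size of the change.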
    Mutation Testing

    Mutation testing (or mutation analysis or program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant, and a test detects and rejects a mutant when the mutant's behavior differs from that of the original version. This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill, and new tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero).

    The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. Mutation testing is a form of white-box testing.
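A minimal sketch of the idea, with a hand-made mutant standing in for what a mutation tool would generate automatically (the function and suites are illustrative):

```python
def is_adult(age: int) -> bool:
    return age >= 18


def is_adult_mutant(age: int) -> bool:
    return age > 18  # mutant: >= changed to >


def weak_suite(fn) -> bool:
    # Only checks values far from the boundary; the mutant survives.
    return fn(30) is True and fn(5) is False


def strong_suite(fn) -> bool:
    # Adds the boundary case 18, which kills the mutant.
    return fn(30) is True and fn(5) is False and fn(18) is True


print(weak_suite(is_adult_mutant))    # True  -> mutant survives the weak suite
print(strong_suite(is_adult_mutant))  # False -> mutant is killed
```

A surviving mutant points at exactly the kind of weakness in the test data that mutation testing is meant to expose.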

    Progression Testing

    Progression testing means testing the application with existing (old) test data. Progression tests used in the current release roll into regression tests for future releases. The purpose of this task is to run the system with existing test cases that were retained from the system tests.

    Progressive testing, also known as incremental testing, is used to test modules one after the other. When an application with a hierarchy, such as parent-child modules, is being tested, the related modules need to be tested first.

    This progressive approach testing method has three approaches:

    • Top-down Approach
    • Bottom-up Approach
    • Hybrid Approach
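In the top-down approach, for example, the parent module is tested first while the not-yet-integrated child is replaced by a stub returning canned data. The sketch below illustrates this; the module names are hypothetical, not from the text.

```python
def child_module_stub(order_id: int) -> dict:
    # Stub standing in for the real (not yet integrated) inventory lookup.
    return {"order_id": order_id, "in_stock": True}


def parent_module(order_id: int, lookup=child_module_stub) -> str:
    # Parent logic under test: decide shipping based on the child's answer.
    item = lookup(order_id)
    return "ship" if item["in_stock"] else "backorder"


# Test the parent in isolation; the real child module is integrated later.
assert parent_module(42) == "ship"
```

The bottom-up approach inverts this: the child is tested first and a throwaway driver plays the role of the parent; the hybrid approach combines both.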
    5.4 Acceptance Testing

    Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.

    Acceptance testing is the most important phase of testing, as it decides whether the client approves the application/software or not. It may involve the functionality, usability, performance, and UI of the application. It is also known as user acceptance testing (UAT), operational acceptance testing (OAT), and end-user testing.

    It is one of the final stages of the software’s testing cycle and often occurs before a client or customer accepts the new application. Acceptance tests are black-box system tests. Users of the system perform tests in line with what would occur in real-time scenarios and verify whether or not the software/application meets all specifications.

    There are various forms of acceptance testing:

    • User acceptance Testing
    • Business acceptance Testing
    • Alpha Testing
    • Beta Testing
    Acceptance Testing – In SDLC

    The following diagram shows where acceptance testing fits in the software development life cycle.

    The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones.
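That execute-and-compare loop can be sketched as a simple script. The `login` function here is a hypothetical system under test; each acceptance case pairs test data with an expected result.

```python
def login(username: str, password: str) -> str:
    # Hypothetical system under test.
    users = {"alice": "s3cret"}
    return "welcome" if users.get(username) == password else "access denied"


# Acceptance test script: (test data, expected result) pairs.
acceptance_cases = [
    ({"username": "alice", "password": "s3cret"}, "welcome"),
    ({"username": "alice", "password": "wrong"}, "access denied"),
    ({"username": "mallory", "password": "x"}, "access denied"),
]

for data, expected in acceptance_cases:
    actual = login(**data)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{data['username']}: {status}")
```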

    There are two important types of this testing.

    • Alpha Test
    • Beta Test
    5.4.1 Alpha Testing

    Alpha testing is a type of acceptance testing performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users, using black-box and white-box techniques, and the aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment, and the testers are usually internal employees of the organization. Put simply, this kind of testing is called alpha because it is done early on, near the end of the development of the software and before beta testing.

    Advantages of Alpha Testing:
    • Helps simulate real-time user behavior and environment.
    • Enables early detection of errors with respect to design and functionality.
    • Provides better insight into the software's reliability and robustness at an early stage. Many serious errors can be detected quite easily during the alpha test, because minor design elements are yet to be integrated.
    • Acts as an effective software testing method that helps ensure users receive a high-quality product in the form of complete functionality and stability.
    • The most common requirement is that the software provided to the user works properly for the purpose for which it was created. Conducting alpha tests reveals whether the software has the features necessary to pass the strict quality standards set by the customer.
    5.4.2 Beta Testing

    Beta Testing of a product is performed by “real users” of the software application in a “real environment” and can be considered as a form of external User Acceptance Testing.

    Beta version of the software is released to a limited number of end-users of the product to obtain feedback on the product quality. Beta testing reduces product failure risks and provides increased quality of the product through customer validation.

    It is the final test before shipping a product to customers. Direct feedback from customers is a major advantage of beta testing, which helps test the product in a real-world environment.

    In short, beta testing can be defined as testing carried out by real users in a real environment.

    Advantages of Beta Testing
    • Opportunity to get your application into the hands of users before releasing it to the general public. Beta testers can discover issues with your application that you may not have noticed, such as confusing application flow or even crashes.
    • Reduces product failure risk via customer validation.
    • Improves product quality via customer feedback.
    • Cost-effective compared to similar data-gathering methods.
    • Creates goodwill with customers and increases customer satisfaction.
    Summary

    The following points summarize the topic above:

    • There are four main testing levels: Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
    • Unit testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases.
    • Integration testing means testing two or more integrated systems in order to ensure that the system works properly.
    • System testing is the first level of testing in which the application is tested as a whole.
    • Smoke Tests can be manual or automated.
    • Acceptance testing is the most important phase of testing as this decides whether the client approves the application/software or not.
