Non-regression testing (NRT) is an approach to software testing. Its purpose is to verify whether, after a software application has been introduced or updated, the change has had the intended effect. It contrasts with regression testing, which aims to show that the change has not had an unintended effect on the rest of the software.
The software development process can be divided into several steps, each culminating in a new software version that contains a number of new features. This process continues until the final release, when all the content that satisfies the customer's requirements has been included in the software. As the complexity of the software architecture grows, so does the probability of introducing bugs. Bugs can occur after the software code has been modified for two main reasons:
- a new procedure is in conflict with a pre-existing procedure;
- a pre-existing procedure has been modified.
Software bugs can cause unexpected delays to a project. Because of time-to-market constraints, the validation phase of software functionality must be well organized and efficient. In this context, non-regression testing provides a systematic procedure for the fast and efficient validation of the software and the discovery of bugs within its architecture.
How to perform a non-regression test
A non-regression test can be performed according to the following steps:
- Define a benchmark software release;
- Define a set of routines able to stimulate as many software functionalities as possible;
- Launch these routines on both the benchmark and the new test release, and acquire data that represents the behaviour of each;
- Analyse the data with a post-processing tool able to provide statistical results;
- Report the outcome.
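The comparison at the heart of these steps can be sketched in Python. This is a minimal illustration, assuming the data acquired from each routine on both releases has already been collected into simple per-routine records; the routine names and values below are hypothetical:

```python
def non_regression_report(benchmark_results, candidate_results):
    """Compare per-routine outputs of a benchmark release against a candidate.

    Each argument maps a routine name to the data acquired when running it.
    Routines whose outputs differ are flagged for further analysis.
    """
    report = {}
    for routine, expected in benchmark_results.items():
        actual = candidate_results.get(routine)
        report[routine] = "match" if actual == expected else "difference"
    return report

# Outputs recorded by launching the same routines on both releases
benchmark = {"startup": [0, 10, 20], "shutdown": [20, 10, 0]}
candidate = {"startup": [0, 10, 20], "shutdown": [20, 12, 0]}
print(non_regression_report(benchmark, candidate))
```

Any routine flagged as a "difference" is then examined to decide whether the change was intended or constitutes a regression.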
Exploratory testing follows similar steps but differs from NRT in its analysis and conclusions. NRT aims to check whether software modifications result in undesired behaviour: the expected behaviour of the application is known in advance, which makes it possible to identify any regression (bug). Exploratory testing, on the other hand, seeks to find out how the software actually works, combining simultaneous testing and learning and encouraging testers to create new test cases.
Regression and non-regression testing
The intent of regression testing is to ensure that, in the process of fixing a defect, no existing functionality has been broken. Non-regression testing is performed to verify that an intentional change has had the desired effect.
When a new software version is released without any new features relative to the previous version, i.e. the differences between the two versions are limited to bug fixes or optimizations, both releases are expected to provide the same functionality. In this case, the tests applied to the two versions are not expected to reveal different behaviour, but only to confirm that existing bugs have been fixed and no new bugs have been introduced. This methodology characterizes regression testing.
On the other hand, when the new release presents new functionalities or improvements that cause the software to behave differently, the tests performed on the previous and the new version can result in:
- desired differences, related to an expected new behaviour; and
- undesired differences, which indicate a software regression generally caused by a side-effect bug.
In this case non-regression testing is appropriate.
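This classification of differences can be illustrated with a small sketch; the difference labels and the helper function below are hypothetical, used only to show how observed differences split into the two categories:

```python
def classify_differences(differences, expected_changes):
    """Split observed behavioural differences into desired and undesired ones.

    A difference listed among the expected changes reflects intended new
    behaviour; anything else indicates a possible software regression.
    """
    expected = set(expected_changes)
    desired = [d for d in differences if d in expected]
    undesired = [d for d in differences if d not in expected]
    return desired, undesired

# Differences observed between the two releases, and the changes we intended
observed = ["new_menu_layout", "slower_startup"]
desired, undesired = classify_differences(observed, ["new_menu_layout"])
print(desired, undesired)  # undesired differences point to a regression
```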
Who performs the non-regression testing
Once the customer has defined all the requirements, the supplier introduces the content, release by release, until the final release is delivered. In this context, NRT can be performed by both the customer and the supplier.
It can be performed by the supplier as a beta testing service, to guarantee a higher-quality product with a very low probability of bugs. The client is provided with a simulation environment that makes it easy to execute routines and acquire data. In the case of a regression, the supplier, having the relevant know-how, can quickly solve the problem and avoid releasing a malfunctioning software version to the customer.
On the other hand, NRT can be performed by the customer as a form of acceptance testing, in order to protect the final product from defects and, if necessary, hold the supplier accountable for any mismatch with the requirements. Moreover, the customer, having limited knowledge of the software's internal structure, can perform the NRT as black-box testing and, on finding a regression, reject the new software release.
How to define a good non-regression testing strategy
Automated regression testing is not always possible, nor is it always economically viable in terms of maintenance costs. In the case of manual testing, the challenge is to identify the relevant tests that minimize the testing effort while maximizing the coverage of regression risks. To avoid missing regressions, the test strategy should be based on facts. To establish these facts, analysing the application and comparing each version can help identify all the changes and risks. The difficulty is to obtain a view of these risks that is usable for functional testing: beyond the list of modified files, it is more important to assess the impact on existing functionality.
To improve this analysis, one solution is to take the "footprint" of each test on the application, i.e. what each test actually executes in the application. This footprint is the link between code modules and functional test scenarios. Once this link is established, it is possible to know exactly what a particular test covers. Thus, when a new version has to be tested, it is possible to identify which tests cover all the regression risks arising from the changes to the application, and defining an efficient strategy for regression testing becomes possible. With this method, test automation is not the only solution, because the number of tests to run is reduced to the right cases.
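As a sketch of this footprint-based selection, suppose each functional test has been linked to the code modules it executes; given the modules changed in a new version, only the tests whose footprint touches a changed module need to be run. The test and module names below are hypothetical:

```python
def select_regression_tests(footprints, changed_modules):
    """Pick the functional tests whose footprint touches any modified module.

    footprints maps each test name to the list of code modules it executes;
    changed_modules lists the modules modified in the new version.
    """
    changed = set(changed_modules)
    return sorted(test for test, modules in footprints.items()
                  if changed & set(modules))

footprints = {
    "test_login":   ["auth", "session"],
    "test_billing": ["billing", "auth"],
    "test_search":  ["search"],
}
# Only the tests that exercise the "auth" module are selected
print(select_regression_tests(footprints, ["auth"]))
```

The tests outside the selection cover only unchanged modules and can safely be skipped for this release.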
NRT automotive applications
Over the years, engine control unit (ECU) software requirements have become more complex and difficult to meet, owing to increasingly stringent emission standards and ambitious targets for fuel consumption and power output. This, in turn, increases the demands on and complexity of in-vehicle driving tests and diagnostic functionality. As a consequence, during the development of engine control systems, each new software release results from a sequence of many others, each one introducing new functions to satisfy these demands. In this context, non-regression testing is useful to verify that the performance and robustness of each software release does not decrease relative to the previous one, i.e. that the release does not introduce a regression.
NRT is applied during the testing phase of each software release, at the final stage of integration testing, right before system testing and after the module testing (or unit testing) phase. In the module testing phase, single software modules are evaluated individually, which allows the identification of elementary errors such as overflow, underflow, and round-off, as well as discrepancies between algorithm model simulation results and the signals coming from the engine management system (EMS). The integration testing phase, performed afterwards, verifies whether the tested module is correctly integrated into the overall software system. Finally, functional testing (also called validation testing) is applied to validate the algorithms against the functional requirements. This stage is usually performed after the calibration phase and constitutes an overall system test, concluding the testing of the new software and thus allowing its release.
In automotive applications, non-regression testing is performed as follows:
- Selection of test manoeuvers and definition of engine parameters to be monitored;
- Execution of the selected manoeuvers on benchmark software and the software under test;
- Post-processing and analysis of data acquired during these tests.
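The post-processing step can be sketched as a simple comparison of the monitored parameter traces acquired on the two releases; the signal values and the tolerance below are illustrative, not taken from a real calibration:

```python
def compare_traces(benchmark_trace, test_trace, tolerance):
    """Compare a monitored engine parameter across two software releases.

    Both traces are the same parameter sampled during the same manoeuver;
    a deviation beyond the tolerance flags a possible regression.
    """
    deviations = [abs(a - b) for a, b in zip(benchmark_trace, test_trace)]
    return {
        "max_deviation": max(deviations),
        "mean_deviation": sum(deviations) / len(deviations),
        "regression": max(deviations) > tolerance,
    }

# Engine speed (rpm) sampled during the same manoeuver on both releases
benchmark_rpm = [800, 1500, 2200, 3000, 2400]
test_rpm      = [800, 1510, 2210, 3005, 2400]
print(compare_traces(benchmark_rpm, test_rpm, tolerance=50))
```

In practice such statistics are computed for every monitored parameter and every manoeuver, and the flagged cases are handed to the analyst.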
The selected test manoeuvers must be able to stimulate as many of the algorithms implemented in the software as possible. Cold start, engine speed (rpm) overshoot, and the ECE cycle (a standard manoeuver used to calibrate on-board diagnostics) are relevant examples. In addition, the engine parameters selected for monitoring must represent the global operating state of the engine throughout the executed manoeuvers, for example accelerator pedal deflection, engine speed, vehicle speed, engine temperature, and throttle body opening percentage. It is also necessary to monitor the main variables of the air and torque estimation control chains. All these diagnostic variables must be kept under control during the execution of the manoeuvers.
The tests are performed in simulation environments such as hardware-in-the-loop (HIL) simulators or Micro HIL simulators (feed-forward systems that work as downsized HIL simulators), which support the design and execution of complex manoeuvers that are usually very difficult to perform on a real engine or car (mainly because of time, cost, and equipment restrictions).
Afterwards, a post-processing tool is required to process the acquired data, offering graphical analysis and statistics, generally addressed to skilled personnel able to identify possible regressions in the software. Such a tool can also be equipped with an automatic report generator, which gathers into a single document all the analysis results and conclusions from the comparison of the two software releases during the NRT.
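A report generator of this kind might, in its simplest form, collect one verdict per monitored parameter into a single document; the release names and verdicts below are hypothetical:

```python
def generate_report(benchmark_release, test_release, analyses):
    """Gather the per-parameter results of an NRT run into one text report.

    analyses maps each monitored parameter to the verdict produced by
    the post-processing analysis of the two releases.
    """
    lines = [f"Non-regression test report: {benchmark_release} vs {test_release}", ""]
    for parameter, verdict in sorted(analyses.items()):
        lines.append(f"- {parameter}: {verdict}")
    return "\n".join(lines)

report = generate_report("SW_1.0", "SW_1.1",
                         {"engine_speed": "no regression",
                          "throttle_opening": "deviation above tolerance"})
print(report)
```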