The code has changed… huh?
Developers and testers don’t speak the same language, and this severely reduces test teams’ efficiency!
Developers talk about files, classes, or code; testers talk about test cases, business needs, and functional scenarios. In other words, communication isn’t working.
As discussed in our earlier post on improving test efficiency, the consequence of this misunderstanding is that test teams are left blind.
The content of each delivered version is a mystery, and so are its changes and their impacts. There is no way to direct the test strategy toward the actual risks, or only with extremely low confidence.
This communication problem is perfectly illustrated by developers’ attempts to make changes clearer: testers still don’t understand them. Why?
Release notes are meant to highlight the modifications, but their content is often too technical for testers, sometimes too general, or purely declarative. Their usefulness is therefore more than questionable…
The question is: what are the impacts of these misunderstandings?
- Regression risks that are hard to assess
- Difficulty selecting the appropriate test scenarios
- No way to verify that regression risks are properly covered
In short, misunderstanding between developers and testers is a major source of test process inefficiency, and it generates regressions that only surface in production.
What’s up, doc?
Information with every delivery is a good start…
During the validation stage of a version, regression risk increases with every new delivery (regressions are estimated to account for 8% of bugs found in production).
Executing 100% of test cases for each delivery quickly proves unrealistic. The shorter the available time, the narrower the scenario selection, and the more regressions slip through.
To break this vicious cycle, testers need a clear view of the changes and their impacts for every delivery.
Trustable information is much better!
For test teams to be able to rely on this information, it must be trustworthy. Our discussions with clients show that when this information is written up manually by development teams, trust is low.
A dedicated tool seems necessary to compare the delivered version with the previous one across all its components (code, configuration, third-party libraries, etc.). This would give testers the high level of confidence they are looking for.
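As a minimal sketch of what such a comparison could look like, the snippet below hashes every file in two delivered versions and classifies the differences. This is an illustrative assumption, not a description of any specific tool; the directory layout and the file-level granularity are hypothetical.

```python
import hashlib
from pathlib import Path


def snapshot(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to a SHA-256 digest of its content."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*")
        if p.is_file()
    }


def diff_versions(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Classify files as added, removed, or changed between two snapshots."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(f for f in old.keys() & new.keys() if old[f] != new[f]),
    }
```

Running `diff_versions(snapshot("v1/"), snapshot("v2/"))` would list exactly which code, configuration, or library files differ between the two deliveries, giving testers a verifiable change list instead of a hand-written one.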
I have a dream… to get relevant information!
Having information is essential, but this information must be usable by test teams!
Which functional requirements were addressed, which User Stories were completed… This data is essential for functional validation, yet it is of little use to the test manager when defining the regression test strategy.
Moreover, it is rarely comprehensive: additional changes are often made without any corresponding functional requirement or anomaly record.
Exploratory testing can try to catch the related risks, but often with very low efficiency.
To improve this situation, our clients helped us define two levels of useful information:
- A business view, to understand the changes and their impacts. Reporting changes in terms of the impacted sub-sets of the software makes it possible to target test efforts at the highest regression risks.
- The impact on scenarios, to identify which scenarios to replay. Recording the footprint of each test (the code executed during a test case) pinpoints the test cases impacted by a change.
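The second level above can be sketched in a few lines. In this hypothetical example, `footprints` maps each test case to the set of source files it executes (as a coverage tool could record), and `changed` is the set of files modified in the new delivery; all names are illustrative assumptions.

```python
def impacted_tests(footprints: dict[str, set[str]], changed: set[str]) -> list[str]:
    """Return the test cases whose footprint overlaps the changed files."""
    return sorted(t for t, files in footprints.items() if files & changed)


# Hypothetical footprints recorded during a previous full test run.
footprints = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_search": {"search.py"},
}

# If the new delivery only modifies payment.py, only the overlapping
# scenario needs to be replayed.
impacted_tests(footprints, {"payment.py"})  # → ["test_checkout"]
```

The same intersection logic works at any granularity (files, classes, or methods); finer footprints select fewer tests but cost more to record.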
What if you could “Test right”?
If this communication gap between dev and test teams could fade away, what would testing become?
This is what we call “Right testing”: an open door to a new strategy that aims to solve this equation.
To more efficient tests and beyond!
This series on improving test efficiency will continue in two upcoming articles:
- How to exploit test coverage to master and anticipate risks?
- How to define a more efficient non-regression test strategy?
We will also address, in an upcoming article, the reverse communication problem: when developers don’t receive the right level of information from testers.
How, then, can an anomaly be analyzed, diagnosed, and corrected? This classic problem fills bug trackers with “cannot reproduce” reports, and it also explains why the number of deliveries needed to fix an issue keeps increasing.