On one hundred percent code coverage - how to approach coverage and avoid misusing it
Developers' mindset toward testing has changed over the years as the software industry has evolved. Even though testing is not yet a universal practice, it is increasingly integrated into the application development cycle. As such, code coverage has become a popular subject among developers, and discussions have arisen in the attempt to reach a consensus on it.
This post aims to give this discussion a push and share what I think about code coverage, what I see teams doing, and what I have found to be effective or not.
"Test State Coverage, Not Code Coverage" (The Pragmatic Programmer, 20th Anniversary Edition, p. 278)
Test driven development
Test-driven development (TDD) has been adopted by developers to achieve high-quality software, to keep it evolving over time, and to remove the fear of change. However, TDD as described in the literature, a three-stage flow (red-green-blue), is not what I see in the projects I have worked on.
Most of those projects were using ITL (iterative test-last) or not testing at all; testing was delegated to a QA (quality assurance) professional. In this scenario, the team saw no value in keeping the tests up to date or even in writing them. That is a whole discussion I will not get into here, though it is likely what leads to misleading metrics. Management often tries to force developers to reach 100% coverage just for the sake of it, or because they see it as a lever to enforce some behavior on developers. James Carr's catalog of TDD anti-patterns names this one "The Liar", which Dave Farley uses as a baseline to discuss the subject in one of his videos; he also mentions coverage goals being misunderstood in his video about behavior-driven development.
TDD is a safety net for developers to keep improving the code, a way to communicate intention, and also a culture to follow. As Dave Farley says in his video: practice TDD and avoid the Liar trap.
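The three-stage flow mentioned above can be sketched in a few lines. This is a minimal, hypothetical example (the `slugify` function and its test are mine, not from any of the cited sources): first the failing test states the intent, then the simplest implementation makes it pass, and the test remains as the safety net for later refactoring.

```python
# A minimal sketch of the TDD loop (hypothetical example):
# 1. Red: write a failing test that states the intent.
# 2. Green: write the simplest code that makes it pass.
# 3. Refactor: clean up, with the test as a safety net.

# Step 1 (red): this test fails until `slugify` exists and behaves as intended.
def test_slugify_lowercases_and_joins_with_dashes():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): the simplest implementation that passes the test.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with dashes."""
    return "-".join(title.lower().split())

# Step 3 (refactor) would now happen under the protection of the test.
test_slugify_lowercases_and_joins_with_dashes()
```

The point is that the test exists before the code, so it documents what the code was built to support.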
Quality gates

Quality gates are used to enforce some minimum rules during the software development life cycle. Among different rules we can list:
- Code linting
- Test suite
- Security checks
- Performance budget
Many would argue that code coverage has its place here, and I agree. We could use code coverage as a gate to keep code out of production: if the code base has less than X percent coverage, the release process fails. Besides acting as a quality gate, though, coverage should be an indication of the test suite's health.
The team should trust the test suite, and coverage should reflect the health of that suite. In many cases, though, that is not what happens: the common approach is to target X percent of coverage no matter what.
Avoiding wrong metrics
My experience tells me that for many developers tests are a kind of obligation. They do not write tests because they want to be proud of the work they do, or because they want to give the next developer (the one who will maintain the code in the future) a hint about what the code was built to support or not.
As such, managers try to enforce the idea that pushing developers to reach X percent coverage will increase the quality of the code base. Emily Bache recorded a three-part video series going through the famous Gilded Rose kata, whose goal is to refactor the given code. Around minute 15:59 of the first video she depicts an issue with the tests she had: even though she changed a critical part of the production code, the tests kept passing. The code had one hundred percent coverage, yet it was not giving the desired feedback.
Of course it was a kata, and she beautifully depicted this problem as she went through the code. Still, I see developers taking pride in having X percent coverage; a quick scan of developers' LinkedIn profiles shows that code coverage is something developers are proud of.
This video series alone shows how useless a target of X percent coverage is as a metric. Code coverage should be a side effect of test suite quality: a suite that software developers can rely on, understanding that tests are a safety net for continuous improvement of the code.
Following that idea that code coverage is the wrong metric to chase, Antinyan et al. indicate that chasing coverage provides no tangible help in producing defect-free software, even though, in their related-work section, one paper out of eight describes some relationship between those factors. The following quote reproduces the conclusions given by the paper:
Decision coverage, statement coverage, and function coverage are popular measures that purport to indicate test sufficiency. However, 100% coverage does not entail 100% tested code.
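The quote above can be made concrete with a small hypothetical sketch (the function and test are mine, not from the paper): the test executes every line of the function, so coverage reports 100%, yet it asserts nothing about the result, so the code is effectively untested.

```python
# A hypothetical sketch of "100% covered, 0% tested": every line of
# `apply_discount` runs under the test, yet the test would still pass
# if the discount logic were wrong, because it asserts nothing.

def apply_discount(price: float, percent: float) -> float:
    """Intended to subtract `percent` percent from `price`."""
    discount = price * percent / 100
    return price - discount

def test_apply_discount_covers_but_verifies_nothing():
    # Executes every line of `apply_discount` (100% coverage)...
    apply_discount(100.0, 10)
    # ...but a bug such as `return price + discount` would go
    # unnoticed, while the coverage report stays at 100%.

test_apply_discount_covers_but_verifies_nothing()
print(apply_discount(100.0, 10))  # 90.0
```

A coverage tool counts executed lines, not verified behavior, which is exactly why 100% coverage does not entail 100% tested code.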
On the other hand, code coverage can be used as a guide when it needs to be; Maurício Aniche elaborates on this idea and uses Google's guidance on code coverage as a reference.
- K. Beck, TDD by example. Addison-Wesley Professional, 2000.
- M. Fowler, “TestDrivenDevelopment,” 2005 [Online]. Available at: https://martinfowler.com/bliki/TestDrivenDevelopment.html. [Accessed: 31-May-2021]
- D. Farley, “When Test Driven Development Goes Wrong,” 2021 [Online]. Available at: https://youtu.be/UWtEVKVPBQ0?t=243. [Accessed: 05-Jun-2021]
- D. Farley, “BDD Explained (Behaviour Driven Development),” 2021 [Online]. Available at: https://www.youtube.com/watch?v=zYj70EsD7uI. [Accessed: 27-May-2020]
- E. Bache, “Introducing the Gilded Rose kata and writing test cases using Approval Tests,” 2018 [Online]. Available at: https://www.youtube.com/playlist?list=PLuvRKxeqrv4K-rn0zxHPNiXOWBkP9ZZIH. [Accessed: 31-May-2021]
- V. Antinyan, J. Derehag, A. Sandberg, and M. Staron, “Mythical Unit Test Coverage,” 2018 [Online]. Available at: https://www.researchgate.net/profile/Vard-Antinyan/publication/324959836_Mythical_Unit_Test_Coverage/links/5e5934a692851cefa1cd5869/Mythical-Unit-Test-Coverage.pdf. [Accessed: 05-Nov-2021]