As a project manager, I would like to convince upper management to invest more time in writing unit tests for existing methods. My understanding is that unit testing is the foundation/prerequisite for any meaningful functional/integration testing, and that investing in functional/integration testing without reasonable unit test coverage would not be wise.

Basically, our situation is that after several years of development we had nearly 0% unit test coverage (measured as the percentage of code covered by tests).

Two months after requiring developers to write unit tests, we are at roughly 2% (a unit test is written only when a new method is introduced or an existing method is changed). At that pace of roughly 1% of the code per month, even a modest coverage target would take years, so it would take ages to cover any reasonable percentage of methods.

Is there any publication/graph/research/case study I can show them that demonstrates the relationship between the probability of introducing a hidden regression bug (i.e., a bug detected by an end user rather than by a unit test) and unit test coverage?

I know that unit test coverage is a poor metric; however, it is currently the one upper management understands best, and I am in a situation where there is almost no documentation of the use cases/required functionality against which I could match integration/functional tests.
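
If no published study matches our context, I imagine a fallback would be to chart our own data: per-release unit test coverage against the number of regression bugs reported by end users. The sketch below is purely illustrative (my assumption, not from any study), and the release figures in it are hypothetical placeholders; it just shows the kind of correlation one could compute and present.

```python
# Rough illustration (hypothetical data): correlate per-release unit test
# coverage with the number of regression bugs that escaped to end users.

from math import sqrt

# (coverage %, escaped regression bugs in that release) -- placeholder values
releases = [
    (2, 14),
    (8, 11),
    (15, 9),
    (25, 7),
    (40, 4),
]

def pearson(pairs):
    """Pearson correlation coefficient between the two columns of (x, y) pairs."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"coverage vs. escaped regression bugs: r = {pearson(releases):.2f}")
```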

Thank you for any hints/advice.



Source link https://sqa.stackexchange.com/questions/34415/mathematical-statistical-relationship-between-tests-and-
