To play devil's advocate - are there any (useful) measures of software quality? Even this place is mostly programmers and we can't even agree whether we should be writing unit tests or not.
Sort of. There are accurate measures with verifiable predictive power. But *useful* depends on cost/benefit, which in turn depends on your ability to implement them and on market forces.
There's a company that looked at reducing critical defects from a sort of actuarial perspective. They have a few decades of cross-industry data. I've used their model, and it works. If you don't need a numerical result, you can just read the white paper about what's most important [1].
So to partially answer your question: unit testing reduces defects, but reducing defects might not be worth the costs to you.
And defects might not be the only thing that matters. There are other measures of goodness, like maintainability, which complicates the answer. You'd have to collect your own data for that.
I'd say for microservices and large distributed systems, you do need a pyramid of testing, with most coverage at the unit level. The system is just too large and changes continuously as all the different versions of services are released.
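As a minimal sketch of what "covered at the unit level" means in practice (the function under test and all names here are hypothetical, just for illustration):

```python
import unittest

# Hypothetical function under test: parses a "key=value" config line.
def parse_pair(line: str) -> tuple[str, str]:
    key, sep, value = line.partition("=")
    if not sep or not key.strip():
        raise ValueError(f"malformed pair: {line!r}")
    return key.strip(), value.strip()

class ParsePairTest(unittest.TestCase):
    def test_basic_pair(self):
        # Whitespace around the separator is stripped.
        self.assertEqual(parse_pair("host = localhost"), ("host", "localhost"))

    def test_missing_separator_raises(self):
        # A line with no "=" is rejected rather than silently mis-parsed.
        with self.assertRaises(ValueError):
            parse_pair("no separator here")

if __name__ == "__main__":
    unittest.main()
```

Tests like these run in milliseconds with no deployed services, which is why they can form the wide base of the pyramid while slower integration and end-to-end tests sit above them.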