13

I saw an older post where someone asked whether it is possible to get 100% code coverage. The responses were that it's fairly easy to get to around 80%, but covering the other 20% means getting into edge cases, and sometimes methods simply cannot be tested.

I've been in orgs with 100% coverage and in orgs where coverage was in the high 80s; it didn't seem to make a major difference in how the org ran or acted.

Is getting 100% coverage just pride? Is it worth the time to close the loop on the edge cases and inch up to 100%?

What is the best practice in how a developer should spend their time?

Dan Wooding
  • You may find this helpful too: http://salesforce.stackexchange.com/questions/48067/is-it-possible-to-get-100-apex-code-coverage-all-the-time. Especially, the answer from sfdcfox. – Andy Hitchings Jun 14 '16 at 21:36
  • I realize the question is about coverage, but in my book, a robust set of asserts that verifies your code does what it is supposed to do is worth a lot more of your invested time than coverage alone. – cropredy Jun 15 '16 at 02:04

4 Answers

17

Yes, you should strive for 100%.

Actually, 100% line coverage is not a high enough bar. You should strive for 100% branch coverage. Do some reading on Cyclomatic Complexity to better understand why.
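To illustrate the difference, here is a hypothetical Apex sketch (the `Discount` class and its rates are invented for this example). A single test can execute every line of the method while exercising only one of its two branches:

```apex
public class Discount
{
    public static Decimal rate(Boolean isVip)
    {
        Decimal r = 0.05;
        if (isVip) r = 0.20; // calling rate(true) alone executes every line...
        return r;            // ...but never takes the implicit "else" path
    }
}

@isTest
private class DiscountTest
{
    @isTest
    static void coversBothBranches()
    {
        System.assertEquals(0.20, Discount.rate(true));
        // Without this second call you still have 100% line coverage,
        // but only 50% branch coverage:
        System.assertEquals(0.05, Discount.rate(false));
    }
}
```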

Sometimes this type of coverage requires you to be more clever in how you design your code. One key strategy to achieve this goal is Separation Of Concerns.


Here is a simple example:

public static void complexOperation(List<SObject> records)
{
    for (SObject record : records)
    {
        if (someCondition)
        {
            // data transformation
        }
        else
        {
            // other transformation
        }
    }
    try
    {
        update records;
    }
    catch (Exception pokemon) // gotta catch em all!
    {
        // complex error handling
    }
}

Hmm, testing that is going to be quite difficult! SOC to the rescue.

public static List<SObject> filter1(List<SObject> records)
{
    List<SObject> filtered = new List<SObject>();
    for (SObject record : records)
        if (someCondition) filtered.add(record);
    return filtered;
}
public static List<SObject> filter2(List<SObject> records)
{
    List<SObject> filtered = new List<SObject>();
    for (SObject record : records)
        if (otherCondition) filtered.add(record);
    return filtered;
}
public static List<SObject> dataTransformation1(List<SObject> records)
{
    // data transformation
    return records;
}
public static List<SObject> dataTransformation2(List<SObject> records)
{
    // other transformation
    return records;
}
public static void safeUpdate(List<SObject> records)
{
    try
    {
        update records;
    }
    catch (DmlException dmx) // specificity yay!
    {
        // error handling
    }
}
public static void complexOperation(List<SObject> records)
{
    List<SObject> toUpdate = new List<SObject>();
    toUpdate.addAll(dataTransformation1(filter1(records)));
    toUpdate.addAll(dataTransformation2(filter2(records)));
    safeUpdate(toUpdate);
}

Every specific chunk of functionality above is much easier to test directly. Testing the composition of these can then be somewhat more cursory.
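For example, tests for two of the pieces might look like this (hypothetical sketch: the class name `ComplexOperationService` and the assumption that `someCondition` checks `Industry == 'Tech'` are invented for illustration):

```apex
@isTest
private class ComplexOperationTest
{
    @isTest
    static void filter1KeepsOnlyMatchingRecords()
    {
        Account keep = new Account(Name = 'A', Industry = 'Tech');
        Account drop = new Account(Name = 'B', Industry = 'Retail');
        List<SObject> result =
            ComplexOperationService.filter1(new List<SObject>{ keep, drop });
        System.assertEquals(1, result.size());
        System.assertEquals('A', result[0].get('Name'));
    }

    @isTest
    static void safeUpdateHandlesDmlException()
    {
        // An Account with no Id cannot be updated, so the update throws a
        // DmlException, driving the catch branch in safeUpdate.
        ComplexOperationService.safeUpdate(
            new List<SObject>{ new Account(Name = 'C') });
        System.assert(true, 'the exception was caught, not rethrown');
    }
}
```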

Adrian Larson
  • I think reality is messier - see e.g. Why Most Unit Testing is Waste. – Keith C Jun 14 '16 at 21:53
  • Most? That seems like quite a cynical view. Will read, but I won't be able to get to it for a bit. I admit I've strayed from a belief in TDD, but I have definitely had good unit tests protect me from fluky deployments, and insufficient ones really screw me. – Adrian Larson Jun 14 '16 at 22:00
  • It's cynical to the max, however the author had an interesting perspective on tests. – Dan Wooding Jun 22 '16 at 03:37
  • @KeithC Interesting read. I'm still going through it, but a big issue that keeps popping up in my mind is stated quite clearly at the top of section 1.4: "Programmers have a tacit belief that they can think more clearly (or guess better) when writing tests than when writing code, or that somehow there is more information in a test than in code" – Nick C Aug 02 '16 at 13:51
  • @Nick To me it just reads like a jaded view of OOP. I don't like how the two concepts are treated as inextricable from each other. Also, especially on Salesforce, the test suite gives you an explicit contract of what the code must do. – Adrian Larson Aug 02 '16 at 13:54
  • @NickCook I linked to that in an effort to discourage Adrian from being quite so black and white in his answer. I do believe in the value of unit tests, but also think craftsmanship is required. Focussing on "100% coverage" oversimplifies. – Keith C Aug 02 '16 at 14:06
  • My real feelings about the issue aren't as black and white as this post indicates, per se, but if you think I was focusing on the LOC coverage you missed the point I was trying to make. All I got from that article though is that the author is angry about unit tests. – Adrian Larson Aug 02 '16 at 14:08
  • To be honest, the main takeaway I had was that maintaining tests over time means they need to be associated with a business need. If the business needs change, which tests are no longer valid? Which tests need to be updated? How do we confidently make changes to both the test methods and the code to align to the new business requirements? There are a bunch of other things that I got out of it as well... Is this the right place to be discussing it though? If not, where? (I find it quite interesting) – Nick C Aug 02 '16 at 14:16
  • You're right about that. Here's a chat room for it. – Adrian Larson Aug 02 '16 at 14:18
10

Focussing on coverage in Apex is an example of "what you measure is all you'll get". Whether you aim for 75%, 80%, 90% or 100%, the number tells you next to nothing about whether the key behaviour of the code is being both exercised and confirmed. Developers should start with that aim in mind, and the coverage will happen pretty much automatically.
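The contrast shows up clearly in a pair of hypothetical Apex tests (the `OpportunityScorer` class and its 'Hot' score are invented for illustration). Both earn the same coverage, but only one confirms behaviour:

```apex
@isTest
private class OpportunityScorerTest
{
    // Coverage-only: executes the code but confirms nothing about behaviour.
    @isTest
    static void coverageOnly()
    {
        OpportunityScorer.score(new Opportunity(Amount = 100000));
    }

    // Behaviour-first: states what the code must do; coverage follows for free.
    @isTest
    static void largeDealsAreScoredHot()
    {
        String score = OpportunityScorer.score(new Opportunity(Amount = 100000));
        System.assertEquals('Hot', score);
    }
}
```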

Keith C
9

100% coverage may not necessarily be possible, but you should be able to get close in most cases. I usually set the bar at 95%, because sometimes you just can't get all the way.

When I start from scratch, I'll usually write a test first. If I'm at 100%, I'm done. Otherwise, I'll refactor branches to minimize uncovered areas. If I'm at 100% after an initial refactor, I'm done. Finally, I write unit tests to trip as many exception paths as I can. Thanks to @TestSetup, it's now a lot easier to reach 100%, but there are some things you simply cannot cover.

They're impossible, because the language has no way to test them. For example, anything to do with row lock handling is impossible to reliably test, because you can't simulate a row being locked as a separate transaction.

What you must cover are all the primary paths through your code. Validate that they work correctly. The primary paths should be at least 75% of your code, because that will allow you to satisfy both the 75% minimum requirement, as well as the philosophy that you should verify the primary paths work correctly.

Using the initial unit test to gauge refactoring lets me not worry about unoptimized code ahead of time; the unit test will tell me what I did wrong. You shouldn't usually spend more time writing your tests than you did writing the initial code (i.e. no more than about 50% of your development time should go to tests, with the rest spent developing and fixing bugs).

Any more than that, and you'll start to get what I like to call testing fatigue. Each additional test beyond the first yields smaller and smaller returns, to the point where you'll be writing 25 lines of code just to cover one more line. Once you're past 75%, have covered all your branches, have refactored, and have reached the point where the next 1% takes more effort than the first 75% did, that's usually the time to give up, or at least finish in phases.

sfdcfox
9

100% is a nice aspiration. However, it may be a long road to get there (and a tough sell to management/the project to concentrate purely on coverage).

A more important objective (assuming you already have a codebase) is to have increasing test coverage. This can be achieved "cheaply":

  • every bug fixed: add tests for it.
  • every feature added: add tests for it.

That way, as the codebase churns, coverage will tend towards 100%.
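A regression test added alongside a bug fix might look like this (hypothetical Apex; `PhoneUtil` and the blank-input bug are invented for illustration):

```apex
@isTest
private class PhoneUtilTest
{
    // Bug: PhoneUtil.format used to throw a NullPointerException on blank
    // input. This test pins the fixed behaviour so the bug cannot silently
    // return, and the fix's code path is covered as a side effect.
    @isTest
    static void formatHandlesBlankInput()
    {
        System.assertEquals('', PhoneUtil.format(null));
        System.assertEquals('', PhoneUtil.format(''));
    }
}
```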

Remember having 100% coverage (even 100% branch coverage) does not mean you have no bugs.

hayd
  • Good point in your last paragraph there. No amount of testing can guarantee bug free code. +1 – Adrian Larson Jun 15 '16 at 05:27
  • Exactly. Many times you won't have time to push coverage above 75%; still, in a production org it's good practice not to fall below 85% code coverage. – DarkSkull Jun 15 '16 at 05:42