98

I just graduated with a degree in CS, and I currently have a job as a junior .NET developer (C#, ASP.NET, and Web Forms). Back when I was still in university, the subject of unit testing was covered, but I never really saw the benefits of it. I understand what it's supposed to do, namely, determine whether or not a block of code is fit for use. However, I've never actually had to write a unit test before, nor have I ever felt the need to.

As I already mentioned, I'm usually developing with ASP.NET web forms, and recently I've been thinking of writing some unit tests. But I've got a few questions about this.

I've read that unit testing often happens through writing "mocks". While I understand this concept, I can't seem to figure out how I'm supposed to write mocks for websites that are completely dynamic, where almost everything depends on data that comes from a database. For example: I use lots of repeaters that have ItemDataBound events, etc. (again depending on data that is "unknown").
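
For what it's worth, a minimal sketch of the usual way around the "dynamic data" problem (all type and member names here are hypothetical): pull the logic out of the code-behind, hide the database behind an interface, and let the test supply a hand-rolled fake in place of the real data source. In practice a mocking framework such as Moq can generate the fake for you.

```csharp
using System.Collections.Generic;

// Hypothetical abstraction: the page logic no longer talks to the
// database directly, only to this interface.
public interface IProductRepository
{
    IList<string> GetProductNames();
}

// Logic extracted from the code-behind so it can run without a web server.
public class ProductListPresenter
{
    private readonly IProductRepository _repository;

    public ProductListPresenter(IProductRepository repository)
    {
        _repository = repository;
    }

    // The "unknown" data becomes known and deterministic inside a test.
    public string HeaderText()
    {
        return $"{_repository.GetProductNames().Count} products found";
    }
}

// A hand-rolled fake: no database, no ASP.NET runtime required.
public class FakeProductRepository : IProductRepository
{
    public IList<string> GetProductNames()
    {
        return new List<string> { "Widget", "Gadget" };
    }
}
```

A test then constructs `new ProductListPresenter(new FakeProductRepository())` and asserts that `HeaderText()` returns `"2 products found"`; the ItemDataBound handler itself stays a thin shim that delegates to the presenter.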

So question number 1: Is writing unit tests for ASP.NET web forms something that is done often, and if it is: how do I resolve the "dynamic environment" issue?

When I'm developing, I go through a lot of trial and error. That doesn't mean I don't know what I'm doing; I mean that I usually write some code, hit Ctrl+F5, and see what happens. While this approach does the job most of the time, sometimes I get the feeling that I'm being a little clueless (because of my limited experience). I sometimes waste a lot of time this way as well.

So, question number 2: Would you guys advise me to start writing unit tests? I think it might help me in the actual implementation, but then again I feel like it might slow me down.

Jane Doe
  • 457
  • 1
    I would just like to point out the other side of this: some places require them as standard practice. I'm at a place now where we're not allowed to check code in unless it's covered by unit tests. So it's worth noting that even if you don't believe in them, you may be required to write them down the road. I think it's a useful skill to learn and have in your arsenal. Many times I'll find little bugs and mishaps in my code from writing the tests; it's almost a little QA session with yourself. – Adam Jul 24 '12 at 22:12
  • As others have noted, you can do "dumb unit testing": lots of tests for the various cases of that particular module. If you do "design-driven testing", i.e. derive the test cases from the design (rather than the design from the test cases, as in TDD), you'll actually write fewer test cases, but ones that are more meaningful. It's not necessary to write 20 test cases for 2 methods when 5 of them will never occur in the system by design, or to write tests that cover more than one module, saving duplication. – PhD Jul 24 '12 at 23:33
  • Cont'd: Have a look at Design Driven Testing - http://www.amazon.com/Design-Driven-Testing-Smarter-Harder/dp/1430229438/ref=sr_1_1?ie=UTF8&qid=1343172849&sr=8-1&keywords=design+driven+testing – PhD Jul 24 '12 at 23:34
  • 9
    How exactly did you cover unit testing without writing any unit tests? Was this one of those "grade yourself" or "design your own curriculum" schools? – JeffO Jul 24 '12 at 23:52
  • Against legacy systems, for example a very old stored procedure that takes 12 arguments and that you want to migrate to NHibernate, these things save your life. – osdamv Jul 24 '12 at 23:59
  • 4
    Unit tests are questionable--great for dynamic languages, less useful the stricter your language is, but DO BUILD TESTS. They don't have to be unit tests, but you really should have integration tests you can run with JUnit; those are very useful. – Bill K Jul 25 '12 at 00:47
  • 13
    -1 Sorry, I'm all for good arguments, even if I do not agree, but this is just wrong. No one claims "unit tests are for catching bugs in new code", so that's a straw man - unit tests are for catching regressions. And the Dijkstra quote is taken out of context - the context is that testing is pointless if you have a formal specification of your problem. – sleske Jul 25 '12 at 10:27
  • 1
    I would suggest visiting DimeCasts.Net and watching their video series on Unit Tests and Mocking. You will learn a lot by seeing code in action. – Alan Barber Jul 25 '12 at 12:32
  • 9
    "unit tests are for catching regressions". No. Automated tests are for catching regressions. Regressions invariably require the same tests to be run many hundreds of times so it is worth the effort to automate them. Unfortunately many of the responses and comments on this question are really dealing with the issue "Are automated tests that helpful?". Unit tests may be a form of automated test but they have a completely different focus. I certainly consider automated tests to be worth their weight in gold, but that shouldn't be used as an argument to justify unit tests (or TDD for that matter). – njr101 Jul 25 '12 at 15:38
  • @JeffO In my opinion, there's a difference between covering unit testing at university in an academic environment and covering it in a production environment. At university the software engineering courses will teach unit testing, but really grasping its usefulness requires experience. – David van Dugteren Jul 26 '12 at 00:13
  • Yes, it will slow you down. It can consume up to 1/3 of your development time, so you should decide what to test and what not to test. – nubm Jul 26 '12 at 08:59
  • 1
    @nubm: According to case studies at Microsoft and IBM, unit tests actually speed up your total development time. http://stackoverflow.com/questions/3756796/how-much-additional-time-does-unit-testing-take – André Paramés Jul 26 '12 at 09:36
  • An interesting related read http://googleresearch.blogspot.in/2006/06/extra-extra-read-all-about-it-nearly.html – bobby Jul 26 '12 at 10:21
  • @André I consider TDD positive, especially in the context of the whole development cycle. But when a developer is at the start of a new project, he should take into account that it will cost more time. It will hit his immediate productivity and cost him or his employer money. – nubm Jul 26 '12 at 13:55
  • 1
    @AndréParamés I wouldn't look to Microsoft or IBM for best coding practices or case studies that will be relevant to sane development teams. Well, certainly not in a general, department-unspecific sense anyway. – Erik Reppen Jul 26 '12 at 23:38
  • @ErikReppen: care to expand? Are they worse than the average development team? (I frankly don't know). – André Paramés Jul 27 '12 at 10:12
  • @nubm: Well, that's not what the case studies have found. – André Paramés Jul 27 '12 at 10:16
  • Well, MS is legendary for code bloat and quantity over quality approaches to issues. Everything I've seen from IBM that's been web-related, has been horrifying. I think MS is shaping up and certain departments have their stuff together, but I would definitely look to case studies at places that most closely resemble your organization. IBM has been living off of mostly-unwarranted IT-brand-loyalty for decades now. – Erik Reppen Jul 27 '12 at 18:58
  • @AndréParamés You obviously haven't read that study at all. "The increase in development time ranges from 15% to 35%. From an efficacy perspective this increase in development time is offset by the reduced maintenance costs due to the improvement in quality (Erdogmus and Williams 2003), an observation that was backed up by the product teams at Microsoft and IBM." – nubm Jul 29 '12 at 12:03

18 Answers

124

In my opinion: yes they are, and yes you should.

  1. They give you confidence in the changes you make (everything else is still working). This confidence is what you need to mold the code, otherwise you might be afraid to change things.

  2. They make your code better; most simple mistakes are caught early with the unit tests. Catching bugs early and fixing them is always cheaper than fixing them later, e.g., when the application is in production.

  3. They serve as documentation for other developers on how your code works and how to use it.
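
As a tiny illustration of point 3 (the names and the rounding rule here are invented for the example): a test reads like executable documentation of how the code is meant to be called and what it guarantees. Plain exceptions stand in for a real test framework such as NUnit.

```csharp
using System;

public static class OrderTotals
{
    // Invented business rule: totals are rounded to whole cents.
    public static decimal Total(decimal unitPrice, int quantity)
    {
        return Math.Round(unitPrice * quantity, 2);
    }
}

public static class OrderTotalsTests
{
    // Reads as documentation: this is how you call Total,
    // and this is the rounding behaviour it guarantees.
    public static void TotalRoundsToWholeCents()
    {
        if (OrderTotals.Total(0.333m, 3) != 1.00m)
            throw new Exception("expected 1.00");
    }
}
```

A new developer skimming the test learns the call pattern and the rounding rule without reading the implementation.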

The first problem you face is that ASP.NET in itself does not help you to write unit tests—actually it works against you. If you have any choice, start using ASP.NET MVC, which was created with unit testing in mind. If you can't use ASP.NET MVC, you should use the MVP pattern in ASP.NET so at least you can unit test your logic easily.
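
A rough sketch of the MVP idea in Web Forms (all names here are illustrative): the code-behind shrinks to a thin view that implements an interface, while the presenter holds the logic and can therefore be exercised in a test with a fake view.

```csharp
// The view interface: the only thing the presenter knows about the page.
public interface ICustomerView
{
    string CustomerName { set; }
}

// The presenter owns the logic; it never touches ASP.NET types.
public class CustomerPresenter
{
    private readonly ICustomerView _view;

    public CustomerPresenter(ICustomerView view)
    {
        _view = view;
    }

    public void Load(string rawName)
    {
        // Example logic worth testing: normalisation of the input.
        _view.CustomerName = rawName.Trim().ToUpperInvariant();
    }
}

// In production the ASPX code-behind implements ICustomerView;
// in a test, this trivial fake stands in for it.
public class FakeCustomerView : ICustomerView
{
    public string CustomerName { get; set; }
}
```

A test creates a FakeCustomerView, calls `Load("  bob ")`, and asserts the view received `"BOB"`, with no page lifecycle involved.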

Besides that, you just need to get proficient in writing unit tests. If you practice TDD, your code is created testable—in other words, nice and clean.

I would advise you to practice, and pair program. While reading:

Or, for a first overview:

Jon Purdy
  • 20,547
KeesDijk
  • 8,948
  • 4
    Having just started in unit testing, I would have to agree. Not only is it a good habit and very useful combined with a TDD approach, but it saves so much time. I don't think my projects could be as useful if I was not able to just run a unit test and verify that everything is working properly even after adding a new feature or fixing a bug. I can't think of doing regression testing any other way. – kwelch Jul 24 '12 at 19:51
  • 21
    If unit-tests are working as a documentation, there is something wrong. Reading 500 lines of code to understand how 5 lines of code work is backwards. – Coder Jul 24 '12 at 21:00
  • 4
    @Coder: when you test higher-level methods, it does involve a lot more than 5 lines of code. –  Jul 24 '12 at 21:11
  • 7
    @coder: documentation of a class tells you the services that class's instances provide. It tells you less about how that class's instances are used in the larger context, that is, the interaction between objects. Tests give you typical interaction code in some cases, which has been a precious starting point more times than I can count. – Stefano Borini Jul 24 '12 at 21:14
  • 3
    @Coder: I would say a unit test is another kind of technical document, with a different purpose. A well-written unit test not only helps other programmers understand what the code does, but also how to use it. Programmers tend to learn faster when there is an example. – Rangi Lin Jul 25 '12 at 00:49
  • 21
    @Coder: It's not documenting what the code does, it's documenting the assumptions inherent in that code, i.e. why and when it's supposed to work. If the underlying assumptions are invalidated by a change to the SUT, one or more unit tests should fail. This doesn't replace higher-level design/architecture documentation, or even XML docs, but it covers what those things never can. – Aaronaught Jul 25 '12 at 00:50
  • "If you practice TDD, your code is created testable—in other words, nice and clean". Ha. Testable and "nice and clean" are very different things. Unit tests interact with the outer interface of your code, which can still be an ugly mess inside. So no, TDD and testable code does not mean "nice and clean". – CesarGon Jul 25 '12 at 19:32
  • It's a cost / benefit situation. Tests aren't free, they can take a significant % of your development time to create - and the benefit is you get a smaller % of bugs. What's more valuable to you - the time, or the reduced bug count? – niico Jul 24 '17 at 22:59
94

No.

The concept behind unit tests is based on a premise that has been known to be false since before unit testing was ever invented: the idea that tests can prove that your code is correct.

Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up with turtles all the way down.)
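
A contrived example of that last point (the VAT rate and all names are invented): when the test encodes the same wrong assumption as the code, both agree with each other and the suite passes, even though the behaviour is wrong against the actual spec.

```csharp
public static class Vat
{
    // Bug: the spec says 21% VAT, but the developer believed it was 12%.
    public static decimal AddVat(decimal net)
    {
        return net * 1.12m;
    }
}

public static class VatTests
{
    // The same wrong belief is baked into the expected value, so this
    // test passes; it proves only that code and test agree.
    public static void AddsVat()
    {
        if (Vat.AddVat(100m) != 112m)
            throw new System.Exception("unexpected VAT total");
    }
}
```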

Dijkstra trashed the concept of tests-as-proof-of-correctness way back in 1988, and what he wrote remains just as valid today:

It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications.

The other problem with unit testing is that it creates tight coupling between your code and the test suite. When you change code, you'd expect some bugs to show up that would break some tests. But if you're changing code because the requirements themselves have changed, you'll get a lot of failing tests, and you'll have to manually go over each one and decide whether or not the test is still valid. (And it's also possible, though less common, that an existing test that should be invalid will still pass because you forgot to change something that needed to be changed.)

Unit testing is just the latest in a long line of development fads that promise to make it easier to write working code without actually being a good programmer. None of them have ever managed to deliver on their promise, and neither does this one. There is simply no shortcut for actually knowing how to write working code.

There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance. For example, the SQLite database project. But what it takes to achieve their level of reliability is highly uneconomical for most projects: a test-to-actual-SQLite-code ratio of almost 1200:1. Most projects can't afford that, and don't need it anyway.

Mason Wheeler
  • 82,789
  • 69
    It proves that your code behaves correctly for those tests. It's a freaking lot if you ask me. – Stefano Borini Jul 24 '12 at 19:35
  • 9
    @Stefano: It proves nothing whatsoever to do with correctness. It can only prove that the code behaves as the tests expect. The test could still have a bug, and a buggy test is worse than no test at all, as it will give you a reason not to look at the part of the code covered by that test. – Mason Wheeler Jul 24 '12 at 19:37
  • 14
    Dijkstra is a computer scientist and thinks about "correctness" in terms of algorithmic nomenclature. Programming is much more than that. Programming is also being sure that if I move my code from OSX 10.6 to 10.7, it still works as expected. I want a test that tells me that. Tests railroad your code into something that raises an alarm as soon as it steps off those rails. – Stefano Borini Jul 24 '12 at 19:39
  • 2
    Agreed that you cannot generally 'test' correctness into your code. And there's no substitute for writing working code, being clear on your intentions. But how far are you going with your advice? Are you advocating no testing? Are you advocating provable correctness? – joshp Jul 24 '12 at 19:40
  • 6
    Martin Fowler describes the use of JUnit in his Refactoring, published in 2000. If significant unit testing frameworks have been widely used for over 12 years, they hardly constitute a "fad." – Matthew Flynn Jul 24 '12 at 19:47
  • 13
    @Ryathal indeed. Although I appreciate the positive answers people are giving, I don't think someone expressing his opinion in a well-argued manner should get downvoted just because it differs from the general opinion. So Mason, thanks for the reply. – Jane Doe Jul 24 '12 at 19:59
  • 111
    Who today actually believes unit tests are "proof-of-correctness"? Nobody should think that. You are correct that they only prove that those tests pass, but that's one more data point than you had before writing the unit tests. You need testing at many different layers to give yourself insight into the quality of your code. Unit tests don't prove your code is free from defects, but they do raise your confidence (or should...) that the code does what you designed it to do, and continues to do tomorrow what it does today. – Bryan Oakley Jul 24 '12 at 20:09
  • 7
    @Bryan: That's the problem. They raise your confidence, but if the test itself is buggy, then you have false confidence, which is worse than no confidence. And they make your code harder to adapt to changing requirements, which happens all the time in real-world coding, for the reasons I explained. – Mason Wheeler Jul 24 '12 at 20:26
  • 7
    @Joshp: I'm advocating the only thing that actually does work: manual testing. Have people use the software in real-world conditions, and you will find the real-world bugs that people are most likely to run across in actual usage, and it will not require writing 1200x your codebase worth of tests. The codebase I work on at work is around 4 million lines of code. Can you even imagine my boss's reaction if someone told him we needed to write 4.8 billion lines of tests in order to have confidence in our product?!? – Mason Wheeler Jul 24 '12 at 20:28
  • 40

    but if the test itself is buggy, then you have false confidence; and if the manual tester does not perform their job correctly, you also have false confidence.

    – Stefano Borini Jul 24 '12 at 20:31
  • 33
    @MasonWheeler: sorry, Mason, I'm not convinced. In more than a couple decades of programming I don't think I've ever personally seen a case where a test gave a false sense of security. Maybe some of them did and we fixed them, but in no way did the cost of those unit tests outweigh the enormous benefits of having them. If you're able to write code without testing it at the unit level, I'm impressed, but the vast, vast majority of programmers out there are unable to do that consistently. – Bryan Oakley Jul 24 '12 at 20:32
  • 3
    @Bryan: I also unfortunately program without tests (not completely true, we do have some units and full functional tests) and I agree with one thing Mason says: it's a problem of management. If the management rejects the idea of having tested code, and if the developers are not coordinated by an architect that enforces a style and layout that is testable, reducing the whole thing to a free-for-all, you will not get any benefit. but that's because the whole bunch is doing it wrong, and the root of the problem is somewhere else, not in TDD. – Stefano Borini Jul 24 '12 at 20:34
  • 41
    If you wrote a buggy test that your code passed, your code is likely buggy too. The odds of writing buggy code and a buggy test is a whole lot less than buggy code alone. Unit tests are not a panacea. They are a good extra layer (where prudent) to reduce the burden on manual testers. – Telastyn Jul 24 '12 at 20:43
  • 5
    There is one more problem, add multiple threads of execution and all unit-tests are rendered useless. – Coder Jul 24 '12 at 20:54
  • 7
    @Mason: what I have seen with manual testing is that the testing is so expensive that working code cannot be changed. Instead the projects add functionality by cut-and-paste until no more features can be added. – kevin cline Jul 24 '12 at 22:04
  • 6
    @Kevin: I'm not sure what happened there, but someone was clearly doing something wrong. We have manual testing where I work. Our QA team is quite effective at making sure things stay on-spec and at catching edge cases that the programmers forgot to account for, and they cost a lot less than the developers. We make changes to working code all the time when new requirements make them necessary, and then we run it through another QA cycle. And it works, well enough that the users love our software and the companies they work for have made it into the #1 product in our field. – Mason Wheeler Jul 24 '12 at 22:25
  • 19
    There's a world of difference between claiming that unit tests somehow "prove" that your program is correct (which is clearly nonsense) and claiming that they are still useful (which to my mind is very much not nonsense). If you habitually write non-working code, then the fact that you also write (quite possibly equally non-working) unit tests will not save you. That doesn't mean that tests are not a useful addition to code that is already good. You don't use unit tests to try and prove correctness - you use them as a useful sanity check, focusing on the boundary cases. – Stuart Golodetz Jul 24 '12 at 22:46
  • 3
    @Stuart: I can see the appeal in that, but the numbers don't add up. If it was a minor expenditure, that would be one thing. But as the SQLite figures show, to cover everything you're looking at a ratio of 3 orders of magnitude more tests than code. And even if you're not going into that great of depth, it still takes several times more code in tests than code in the actual product being tested to even approach a useful level of coverage. That's a lot of extra code to write, debug, test, and maintain that could be spent on the actual product. – Mason Wheeler Jul 24 '12 at 22:52
  • 3
    +1, not necessarily because I agree with all points, but because this is a well-reasoned rebuttal. Note that many of the SQLite tests are not actual C code, but TCL scripts and SQL statements. The actual C code is about 650.5 KSLOC, a small fraction of the total line count of 91392.4 KSLOC. – Robert Harvey Jul 24 '12 at 23:01
  • 4
    @Mason: I entirely agree that trying to cover everything is crazy - I've tried it in the past and it simply doesn't work well. That doesn't mean that you can't cover the important things that you're worried might not work. When you focus on the boundary inputs to your functions, you're homing in on the things you think are risky. You can apply the same principle to your codebase as a whole and test the modules that you think are risky too. I've also tried this, and it's worked a lot better. I don't think it proves my code correct, but it lets me try my code out in isolation. – Stuart Golodetz Jul 24 '12 at 23:06
  • 15
    @Mason, Whatever unit tests you've seen that were "1200 times as heavy" as the code under test are silly in the extreme, and not a good example of unit tests. – Wayne Conrad Jul 25 '12 at 00:06
  • 5
    +1, I'd honestly like to see some sample code that is well written, properly indented, doesn't use "clever code" where the intent is not obvious, uses good naming conventions, follows the Single Responsibility Principle, isn't chock full of useless comments, etc.; i.e., quality code. Where a unit test can show that it would be significantly helpful when the code changes due to changing business requirements. My feeling has always been that if you are writing "clean code" from day 1 you won't need tests, and they may actually make progress more difficult as they limit your flexibility. – scunliffe Jul 25 '12 at 00:21
  • 1
    Please take this discussion to the chat room. Comments are not meant for extended discussions. – maple_shaft Jul 25 '12 at 02:52
  • 1
    Please a) incorporate any useful information in the answer and b) take the conversation to [chat]. I will be deleting all the comments in the morning. – ChrisF Jul 26 '12 at 22:09
  • 1
    @Wayne: SQLite's unit tests are many times (perhaps not 1200x) larger than the code itself, and it is those unit tests that allow people to use it and trust it. I believe Mason is saying (a) that such expense is justifiable in the case of SQLite, and (b) that most projects are not similar to SQLite in any way. – Warren P Feb 04 '13 at 20:35
  • @Warren, I agree with that statement. An important measure of risk is how many external agents depend upon your code. As that goes up, perhaps you ought to bear a heavier burden of tests. – Wayne Conrad Feb 05 '13 at 01:27
  • So, what is your alternative to automated tests? Use a formal theorem prover to demonstrate that the code is correct? Or prove it manually, and never change it again since the cost to prove it again is too high? Don't do anything and hope nothing breaks? – Paŭlo Ebermann Feb 06 '13 at 19:52
  • 1
    @Paulo: Look up. I've already explained my alternative, and it works well in actual practice. – Mason Wheeler Feb 06 '13 at 19:54
  • 2
    Ah, you mean "knowing to write working code"? It looks like you only have a quite small circle of potential developers. Or do you mean "manual testing"? I think this is not an alternative to unit testing; they complete each other. If your manual testers can test all features of your 4 million lines of code for each release, you either don't release often or have quite few features per line. I'm happy for every automatic test that shows a mistake before the QA team has to catch it (or even misses it), even if those tests don't catch everything. – Paŭlo Ebermann Feb 06 '13 at 20:17
  • 2
    -1 as this proves nothing, and also leaves a false feeling of security in the absence of unit-tests ;) – mlvljr Aug 07 '13 at 09:35
  • 1
    @StuartGolodetz "I entirely agree that trying to cover everything is crazy". You're not getting it; the point made here is that to have a meaningful test you HAVE TO triangulate it to a certain extent. Testing that A => B requires more than writing a unit test showing that A => B. – tobiak777 Jan 09 '16 at 14:21
  • 1
    I agree with this answer. At the same time, I use unit tests for three things: 1) when it is faster to write a test than to debug (often); 2) for complex multi-level pieces of logic; 3) before modifying stable code (regression). If you are SOLID (Open/Closed), they tend to be unnecessary. – tobiak777 Jan 09 '16 at 14:22
  • I'm puzzled by the zero-sum nature of this answer. A good system will have many types of testing. Unit tests test basic code logic for correctness; integration tests test parts of the system together; acceptance tests protect against regressions of software features; manual testers test boundary conditions and edge cases and look for things missed in the automated tests. Bugs found by the manual testers are codified in unit, integration, and/or acceptance tests to guard against future regressions. All of these testing types work together, not against each other. – Kyralessa May 31 '16 at 20:57
  • It proves that it is correct from two points of view: the original developer's and the unit test writer's. Even if both are the same person, they are still two different points of view, which is much better than none. – zzz777 Jun 27 '16 at 20:54
  • Even though I strongly disagree, I gave a well deserved upvote because I'm tired of seeing people treating the 2 arrows as "I agree" or "I disagree" instead of treating them as what they really are: "this answer is nicely worded, structured and helpful" or .... "it's not". – Radu Murzea Jul 04 '16 at 21:01