Friday, January 25, 2008

Why TDD is bad

TDD (Test Driven Development) means writing an individual test first and then writing the code that makes it pass. This is the opposite of writing code and then, at some later date, creating unit tests. Proponents of TDD claim that it promotes looser coupling and better code coverage. I agree that junior-level developers would benefit from this; for senior-level developers, however, it simply creates unnecessary work. I find it easier to create loosely coupled methods in a domain class and then simply generate the tests for those methods using a tool such as NUnit Test Generator. After the tests are generated, individual tests can be run against the corresponding methods in the domain class. Without code generation for unit tests, the developer has to write the unit tests by hand, since the code does not exist yet. That is about as much fun as writing NHibernate mappings by hand.
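To make the test-after workflow concrete, here is a rough sketch in Python (the post is about .NET and NUnit, but the idea is language-agnostic; the class and method names below are purely illustrative, not from any real project). The domain method is written first; the test is a stub of the kind a generator would emit, which the developer then fills in.

```python
# Hypothetical domain class, written FIRST (the test-after style the post describes).
class OrderCalculator:
    """Computes an order total in integer cents to avoid float rounding issues."""

    def total_cents(self, prices_cents, tax_percent):
        if tax_percent < 0:
            raise ValueError("tax rate must be non-negative")
        subtotal = sum(prices_cents)
        return subtotal + (subtotal * tax_percent) // 100

# What a generated test stub looks like AFTER the developer fills in the body:
def test_total_cents():
    calc = OrderCalculator()
    # $10.00 + $5.50 = $15.50; 7% tax adds 108 cents -> $16.58
    assert calc.total_cents([1000, 550], 7) == 1658

def test_total_cents_rejects_negative_rate():
    try:
        OrderCalculator().total_cents([1000], -5)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_total_cents()
test_total_cents_rejects_negative_rate()
```

The generator saves the boilerplate of the stub; the developer still supplies the interesting part, the assertion.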

NUnit Test Generator is a great way to quickly create unit tests for NUnit, csUnit, MbUnit, or Microsoft Unit Tests. It generates min/max/null boundary tests too. Simple interface, powerful output.
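Boundary-value stubs of the min/max/null variety look roughly like this (a Python sketch with a made-up domain method; this is the general pattern, not the tool's actual output).

```python
import sys

def clamp_quantity(qty):
    """Hypothetical domain method: clamp an order quantity into [1, 100]."""
    if qty is None:
        raise ValueError("qty is required")
    return max(1, min(100, qty))

# Generator-style boundary tests: smallest input, largest input, and null.
def test_min_value():
    assert clamp_quantity(-sys.maxsize) == 1

def test_max_value():
    assert clamp_quantity(sys.maxsize) == 100

def test_null_value():
    try:
        clamp_quantity(None)
        assert False, "expected ValueError"
    except ValueError:
        pass

for t in (test_min_value, test_max_value, test_null_value):
    t()
```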

The reason TDD works well for junior-level developers is that they tend to forget to write the unit tests at all. When you write the tests first, you can never forget to write them. Senior developers, however, are used to checking code coverage with tools such as NCover and TestMatrix; it could not be more obvious, since the untested areas of your code are highlighted in red. Another way to ensure that tests are being written is to set up a code coverage threshold on your continuous integration server. CruiseControl.NET can be configured to fail the build automatically when code coverage falls below the threshold.
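The CruiseControl.NET configuration itself isn't shown here, but the gate it applies boils down to something like this small, illustrative sketch (the function name and threshold are assumptions, not anything CruiseControl.NET actually exposes):

```python
def coverage_gate(covered_lines, total_lines, threshold_percent=80.0):
    """Fail the build when line coverage falls below the threshold.

    Mirrors what a CI server does with a coverage report: compute the
    percentage and abort with a nonzero exit if it is under the bar.
    """
    coverage = 100.0 * covered_lines / total_lines
    if coverage < threshold_percent:
        raise SystemExit(
            f"Build failed: coverage {coverage:.1f}% is below {threshold_percent}%"
        )
    return coverage

print(coverage_gate(850, 1000))  # 85.0 -> build passes
```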

TestMatrix is a great testing tool that is integrated right into Visual Studio. It has a test runner, captures performance, and performs detailed code coverage. It is like a Swiss Army knife; I wouldn't be surprised if the next version tested for cancer. I like it better than what is built into Microsoft Team Suite.

Below is an example of how TestMatrix highlights code slightly differently than NCover. Covered areas are in green, uncovered areas in red. The little yellow progress bars indicate the largest performance hit for particular lines. In the example below, we see that logging the error is slightly more expensive than rethrowing it.

When it comes down to it, your client wants the highest-quality code at the lowest price. They couldn't care less whether you wrote the tests first or the code first. A good way to blow money and time on your project is to write your unit tests by hand. On a fixed-bid enterprise project, this can mean the difference between success and failure. I have seen it firsthand.

Having unit tests with good code coverage ensures reliability, but it does not ensure maintainability. A junior developer can still do TDD and create crap code. The only way around this is pair programming or code review.

Whichever way you choose to do your tests, creating the tests first or creating the code first, you should always write them. There is no excuse for legacy projects that have no tests; they should be created at the first opportunity. Unit test generation solves the problem of generating tests for legacy code and speeds the process along for new projects. To all of my fans: Qapla'! That means success in Klingon.


Steven Harman said...

I think you've missed the point of TDD - it's not all about code coverage or proving correctness. TDD/BDD is a _design_ tool.

TDD == Test Driven DESIGN.

By writing your tests first you are forcing yourself to *think* about how your code is going to be consumed.

And you'll naturally write code that is testable by default, meaning it's loosely coupled and highly cohesive. Those two things are also known by another name: Separation of Concerns.

All of these things lead to code with all of the *-ilities we strive for.

And the biggest of them all: maintainability.

p.s. - Please turn on anonymous comments so readers can leave feedback w/o selling our souls to Google.

James Bender said...

High code coverage numbers are not a panacea that will assure high code quality.

You need to be smart about what you are metering. And this is another reason to write tests first with some thought put into them as opposed to using a tool which simply reflects on the public interface of a class and creates test stubs.

The thing is, I want to test methods that do something. Business logic, mostly. I don't want hundreds of tests verifying that every "simple" get and set works correctly; those defects should be detected by the business-layer tests that use those properties. And if a property (or any piece of code, for that matter) never gets hit by code coverage, you need to ask yourself, "Does this code _really_ belong here?"
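James's point can be sketched in a few lines of Python (all names here are hypothetical): the property never gets its own test, yet it is fully exercised by the business-logic test that reads it, so a coverage report would still show it green.

```python
class Invoice:
    def __init__(self, subtotal_cents):
        # Plain attribute: no dedicated getter/setter test is written for it.
        self.subtotal_cents = subtotal_cents

def amount_due_cents(invoice, discount_percent):
    """Business rule: apply a whole-percent discount, but never go below zero."""
    discount = (invoice.subtotal_cents * discount_percent) // 100
    return max(0, invoice.subtotal_cents - discount)

# One business-layer test covers both the rule AND the property it reads.
assert amount_due_cents(Invoice(2000), 10) == 1800   # 10% off $20.00
assert amount_due_cents(Invoice(2000), 150) == 0     # over-discount clamps to zero
```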

For me, TDD and code coverage are also about detecting and removing superfluous code that provides no value, creates brittleness, and only incurs a maintenance expense.

Steve Horn said...

@ steven harman:

It is possible to achieve separation of concerns and all of the *-ilities without TDD. I was taught from the beginning that no matter what code you write, you write it as if it were an API to be released to any number of other developers. TDD can be used as a set of "blinders" to keep you focused on the separation-of-concerns goal, but it is not required. And this is where Greg's comment about senior vs. junior developers comes in. A senior person will recognize a well-developed API versus a bad one and be able to achieve his or her goals without being bogged down writing and maintaining unit tests.

Greg Finzer said...

Thank God I finally got some people to post on my blog! Next time I will say that Star Trek is better than Star Wars and we will really have a war of words. :)

Steven Harman, I wholeheartedly agree with you about the wonderful results of TDD. My point is that after creating well-crafted code for a long time, you already know what your code will look like before you write it. Why not save time and skip to the results, since you are going to arrive at the same place regardless of whether you are using TDD or plain ole unit testing? Sorry about not enabling anonymous comments; if I do that, blog-post robots will surely come and try to sell all of us Viagra.

James, I agree with you that high code coverage doesn't necessarily mean high quality. A developer can certainly create spaghetti code with a lot of passing unit tests. However, I like to test simple properties and include them in my suite of tests. It increases the code coverage number that the team lead is so concerned about that he has a vein popping out of his forehead. In an enterprise application with business calculations underneath the getter, it really helps for regression testing. James, you make an excellent point that code coverage can definitely point out code that simply doesn't need to be there.

James Bender said...


The point of writing unit tests is not to increase code coverage. That's a happy side effect of the process.

The reason for writing unit tests is to verify that your logic does exactly what it is supposed to, nothing more and nothing less. Properly metered code coverage is an indicator that that is happening.

I understand your point about wanting to test any logic in a get/set explicitly. But, consider this:

If the only place that logic ever gets called from is your explicit unit test for that property, then a) you'll never know, b) it's code that probably shouldn't be there, and c) you have spent time writing a meaningless test.

Whereas if it's covered by a test of your business logic, you a) know it's correct, b) know it's needed, and c) didn't have to write another specific test for it.

Sure, you might need it in the future to create a "more complete" API, but waiting until it's needed is going to save you up-front development time and insulate you (a bit) from business-induced change.

Say you created it now but you don't need it for a year. What happens when that time comes and you find that the business has changed its mind, and now you are rewriting it when you could have just written it once?

Granted, this is also tied in with agile, but you probably see my point; you're not really saving any time. In fact you're wasting it.

If you had written the test first, you probably would never have written the initial property (until you needed it) and would have ensured that it is actually what the business wants. Proximity breeds accuracy.

The problem with the "carpet bomb" approach is that you end up with a code coverage number that is artificially inflated and completely meaningless. If your coverage goal is 80%, and 78% of it is testing simple gets/sets, you're burying all your metrics for your business layer. You have too much noise and not enough signal.

Part of our job is (or should be) evangelism, which means educating clients about how things work, not simply feeding them a number that makes them content.

Greg Finzer said...

James, we'll have to agree to disagree on that one. To me clicking a button to generate all the getter/setter tests is easier than manually creating tests for just the important ones. We are to test until we are not afraid. Skipping the testing of seemingly unimportant business classes doesn't sit right with me.
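For context, the generated getter/setter tests under discussion are simple round-trip checks, roughly like this Python sketch (a made-up entity; not NUnit Test Generator's actual output):

```python
class Customer:
    """Hypothetical business entity with one property."""

    def __init__(self):
        self._name = None

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

# Generator-style round-trip test: set the property, read it back.
def test_name_round_trip():
    c = Customer()
    c.name = "Acme Corp"
    assert c.name == "Acme Corp"

test_name_round_trip()
```

Cheap to generate by the hundred, which is exactly why the coverage number they produce is in dispute here.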

When you say that there are properties that are created that aren't needed, is this because the business entities were created with MyGeneration or another code generation tool? In other words, it originally came from an unused column in the database?

I like the carpet bomb analogy; my dad flew in the Air Force in a B52. :)

Steven Harman said...

>> Thank God I finally got some people to post on my blog!

Open up the comments and I'm sure you'll get a lot more. If it weren't for the fact that I already had a Google account for email, I wouldn't have posted a comment either. Plus, now I can't get any linkage back to my blog. :)

Steven Harman said...

@SteveHorn - Yes, it is possible to achieve those goals w/o TDD, but if you use TDD _as a tool_ you'll find you arrive at the goal with less code (b/c you're only writing exactly what you need) and more confidence in said code.

@Greg - I don't know what you mean by "plain old unit testing", but I'm going to assume you are referring to Test After Development. In that case - yes, TAD is also OK... I just much prefer TDD as it's a style of designing and coding that is pleasing to me and makes it fun. I'm all about the instant gratification! :) So as long as it's you, the developer, writing the tests yourself and ensuring that those tests actually provide real value, I'm OK with it.

What I'm against is the idea that you can just sit back and "click a button" to generate your tests. The tests generated by such tools are not nearly strong enough, and what's worse, they don't document how the system should work. Well-written code should document itself, and unit tests should serve as a way to show exactly how you expect the code to behave.

@All - Please, no more arguments based on the premise that a "Senior" developer has some magical powers allowing him to produce perfect code that looks exactly like the "picture" he had in his head at the start of the project.

Oh... and if you're looking for a little research - Research Supports The Effectiveness of TDD :).

Brian H. Prince said...

Can you all please get back to work? I am going to invent CDL (comment driven layoffs).


Greg Finzer said...

@StevenHarman I am talking about TAD (Testing After Development). Our in-house template-based code generation tool generates NHibernate mappings, the NHibernate data access layer with CRUD + Search, the business entity, the business logic, NUnit business logic tests, and a basic ASP.NET page. The CRUD business logic tests require some tweaking, and then you can start adding other business logic.

As far as separate test generation tools go: yes, obviously the tests don't even pass when you generate them from Microsoft Team Suite. The developer gets a list of stubs to fill in. The same goes for NUnit Test Generator.

Normally I generate the stubs for the business logic, fill in the blanks for 90% of the behavior of the methods, then check the code coverage to see what is not being hit. NUnit Test Generator extracts the documentation you have already written for the method and documents the test for you, which makes it obvious what the test is testing.

I looked at the Research Link. That is really great that TDD can help undergraduates be more productive when developing. ;)

Sorry, I couldn't help but take a jab at that. :)

James Bender said...

... and if code is not being hit what do you do?

If you write a test to check it, you have no way to verify that the code is used at all in production. Now you are incurring expense to keep code that provides no value. All you're doing is proving you can write a test for a method that may have no use and provide no value at all. Not a productive use of time or money, in my opinion.

Whereas if you had written your tests in advance based on the feature you are developing, and your tests covered all edge cases, you would probably never have written the uncovered code in the first place; but if you had, you would know that it was no longer needed.

Tim said...

Greg... our current situation is a poor example. The code generator did a bang-up job getting us NHibernate mapping files and the CRUD ops for a horrific data model. Beyond that, we don't have many unit tests, but we have a ton of integration tests. The generated ASP.NET files were chucked quickly, which is what James and Harman were getting at: code was produced that was never used. Basically, it saved us a lot of time close to the db, but the further we moved from the db toward the UI, the less value was returned for the time put into the code generator. (And that was just the first phase; in phase 2 we hand-wrote a PILE of logic that has very few unit tests going through it.)

I lean to the TDD side of things, but in the end it's just a tool that is available to you as a craftsman. Is it right for every situation? No. It can be used incorrectly and provide very little benefit.

The point I'm getting at is that TDD is a powerful tool and shouldn't be dismissed based on one situation where a different solution worked.

Greg Finzer said...

@All Thanks for all your great feedback today. I am looking forward to many more posts. Hope you all have a great weekend.

Jon Kruger said...

As soon as I saw "Why TDD is bad" I knew there would be a whole slew of comments! :) I guess you've discovered the way to get some more blog traffic.

Greg Finzer said...

@ Jon

Congratulations again on your promotion. Yeah, I am hoping to come up with something even more controversial for the next post. Have a good one.