Preface

Lean TDD is an attempt to reconcile some conflicting aspects of Test Driven Development and Lean Software Development.

I’ve mentioned Lean TDD on the podcast a few times and even tried to do a quick outline at the end of episode 162.

This post is a more complete outline, or at least a first draft.

In audio form

The initial version of this post is also available in audio form as Test & Code, episode 180.

Why Lean TDD? What’s wrong with the old TDD?

If you feel you’ve got a good understanding of TDD, and it’s working awesome for you, that’s great. Keep doing what you’re doing. There are no problems.

For me, the normal way TDD is taught just doesn’t work. I’m trying to come up with a spin on some old ideas to make it work for me. I’m hoping it works for you as well.

I’m calling the new thing Lean TDD. It’s inspired by decades of experience writing software and influenced by dozens of sources, including Pragmatic Programmer, Lean Software Development, Test-Driven Development by Example, and many blog posts and wiki articles.

The main highlights, however, come from the collision of ideas between Lean and TDD and how I’ve tried to resolve the seemingly opposing processes.

TDD

What’s Test Driven Development?

Let’s run through TDD quickly, at least one flavor of TDD.

TDD can be remembered as Red, Green, Refactor:

  • Red: Write a failing test
  • Green: Write enough code to make the test pass
  • Refactor: Change the code into something you’re proud of.

The colors, if they are not obvious, refer to the colors test runner tools often use to show passing and failing tests. Green for passing. Red for failing. Some test runners show a progress bar that starts green, and as soon as one test fails, the whole bar turns red. We like green bars, or green checkmarks, or whatever. Green is good. Red is bad.
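
Here’s what one trip around the loop might look like in Python with pytest. The slugify function is just a made-up example, not anything from the TDD literature:

```python
# test_slugify.py
# Red: at first, only the test exists, and it fails with a NameError
# because slugify() hasn't been written yet.
def test_slugify_replaces_spaces():
    assert slugify("Lean TDD") == "lean-tdd"

# Green: write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")
```

Refactor is the next step: clean up slugify() however you like, rerunning the test to make sure it stays green.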

Just for sanity’s sake, I thought I’d grab a definition of TDD from somewhere that at least sounds authoritative.

Here’s a definition straight from an Agile Alliance page:

“Test-driven development” refers to a style of programming in which three activities are tightly interwoven: coding, testing (in the form of writing unit tests) and design (in the form of refactoring).

It can be succinctly described by the following set of rules:

  • write a “single” unit test describing an aspect of the program
  • run the test, which should fail because the program lacks that feature
  • write “just enough” code, the simplest possible, to make the test pass
  • “refactor” the code until it conforms to the simplicity criteria
  • repeat, “accumulating” unit tests over time

Ok, I’m just going to keep reading now, as the next section on that page is “Expected Benefits”:

Expected Benefits

  • many teams report significant reductions in defect rates, at the cost of a moderate increase in initial development effort
  • the same teams tend to report that these overheads are more than offset by a reduction in effort in projects’ final phases
  • although empirical research has so far failed to confirm this, veteran practitioners report that TDD leads to improved design qualities in the code, and more generally a higher degree of “internal” or technical quality, for instance improving the metrics of cohesion and coupling

TDD has roots in Test First Programming

TDD was once called Test First Programming. It’s interesting to think about this name for a bit because it highlights what TDD was a reaction to.

One way software can be written is some form of Waterfall process:

  • Find out what a customer wants and encode that in a set of requirements.
  • Write up a design for the system
  • Code the system
  • Document it
  • Test to make sure the system meets the requirements.
  • Debug all test failures and change the code as necessary and go back to testing it.
  • If the documentation doesn’t match the code, refer back to the requirements and see who wins. Or change the requirements. Maybe change the docs.

The debugging part can end up being half of the project, especially if you end up having to rethink the architecture to hit a requirement you forgot about.

The test part can be either manual testing or automated. It was often manual, with the steps captured in “scripts”: documents that told a test engineer which buttons to push and what to look for, not automated scripts.

However, let’s be generous and assume automated tests. If the tests are written and run only after the software is mostly done, that’s still the total opposite of TDD.

A clever developer might think “Wait a second, this all would be easier if we had the tests in the first place during development!”. Yep. It’s easier to pass a test if you have the test before you start coding. Then you can just code until all the tests pass, and you’re done.

Except, writing all of your system acceptance tests is really difficult without a system. Also, requirements can change as the system is being developed, and then many of the tests need to be changed, and it’s hard to know which ones when most of them fail early in system development.

So the TDD idea is to write the tests one at a time. Then implement just enough to get it to pass, and continue. Also refactor regularly while your tests are passing. Problem solved, right?

Except, that’s not how TDD is normally taught.

The tests in the waterfall model were system level acceptance tests that matched the customer requirements or other system constraints.

TDD tests are “unit tests”. In the TDD writing and training materials I’ve seen, there isn’t a lot of discussion about utilizing system level tests as part of TDD, or about tying tests to requirements.

They’re just this fuzzy term of “unit”.

What’s a “unit”?

In the early writing from Kent Beck and others in the Extreme Programming world, a unit test was a test written by a developer. This is, I assume, as opposed to the tests written by the QA team.

There’s some fine print in a lot of TDD literature that says the unit tests are not a substitute for independent verification. This is one thing that’s always bothered me; this hand-wavy “someone else is also gonna test it” stuff. What if I don’t have someone else? And isn’t handing it off to QA what we were trying to avoid in the first place?

Back to “developer-written test”. This doesn’t necessarily mean that a unit test isn’t a system test.

However, lots of consultant-like folks, some reinvigorated by the popularity of Scrum, and many books and blog posts specifically exclude system tests from “unit tests”, even if a developer writes them.

I personally thought that the term “unit test” was just a byproduct of all of the unit test frameworks, like Java’s JUnit, Python’s unittest library, etc. They have “unit” in their name. So a test that they run is a unit test. But these tools can run system tests just fine.

What’s a “system test”?

Even if I’ve given up on the term “unit test” as meaning anything other than a tiny test of one function or class, I don’t want to give up on “system test”.

There is a lot of TDD writing and test pyramid writing that treats “system test” as this nasty thing because you are trying to use third party tools to click buttons and enter text and stuff for you through the GUI or web interface of your application.

Frankly, if that’s your only choice, then your architecture is broken.

I would prefer to separate my GUI or CLI or web interface from the lion’s share of the code through an API at a high enough level that I can write my system tests in Python and call the API.
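
As a sketch of what I mean, imagine a hypothetical todo app where the interface is a thin shell over an API layer, and the system tests drive the API directly:

```python
# A hypothetical layering: the GUI or CLI is a thin shell over this
# class, so system tests can drive the same entry points in plain Python.
class TodoApi:
    def __init__(self):
        self._items = []

    def add(self, title):
        self._items.append({"title": title, "done": False})

    def list_open(self):
        return [i["title"] for i in self._items if not i["done"]]

# A system test written against the API, no button-clicking required.
def test_added_item_shows_up_as_open():
    api = TodoApi()
    api.add("write the Lean TDD post")
    assert api.list_open() == ["write the Lean TDD post"]
```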

There will still be GUI or web interface tests. They are not system tests. They are tests of the GUI or CLI or web interface.

I still might lose this battle though. So many people have seen the test pyramid, believe it without thinking, and believe “system tests” should be limited due to high maintenance cost.

Fine! I’ll call my thinking “feature tests”. I’m borrowing the term from a Twitter engineering blog post. A feature test is a test run against the public API, verifying a service or library as the customer would use it. It uses as much of the system as possible, mocking or using some other kind of double only where absolutely necessary, to avoid an expensive service or something.

But I’m getting off topic. Feature tests are not normally taught as part of TDD, even though I believe they were intended to be there as part of the classical interpretation of TDD.
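
Before moving on, here’s a minimal sketch of what I mean by a feature test. Checkout and FakeGateway are hypothetical names; the point is that only the expensive external service gets a double:

```python
# The code under test: a checkout flow that talks to a payment gateway.
class Checkout:
    def __init__(self, gateway):
        self._gateway = gateway

    def buy(self, item, price):
        receipt = self._gateway.charge(price)  # the expensive call
        return {"item": item, "receipt": receipt}

# The only double in the test: a stand-in for the real payment service.
class FakeGateway:
    def charge(self, price):
        return f"fake-receipt-{price}"

# A feature test: exercise the public API as a customer would.
def test_buying_an_item_returns_a_receipt():
    checkout = Checkout(gateway=FakeGateway())
    order = checkout.buy("book", price=20)
    assert order["receipt"] == "fake-receipt-20"
```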

I think this is a good time to talk about lean software development.

Lean Software Development

I was reading about Lean Software Development about the same time I was reading about XP, TDD, and Pragmatic Programmer in the early 2000s, and Lean made a lot of sense to me.

Lean overview

I read about it in a book called “Lean Software Development” by Mary & Tom Poppendieck. They start by discussing Lean in general. It comes from Lean manufacturing, from the Toyota Way, and stuff like that. Lean has been applied to lots of industries, and the Poppendiecks attempted to apply it to software.

I think they nailed the “lean principles” pretty well. However, the “lean practices”, which take up most of the book, seem a bit wacky to me. I think you can get a decent overview by reading about the principles in the Wikipedia article on lean software development:

  1. Eliminate waste
  2. Amplify learning
  3. Decide as late as possible
  4. Deliver as fast as possible
  5. Empower the team
  6. Build integrity in
  7. Optimize the whole

These all sound great. Let’s zoom in on “eliminate waste” before continuing, because lots of people get confused about it.

Eliminate Waste

In the Lean way of thinking, everything that is not adding value to the customer is waste. Let me repeat that. Waste is everything that is not adding value to the customer. There are activities that add value to our development team, or to our company. However, even if we value it, if it doesn’t add value to the customer, it is waste in a “lean practices” sense. So don’t get your feathers ruffled too much if your primary job falls into “waste”. Keep an open mind and think about it a bit and you’ll be fine.

What is waste, in a Lean sense?

Lean software development lists 8 categories of waste:

  1. Partially done work
  2. Extra features
  3. Relearning
  4. Task switching
  5. Waiting
  6. Handoffs
  7. Defects
  8. Management activities

These all seem logical, especially if you read the Wikipedia article or the first chapter of the Poppendiecks’ book.

These are not completely avoidable. But I think we can agree that a lot of stuff on this list doesn’t add value to customers, but does add cost or time to the completion of our code.

How does TDD fare against Lean?

Against the principles

I would say that of the 7 principles, TDD does ok with:

  • Empower the team
    • I think. So that’s nice.

TDD is probably neutral with:

  • Eliminate waste
    • Will cover that separately
  • Deliver as fast as possible
    • We can build up a product, if we prioritize the testing and features to implement early, to have working systems up quickly. Kinda requires some oversight on prioritization, and a mindset for getting complete systems working, and milestones, etc. Not really part of TDD, but often how it’s practiced.
    • There’s a lot of re-work involved with rigid TDD, which I think totally gets in the way of “as fast as possible”. Also, tons of unit tests can get in the way of changing course.
  • Decide as late as possible
    • We don’t make design decisions until we need them for the test/feature we’re working on.
    • We don’t do a lot of up front API design.
    • But a lot of unit test buildup can get in the way of medium to large refactoring if not kept to API endpoints.
  • Amplify learning
    • If we tackle the hard stuff early, we can learn more about the problem space early.
    • We also kind of intentionally ignore experience for each part of the system until a test tells us something isn’t going to work.
  • Build integrity in
    • Lots of tests, right? But unit tests alone don’t tell us if the product works.

TDD is kinda terrible at:

  • Optimize the whole
    • I think we’ve covered that. TDD teaching often has the mindset of “as long as all the parts work, the whole should work”. I just don’t buy it.
  • Eliminate waste
    • hmmmmm

TDD and Waste

To be fair, the kinds of waste Lean talks about are a mixed bag.

  • Management activities
    • neutral, maybe. TDD doesn’t really affect that.
  • Handoffs
    • Better if devs are also doing feature tests
    • Neutral if we’re still handing off code to a QA team for final validation
  • Extra features
    • TDD is good with this on the larger scale, I guess. Maybe.
    • But on the small scale, not so good.
      • Devs can be pretty creative in thinking up imaginary worst-case scenarios that we could write into tests to make sure a helper function handles really large data sets and imaginary numbers and such.
      • But what if the system only ever calls the helper function with a dozen or so elements, all small integers? You may have just over-designed the hell out of that function because you want to write bullet-proof code. That seems like it’s inching toward waste. Or sprinting.
  • Task switching
    • This one kills me. TDD is almost founded on it. One test, then code, refactor, repeat. I do agree that you shouldn’t get too far ahead of yourself in either tests or code, but sometimes you’re in the zone, coding up a bunch of test cases. Why not? Why force yourself to switch to application code?
      • Well. Because you might be going in the wrong direction. Fair enough, so don’t go too far.
      • Another reason often given is to keep the test suite green. You don’t want your 5 new tests failing and then only getting the first one green before the end of the day. Well. Why not? Throw a skip decorator on them with a “not implemented yet” reason, commit to your branch, and call it a day. Who’s harmed by that?
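
With pytest, that might look something like this, where export_to_csv is a stand-in for whatever isn’t written yet:

```python
import pytest

# The suite stays green, and the skipped test documents where I left off.
@pytest.mark.skip(reason="not implemented yet")
def test_export_to_csv():
    assert export_to_csv(rows=[]) == ""  # export_to_csv doesn't exist yet
```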

Ok. I’m going to skip ahead to something that’s not on the list so far, but probably should be:

  • rework

TDD is all about rework. That’s not necessarily a bad thing. I usually don’t know the best way to do something until I’ve done it once. But do I have to rewrite it 6 times? 8 times?

Unless I’m reading it wrong, the refactoring part might mean I have to change a lot of code a lot of times, possibly once for every completed test. This at least has the potential for waste.

What about all those design benefits of TDD?

Supposedly, since you are writing small testable bits, the modularity that TDD gives you leads to a better design. Maybe. Even the Agile Alliance admits that empirical research has failed to prove this one.

I say go ahead and look at more than one test, more than one requirement, to determine how to design some part of the system, but be willing to change the implementation if it isn’t working out.

So, I know that a lot of refactoring and rewriting may happen. Have tests in place to support those refactoring efforts, but don’t restrict developer experience and creativity in the name of “the simplest possible code to get the tests to pass”.

By the way, small unit tests are inherently tied to implementation details, not behavior. This actually hinders refactoring rather than enabling it.

So let’s flip that around, and focus on testing behavior, not implementation.
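
As a tiny illustration of the difference, here are two tests of a made-up Cache class:

```python
class Cache:
    """A made-up class, just to show the contrast."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

# Implementation-focused: breaks if we rename _store or swap the dict
# for something else, even though users would never notice.
def test_cache_internal_dict():
    c = Cache()
    c.put("k", 1)
    assert c._store == {"k": 1}

# Behavior-focused: survives any refactor that keeps the contract.
def test_cache_returns_what_was_put():
    c = Cache()
    c.put("k", 1)
    assert c.get("k") == 1
```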

Which brings us to Lean TDD.

Lean TDD

Lean TDD is like TDD but with, I think, more pragmatism.

Lean TDD overview

Let’s throw down some bullet point features so it’s not too hand-wavy:

  • Have fun.
  • Start with tracer bullet tests and build out.
  • Test behavior, not implementation.
  • Develop automated tests and source code together.
  • Focus on your most valuable tests.
    • Write feature tests.
    • Test your USPs.
  • Write tests when you need them.
  • Refactor when you need to.

Do we need more than that? If not, run with it. If a refresher of all of these is necessary, well, let’s get into it.

Lean TDD in a bit more detail

  • Have fun.
    • We got into coding because it was fun. Make sure your testing is fun also, or you’re not going to want to do it.
  • Start with tracer bullet tests and build out.
    • For new features, start with tracer bullet feature tests that go through as much of the system as possible. Then build out those test cases, either at the feature test level, or other levels, as necessary.
  • Test behavior, not implementation.
    • You want to be able to refactor, even big chunks of code or entire modules or subsystems, or to move functionality from one part to another, if it makes sense.
    • You can’t do that with reams of tiny unit tests that focus on every detail of your implementation. Keep tests about behavior. If you have tests at lower layers, try to keep them at the highest level of API that makes sense.
    • I get trying to keep tests fast. There are ways to do that with larger tests, like in-memory databases, local mock services, and reams of other tricks.
  • Develop automated tests and source code together.
    • Tests need to support application code development.
    • Maybe it makes sense to write the tests first. Maybe to try out what an API or class interface is like to use as client code before you actually implement it. Great. Do that.
    • But maybe it feels more natural to actually just try out stuff with the UI while you’re developing something. And you want to capture that manual behavior expectation in a test after you write the code. Cool. Why not? If you don’t trust yourself to actually get the tests written, then lean on coverage metrics to gamify it for you.
    • If the code you are writing is not accessible from the UI, because you are writing a library or module or function or whatever, why not try driving it from test code? Even if you don’t have asserts in there, test methods are great code snippets to drive other code or APIs. And when you are ready, put asserts in for expected behavior. Pay attention to what outcome you are expecting to happen, what change in the system is telling you that the code is working, and put that in the test.
  • Focus on your most valuable tests.
    • Feature tests are like unit tests for units of functionality. Utilize them. If they still seem too wide of scope to debug failures, then zoom in and implement a similar test closer to the code you think is a risk. That’s, I think, a healthy redundancy of test code. Do remember though that the further you zoom in, the more your test is testing implementation, not behavior.
    • Test your USPs.
      • Make sure you have tests around the unique selling propositions of your application. The reason people use your software over other software. Seriously. Make that test code thorough.
  • Write tests when you need them.
    • Your tests are there for you. Use them. Let them help you develop more solid code faster. Let them help you learn. One of my favorite uses for test code is to test my understanding of the behavior of a data structure, or an API, or a driver or service. These knowledge-building tests may not test your system, but they are still very useful, and writing them helps you get in the habit of writing more tests. (There’s a small example of one after this list.)
  • Refactor when you need to.
    • One of the great things about all flavors of TDD is the emphasis on refactoring, and the permission to do it. It doesn’t have to be all the time, though, and lots of churn in the code base can get confusing. Just don’t forget to do it.
    • The best person to write a bit of code is someone who’s just written it once. Think of all of your initial code as a first draft. Maybe it’s good enough. But usually not.
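
Here’s the kind of knowledge-building test I mentioned above. It checks my understanding of Python’s collections.deque, not any system code:

```python
from collections import deque

# Does a bounded deque really drop old items from the left?
def test_deque_maxlen_drops_from_the_left():
    d = deque([1, 2, 3], maxlen=3)
    d.append(4)
    assert list(d) == [2, 3, 4]
```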

I’m hoping that Lean TDD makes sense to a lot of people who feel like other forms of TDD just don’t work for them.

Feedback welcome, but be nice

Feel free to send me feedback. But be nice.

If you hate the idea, then it’s not for you. I don’t really need to know that.

But if you’d like to give constructive feedback, there’s a feedback form on my podcast site at testandcode.com/contact. I’m also @brianokken on Twitter.