Test Driven Development (TDD), how it relates to Test First, and what Test First Development is.


Transcript for episode 34 of the Test & Code Podcast

This transcript starts as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


Test & Code | Episode 34:

TDD and Test First

Welcome to Test & Code, a podcast about software development and software testing. I’m recording this Sunday morning, December 31, 2017. Yep, it’s the last day of the year. In today’s episode we’re going to talk about test driven development and test first programming, but first I’d like to do a look back on what happened in 2017. I think I counted the other day, and I’ve only had 7 episodes besides this one come out this year. I was shooting for more than one a week at first, so that’s not that great of a track record; I will try to get more out in 2018. So why didn’t I, what was going on? Well, there’s work of course, and trying to spend time with my family and my rabbit, who right now is in the room with me making kind of a lot of noise, so if you hear some weird background noise, it’s probably the rabbit.

So, 2017. I’ve spent a lot of time working with Michael Kennedy on Python Bytes. We did 52 episodes in 2017, that’s great, and earlier this month we even recorded the year in review for Python, and that went out on Talk Python, which was nice. And then, because of my involvement with Michael and doing podcasting, I was invited by him and others to participate in a booth at PyCon this year, the 2017 PyCon here in Portland, and I got to talk with a lot of great people. During that 2017 PyCon, the book that I was writing, Python Testing with pytest, was released in beta form, and that was available for PyCon attendees first and then shortly after for everybody else. The first edition hard copy was released in September and it really got a great reaction from everybody. I appreciate everybody’s support of that book, the continued support, and people continuing to tell me that it’s affecting them. People send me pictures of themselves with their copy of the book all over the world, and that’s pretty cool, a lot of places I’ve never been. We’re going to be at PyCon again in 2018, Michael and I and a few others. That one is not in Portland, it’s in Columbus, Ohio I think.

I also had a business trip to Munich, and while on my trip I had one evening available, so I reached out to the Python user group in Munich and asked if I could come talk with them. I showed up and we had a good conversation, both about pytest and about really whatever anybody had a question about. It was a fun experience and I’d like to do more talking with smaller groups, because there are a lot of great experts out there that aren’t out there like me trying to do podcasts and books, that are just in their day to day lives helping everybody else out. There were quite a few people there helping me answer questions about testing, and that was a fun experience.

Yeah, so it’s been a fun year, and next year, 2018, promises to be just as fun, I think. But today let’s go ahead and start looking at test driven development, and test first programming as part of that. I’ve had a lot of questions asking me about things like not just the mechanics of how to write a test, but what do we put in there, what should go in there, what’s part of development. And, of course, you can’t really talk about testing too much before stumbling across test driven development. So I’m going to have probably several episodes talking about various aspects of it, because I don’t think I view test driven development the same as a lot of people do.

Back in 2015, I wrote an article called, and this is not a very descriptive title, Test First Programming / Test First Development. I wanted to cover some of that because I think it’s relevant to the conversation about test driven development, and I’ll add some commentary and some more notes. The ideas of test first programming and test driven development are often muddled together. However, test first is powerful enough that I think it stands on its own and should be studied separately. Test driven development and many other Agile practices build on test first, so it’s not just about remembering the past that we should study test first; the lessons of test first are still very important. The concept is quite simple. Before we had XP and test driven development and Scrum and everything, we had some notions of waterfall: the idea that you’ve got some specifications, you write some code, and then you write some tests, whether it’s QA people or some other group or the developers writing tests afterwards, to make sure it all works. That usually blows up, because the testing takes really long and because you find problems. So the idea around test first is: let’s write those tests first, before we get into development, and it will help us during development. Essentially, I have seen it written down something like: first write the specification for a feature, then write some tests to fit the specification, and then write code until all of the tests pass. That’s kind of essentially it, really, but I don’t think those three steps are quite as helpful as they need to be.

So I’ve broken it down; I’ve got a seven-step plan that I’ve listed, but I want to talk first about the specification. This idea that you have a specification to start with is a little flawed. If you have a specification, it might be vague, it might be incomplete, and it’s probably subject to change during the course of development. It might be at a completely wrong level of specification for what you need right now: there’s not enough detail, or there’s too much detail. It also might not cover the failure cases; it just tells you how things ought to be working. And the specification for a feature might contradict the specs for other features. But the most common problem is that there really is no specification. It might just be a simple phrase in an email requesting a new feature, or just a task assignment, or a bug report, or a story describing what you have to do, and that’s not much of a specification.

Instead of those three steps starting with a specification, let’s think about it differently. Because really, test driven development and test first are often not about the specification but about what the developer thinks the software should do. So here are the seven steps. First, think about what you want to do. Then, think about what it looks like from the customer’s perspective, or through an API, or through the calling function: what is your functionality going to do? Think about how to test it. Now, a lot of people freeze up when we say think about how to test it, but just think about it: if you ran it, how would you know that it was working? Next, write just the happy path test case; it’s kind of what you’d try if you were going to test it in the REPL or through the user interface, just written down in a test case. Then write your production code. Now you’ve got your code, you’ve got your test, and if that passes, then expand your tests to have more complete behavior coverage. And then write more production code to pass all those new tests.

So those are the seven steps, and they’re listed in the old article that I’ll link to. However, each of these is way more complex than just one line implies. Each one of these is done on every feature, every behavior, every fix. The steps are in order; it is a progression. There are also loops, iterations, and revisits, and parallelism is possible, and welcome, for some steps. That means you can throw more people at the tests and implementation at some stages, where it might make sense. So let’s jump into them a little bit more.

Think about what you want to do. Collect requirements for the feature if you have them. Define the feature; it should at least be defined in your head. Try to understand the customer problem that you’re trying to solve: what’s the real, core requirement to solve their problem? Even if you don’t have written requirements, you have an idea of what the thing ought to do, and I think it’s important to write that down, even just on a couple pieces of paper, so you have an understanding of the big picture of the feature before you jump into the code. Now is a great time to nail down what the minimum viable scope for the feature is, not all of the different corner cases. The first iteration should stick to the minimal scope; you can expand it iteratively after you have something to work with.

So next, think about what it looks like from the customer’s perspective. What’s the API going to look like? Does this solve the customer problem? Is it clean? Is it awkward? Can you imagine using that API or that user interface? Is it easy to explain? What are the input elements needed? Does the customer, or the calling function, have those input elements? Are they really needed, or are they just things that you want to pass in for other reasons? Is it reasonable for the user to know the information that you’re asking for? These are all API questions; we’re talking about design here, as in the little sketch below. I think it’s really good to think about design in this manner all the time, just get in the habit. If you’re writing this stuff down and you’re thinking that maybe you’re just trying to convince yourself it’s right, that maybe there’s some fishiness to it or it’s still a little awkward, get some feedback. Grab a colleague out of the hallway and ask them to take a look at some of this stuff, or even write it down in a document and push it through code review to get other people involved and commenting on it. It’s way better to get feedback early if you can. I recommend even doing that during implementation: you can push it out, say you want some feedback on it, and then just move ahead to the next stage, but as soon as you get some feedback it might alter things.
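
To make the input question concrete, here’s a hypothetical sketch, not anything from the episode: a tiny tasks module where one signature forces the caller to know internal details, and the other asks only for what a user actually knows. All of the names are made up for illustration.

```python
# tasks_api.py: a hypothetical, minimal tasks module, made up for illustration.

_tasks = {}
_next_id = 1

def add_task_awkward(storage, task_id, summary, owner, priority, created):
    # Awkward: the caller has to invent an id and know about internal storage.
    storage[task_id] = (summary, owner, priority, created)

def add_task(summary, owner=None):
    # Cleaner: the caller supplies only what a user actually knows.
    global _next_id
    task_id = _next_id
    _next_id += 1
    _tasks[task_id] = {"summary": summary, "owner": owner, "done": False}
    return task_id

def get_task(task_id):
    # Look up a task by the id that add_task() returned.
    return _tasks[task_id]
```

If writing a call to the first version feels awkward, that’s exactly the kind of design smell worth fixing while the API is still cheap to change.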

Third, think about how to test it. How do we know if the feature works? How do we know if it’s not working? What can go wrong? Can we check for that? What is the mission critical part of this feature? How can this be tested? Is any API missing that would allow automated tests to interrogate the system for error conditions? This is important. There are a lot of times where things could go wrong, and you could think about how something could go wrong, but we haven’t included an API, even a debug API, to check whether that was the case. This is often a good time to start adding some API that’s not available to end users but is available to tests and developers; there’s a small sketch of that below. What’s the riskiest part of the feature, and how can that be tested? I’m asking a lot of these questions because for complex functionality and complex systems, you can just assume you can’t test it all, you can’t test everything, so we’re trying to figure out what the most important test cases are. And while your head’s in the game of trying to figure out what the API is, what the customer needs are, and how things are supposed to work, it’s really the perfect time to ask yourself which test cases are really important to cover. If you’ve got people around you, you can just give somebody a call, go ask somebody, or pull together a few people in a meeting and talk about it.
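
Continuing the hypothetical tasks sketch from above, here’s what a couple of test-only hooks might look like. These are my illustration, not anything prescribed in the episode: functions that end users never call, but that tests and developers can use to reset state and interrogate the system.

```python
# Added to the same hypothetical tasks_api module sketched earlier.

def _reset_for_testing():
    # Test-only hook: clear all state so every test starts from a clean system.
    global _next_id
    _tasks.clear()
    _next_id = 1

def _task_count():
    # Test-only hook: let tests check internal state without the user-facing API.
    return len(_tasks)
```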

So, about the happy path test case: this is the fourth step. Writing the tests allows you to use the API. If the API is cumbersome, the tests will be a pain to write. If so, change the API, make it easier to use. One of the great things about writing tests during and before development is that you get to use the API before it’s expensive to change it. Write enough tests to satisfy the following: the feature is being tested the way the customer would use it, and every function of the API is being used by at least one test. Are there functions left over that you didn’t use? Can you remove that API function without limiting the user? If you can, then do so. Take note of all the things you need to test that weren’t captured in step three; while you’re writing your happy path test case and looking through the needed or unneeded APIs, you might want to change the full test coverage that you’ll need. A minimal happy path test might look like the sketch below.
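
Here’s a minimal pytest happy path test against the hypothetical tasks module from earlier; the test name and data are made up, and it exercises the API the way a user would.

```python
# test_tasks.py: a made-up happy path test for the hypothetical module above.

import tasks_api  # the hypothetical module sketched earlier

def test_add_task_happy_path():
    tasks_api._reset_for_testing()
    task_id = tasks_api.add_task("buy rabbit food", owner="brian")
    task = tasks_api.get_task(task_id)
    assert task["summary"] == "buy rabbit food"
    assert task["owner"] == "brian"
```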

Now we’ll write the production code. Write the production code to make one of the tests pass, and keep going until all the tests pass. Since I use pytest, I rely heavily on xfail: if I’ve written all the tests and have no code yet, they’re all expected to fail, so I mark them all as xfail, something like the sketch below.
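
A sketch of that workflow, again with the made-up tasks module. Here list_tasks() deliberately doesn’t exist yet, so the test is marked as expected to fail until the production code catches up.

```python
import pytest
import tasks_api  # the hypothetical module from earlier

@pytest.mark.xfail(reason="list_tasks() is not implemented yet")
def test_list_tasks_returns_all_tasks():
    tasks_api._reset_for_testing()
    tasks_api.add_task("one")
    tasks_api.add_task("two")
    assert len(tasks_api.list_tasks()) == 2
```

Once the code lands and the test starts passing, pytest reports it as XPASS and you remove the marker.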

I can do this, I don’t always do it, but it is a possibility. Take note of the things you’ll need to test that weren’t captured in steps three and four, which were thinking about how to test it and the happy path test case. While you’re writing the production code, you’re going to stumble across things that need more tests. You can just add them to the list of things that could be tested; you don’t have to write the tests right away. Also take note of the features that you want to add but that aren’t needed to make the tests pass. It’s really natural to want to add bells and whistles and new features, and once you get going, you’re like, “Oh my gosh, the user might want to know how many to-do items there are,” or other neat things, “we should just add that.” Okay, it’s good to think about that stuff, but don’t just go implement it right away; capture the ideas somewhere in your to-do list or someday-maybe list and do it in a future iteration. That gives you the opportunity to prioritize things. While you’re developing code, don’t ignore your ability to think about new functionality that might be easy to add, or to make notes about functionality that you know you’re going to add but might be kind of hard. Write those down, don’t lose them. So, number six, expand the tests for more complete behavior coverage. We’ve got our set of tests, we’ve got a happy path test case, and we’ve got some production code that makes the tests pass. Keep going until all the tests pass; that was part of step five, so all of our tests are passing now.

Now we want to expand the coverage. Look at the set of tests you have, the feature, and the API, and see if you’ve covered all the behavior. Are all the error conditions tested? Have you made sure that the critical and risky parts of the system are tested fully? While you’re looking at that, you might notice that there are new tests that need to be written, like the error condition sketch below. This coverage bit is a little bit tricky; there are a lot of skills and tools, which we’ll talk about in future episodes, for figuring out how to tell if you have enough tests. So I’m going to kind of skip over this part a little bit, because it deserves a full episode or two or three or more in itself.
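
As one hypothetical example of the kind of test this step adds: suppose we decide that add_task() in the made-up module should raise ValueError on an empty summary. That behavior is my invention for illustration, not something from the episode, and the test would then drive us to add the check in step seven.

```python
import pytest
import tasks_api  # the hypothetical module from earlier

def test_add_task_rejects_empty_summary():
    tasks_api._reset_for_testing()
    with pytest.raises(ValueError):
        tasks_api.add_task("")
```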

So you’ve expanded the tests based on skill sets that you’ll learn in future episodes. How vague is that? Anyway, number seven: write the production code to make all the tests pass. We’ve just sort of iterated; we’ve added more tests, so we have to write more code to make those pass. Are these really my seven steps? They seem weird. I should rewrite this article. Anyway, for very mission critical functionality, all this extra process might sound like a lot of extra work, but it will save you from going down the wrong path. Basically, as you’re driving down the highway, it’s worthwhile every once in a while to pull over at a rest stop and check a map to make sure you’re still going in the right direction, or use a GPS. That’s what I think writing tests while you’re developing code is like. And I think that that’s a good thing.

So why is it important to do this? Why test first? It’s undeniable that for complex software, regardless of whether or not you use TDD or test first, thorough tests are essential to have in place before you call a piece of software done. You can write tests before the code, or you can write tests after you write the code, but there are really good reasons to write the tests first, before you write the code. You can use the tests to guide you in what code to write next. You can use the tests to tell you when you’re done coding, and to help avoid feature creep: instead of adding all the features while you’re developing, you’ve added them to the list and hopefully gotten some more feedback to figure out whether or not a feature is important before you just throw it in there.

If it is the same person writing the tests and the production code, then the assumptions you make while writing the code will color the way you write the tests, so you’re not really thinking about the customer, you’re thinking about your code, and you’ll miss ways people might use your code. That could be problematic. Writing the tests before you write the code will mitigate this somewhat; there are still problems with having one person thinking about the tests and the code at the same time, but thinking about the tests first can help you understand what the specification should be and which parts of it matter most. If you have a specification that’s vague, or has holes in it, or has contradictions, it can push you to get those questions answered, by the people who can answer them, earlier in the process and not later. Since you’re using the API during test development, cumbersome parts of the API can be fixed before the API is implemented, and holes and inconsistencies in the API and the specification can be found early. And the tests will actually get written.

If you put off tests until after you think your code is done, and you’ve maybe dogfooded it or demoed it manually, there’s a reasonable chance that you will ship the code before your automated tests are done, and then your customers will find the bugs that your testing could have caught. That’s one of the things people think about: you could potentially ship bugs. But there’s a risk that I think is even larger, and that is that you’re going to put an API into production that might need to change. If you write the tests after you’ve shipped, during your testing you might realize that the API really sucks and you have to change it. Now you’ve got a problem with backwards compatibility. If you change the API, what do you do? You have to decide whether you’re going to keep supporting the old API alongside a new, easier to use API, or make a backwards incompatible change. It’s way better to make those decisions before you’ve shipped that release to customers.

So be honest, you’re going to do some testing anyway to make sure your code works: manual testing, dogfooding, or just trying it out. Why not just go ahead and write some functional tests first and use those to help develop and debug your code? I know that a lot of Python developers are out there testing things with the REPL. Instead of testing with the REPL, or right after you’re done testing things out with the REPL, go into your text editor and add a test that does the exact same thing you just did in the REPL. Just do it in a little test function and throw an assert at the end, and if you break things later, it’ll fail and that’ll be good. It might look something like the sketch below.
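
For example, something like this, still using the hypothetical tasks module; the REPL session in the comment is made up.

```python
# Suppose you just checked this in the REPL:
#     >>> import tasks_api
#     >>> tasks_api.add_task("water the plants")
#     1
# Capture the same experiment as a test so it keeps checking itself:

import tasks_api  # the hypothetical module from earlier

def test_add_task_returns_an_id():
    tasks_api._reset_for_testing()
    assert tasks_api.add_task("water the plants") == 1
```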

So, my description of test first development will sound a lot like test driven development. I do think the line is blurry, and to be honest, my version of TDD resembles what I presented here a lot more closely than many versions of TDD do. The status quo of how we’re teaching testing to everybody gets me riled up, and especially in the evening, if you talk with me about it, I’ll be animated and you’ll be amused.

Anyway, I don’t want to leave it right here; I want to cover one more thing, and that is test driven development, because it does fit together with test first quite a bit. Instead of leaving you hanging about my thoughts on test driven development, I’m going to point you to a lot of the thoughts that I captured in an article I called Is TDD Dead? And of course, I didn’t come up with that name. There was a series of talks in 2014, kind of started off by a Rails conference talk by David Heinemeier Hansson, about TDD being dead, or long live testing. He wrote a bunch of blog posts about it, and then there were these conversations between him and Martin Fowler and Kent Beck to talk it over. I’ll write up what my opinions are and how this all went, with links to all these talks. But I want to pull a couple of bits out of it.

My opinions: I love test driven development, but my version of test driven development is probably quite different than yours. Ever since I read about test first programming and TDD in the early days of extreme programming, I’ve grabbed onto it like a shipwreck survivor holding onto a flotation device. When taken as just a darn good idea, TDD is super. When taken as the only true professional way to do things, it’s just as annoying as waterfall. I think TDD is most useful when used at a system level. The system is where the users interact with the software, so that’s where most of your tests should be, and where the tests are most important. Focus on what customers need your new functionality to do and whether or not your functionality is fulfilling that. The first tests should check whether your feature is actually satisfying those needs, as functional system tests from the user perspective. Isn’t that what’s most important? I want my code to be Agile, to be adaptable, to be able to be refactored whenever I want. It’s the user-facing system tests that I really don’t want to break. A redesign of part of my system may break many unit tests, but it had better not break any functional system tests. So isn’t the system level the level where you should place the most effort in your testing? I think so.

So most of the contradiction between me and the rest of the world is about where the tests should go: should they be at the unit level or the system level? I think most of them should be as high in the system as possible. But both functional system tests and unit tests are important. I think functional tests are a little bit more important, because they check what the customer cares about and they help me refactor more, but unit tests are great too, especially for the tricky parts of your system. I heard a great rule of thumb recently: if you feel like some class or method needs an explanation with a block comment, that bit of code needs a unit test, or maybe it just needs refactoring or a better name. Also unit test the parts that are likely to fail, the parts you don’t understand well, and the parts you keep getting questions about. And then the bigger parts, subsystems, sub-libraries, different stages in development: test those individually. These are all great things to do, and I’m not telling you not to do them. I’m going to leave it at that for now, because I think this has gotten a little bit long winded and preachy.

Thank you so much for your support in 2017 and I look forward to talking with you about this and a lot of other great topics in 2018.