In this episode, I talk with Paul Ganssle about a fun workflow that he calls pseudo-TDD. Pseudo-TDD is a way to keep your commit history clean and your tests passing with each commit. This workflow includes using pytest xfail and some semi-advanced version control features.


Transcript for episode 174 of the Test & Code Podcast

This transcript starts as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 In this episode, I talk with Paul Ganssle about a fun workflow that he calls pseudo-TDD. Pseudo-TDD is a way to keep your commit history clean and your tests passing with each commit. This workflow includes pytest, xfail, and some semi-advanced version control features.

00:00:20 Thank you to PyCharm for sponsoring this episode. PyCharm helps me to understand and play with my code. The refactoring tools are amazing. A simple one is just to rename a method, and it gets renamed everywhere. There's a whole bunch of other cool refactoring tools as well. If I change a bunch of code, I can visually see the diff of my code and the git repo code, and I can even visually walk through the local history to see all of my changes. I actually love refactoring, and PyCharm helps me have fun while I'm doing it. Try PyCharm Pro for four months by going to testandcode.com/pycharm.

00:01:11 Welcome to Test and Code.

00:01:22 The conversation I have with Paul jumps right into discussing pseudo-TDD, but I think the discussion might need a bit of context if you're not already familiar with test driven development and pytest. TDD has a lot of flavors, which we're not going to cover; see Episode 162 for that. Some strict forms of TDD include something like this: write a failing test that demonstrates a lacking feature or defect, write the source code to get the test to pass, refactor if necessary, and repeat. TDD doesn't usually talk about source control, but conceptually, if you checked in your code between writing the test and the code, that wouldn't be good, since that test commit would have a failing test. However, if you use xfail, you can do this: write the failing test and mark it as xfail, using strict mode (see Episode 171 for more on this). Now the suite isn't failing, so you can commit between each step, and your tests will be green with each commit. In theory, the benefit of separating these commits is commit size and review burden, but we'll talk about that more in the show. In reality, at least for me, the software development process is way messier than this, and not so smooth and linear. So now we're ready for Paul's workflow. It uses source control features like partial commits, fixup commits, and interactive rebasing to make a nonlinear development workflow look like a super clean TDD workflow through the lens of the commit history.
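Here's a minimal sketch of that xfail-strict step in pytest (`parse_quantity` and `mylib` are hypothetical, used only for illustration):

```python
import pytest

# Hypothetical function under development, not implemented yet.
from mylib import parse_quantity


@pytest.mark.xfail(strict=True, reason="not implemented yet")
def test_rejects_negative_numbers():
    # This fails today. With strict=True, an unexpected pass (XPASS)
    # also fails the suite, so the marker has to come off in the same
    # commit that adds the implementation.
    with pytest.raises(ValueError):
        parse_quantity("-1")
```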

00:03:07 So, Paul Ganssle, you've been on the show a couple of times. The last time was Episode 171, when we were talking about xfail, and we're going to talk about xfail a little again, with a TDD workflow, right now.

00:03:23 Is that correct?

00:03:25 Yeah. Well, I call it a pseudo TDD workflow because I am not nearly disciplined enough to do actual TDD.

00:03:32 I can jump right into it if you want it.

00:03:35 Well, we will, but have you tried the rigorous TDD before?

00:03:45 I mean, not in any sort of formal way. I feel like occasionally I’ll give it a try.

00:03:50 But what really always happens is, even if I’m trying to do TDD, the problem is that you write a test and then it’s a failing test, and then you write the minimum amount of code that it takes to fix that test, and then you iterate on that maybe three or four times. And then you realize that something’s wrong with your approach.

00:04:14 You're like, oh, I didn't account for this edge case. Then you have to rip out a whole bunch of stuff and rebuild it. But then you have a bunch of tests that already exist for the old cases. Some of them you have to remove, but some of them still apply. Right.

00:04:27 I still want those to be valid test cases, right?

00:04:30 Yeah. So now you have more tests than you have passing code for. So you have to write more code than you have tests. And then sometimes you’re going through and you’re trying to debug one thing and you end up writing large sections of the code just to get the debugging done. And now you’ve got more features than you have tests for, or you find that you’ve forgotten, like, five test cases. Like, oh, I tested this, but I didn’t test.

00:04:57 What happens if I pass it a zero or a negative number or a NaN or some other thing like that? So I think in practice, even if you're trying to do full TDD, it just doesn't seem possible to me to actually always 100% be living that TDD life, unless you just, like, literally commit every couple of minutes and then just roll back.

00:05:24 If you have to rip something out, you roll back and then rewrite everything from scratch, which seems counterproductive.

00:05:31 Yeah. Okay, so let’s talk about your pseudo TDD workflow then.

00:05:36 Yeah. So what I've hit on, that I don't know brings me much instrumental value, but it at least has some aesthetic value to me, is this: if you find that you are writing a bunch of code, and you've got tests and code all together, you don't actually have to commit all of it, everything in your workspace. Whether you're using Mercurial or Git or some other version control system, usually there's some functionality for either rewriting your history or making partial commits. So what I've been tending to do is maybe I'll write some tests and code, and what I'll do is first I'll check in the tests, and then I'll check in the code. So I can roll back my version control to when only the tests were in place and see that they were failing, in the same way that you would see that something is failing if you were doing TDD. And then I can go forward one step and run the tests again and see that they're not failing. And now you've got that whole "I'm testing as I go along", and I get the benefit of knowing that my tests fail, because I have tests that I've run and I've seen them fail without the code that fixes them.
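A rough sketch of that sequence in Git (the commit messages and the pytest invocation are just placeholders):

```bash
git add -p                      # stage only the new tests
git commit -m "Add tests for feature X"
git add -p                      # stage the implementation
git commit -m "Implement feature X"
git checkout HEAD~1 && pytest   # step back one commit: the tests fail
git checkout - && pytest        # return to the tip: the tests pass
```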

00:07:01 One of the problems with this, though, is that if you are committing a bunch of broken tests directly into your repository, something like git bisect, or just going through your history and running the tests to see when something started failing, or anything like that, is going to be real messy, because on every other commit the tests are broken, when ideally your CI would pass on every single commit.

00:07:31 So this is where xfail comes in. What you can do is, if you have, say, a test that works along with the code that makes it work, you can add an xfail decorator to it, commit the xfail decorator and the test, and then remove the xfail decorator and commit the code that fixes it. So this is your sort of pseudo-TDD. As in my previous article, I suggest you use strict mode, where an XPASS is considered a failure. Now what happens is you're checking in tests that are broken, but you're saying that they're definitely broken, and that it should be a failure of the test suite if they're not broken. And now the passing condition for your tests is exactly what you want it to be, right? Because if the test is broken on the commit where you were saying, oh, here's a broken test that needs to be fixed, then the test suite passes. And then in the next step you remove the xfail and you add the code, so now your test suite should be passing again. And so now if you wanted to go through and make sure that your CI passes on every single commit, it should work. And you still have the benefits of TDD without actually doing TDD. Or you have those two benefits, anyway.
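So each change lands as a pair of commits, something like this (a sketch; names and messages are placeholders):

```bash
# Commit 1: the test goes in marked as a known failure, e.g.
#     @pytest.mark.xfail(strict=True, reason="not implemented yet")
#     def test_feature_x(): ...
git commit -m "Add xfailing test for feature X"

# Commit 2: remove the xfail marker and add the implementation.
git commit -m "Implement feature X, un-xfail its test"
```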

00:09:03 Yeah, the idea is that you're not actually doing the strict TDD that people talk about, but the spirit of TDD is that you're thinking about testing while you're thinking about coding, and you're still doing that, right?

00:09:24 Yeah.

00:09:27 It may be that you have written the code without adding tests for it, but you're not committing the code without adding tests for it. Before the code ever gets committed, you will realize it, because that's one of the other benefits they pitch for TDD, right? They say, oh, well, you're going to write a bunch of untestable code, and then if you just hand it off to someone else to test, or you try to test it later, you're going to find, oh, you've shipped it and a bunch of people rely on this thing, but you actually wanted to expose features X, Y, and Z to make it more testable, or you wanted to refactor it to make it more testable. Whatever it is, you're writing untestable code if you're not writing tests as you write the code.

00:10:09 And in this case, you're still in that situation where you can't commit your code until you've written tests, because you have to commit the tests first. You have to rewrite the history. So before anything gets merged in, you have tests in place that were failing and are no longer failing at the end.

00:10:30 I think you get a lot of the same benefits.

00:10:32 Yeah. And you get the benefit that every commit has a green test suite.

00:10:38 So like you said, you can still use bisect: add a new test and run git bisect or something to try to find failures.

00:10:46 Yeah.

00:10:51 The big issue that I would say with this idea that every commit should be green is that with GitHub and GitLab and a lot of systems, what ends up happening is that there's just not a great mechanism for running your CI on every single commit. A lot of people don't want that because it would just be too time consuming. And if you're an open source library, your contributors are not necessarily going to want to learn how to do sort of advanced git stuff, like rewriting their history. So if they send me a PR that's got six commits in it, and it's real nicely factored and everything, and then I say, oh, in commit three the test suite doesn't pass, or you need to make sure you run Black on every one of these commits...

00:11:39 They may find that they're handling all kinds of merge conflicts and things like that. And the way that you review these things in big chunks, and people would have to rewrite their history, makes it difficult to automatically enforce this. And if you're not automatically enforcing it, someone's commits are going to break your test suite. They're going to have something weird that breaks the CI.

00:12:07 I think this is one of the reasons, actually, that people use squash merges.

00:12:12 Because that’s what I was going to say on the side branch or the development branch.

00:12:19 There can be a whole bunch of broken stuff.

00:12:22 And I encourage that. Maybe not in an open source environment, but in a work environment, I commit every day, whether it's broken or not, because I don't want the code just sitting on my computer.

00:12:40 Yeah. Well, I mean, I have really found that I like my commits and my code history to sort of tell a story. And that story is not "here's the meandering path that I took to get here", because who really cares about that? You don't care that I made a bunch of off-by-one errors or anything like that. What really matters is that each commit makes an atomic unit of change. And then later, if someone is, say, bisecting, they'll be able to look at all these hundreds of commits, and they'll decide, oh, here's some invariant that is failing right now but was passing in the past, and I want to know what broke it. And if you run git bisect with a script, it will just automatically do the bisect for you, and it'll bring you down to one commit, and that's the commit that broke it. And you kind of don't want that commit to be some random arbitrary collection of broken stuff. And you also don't want that commit to be some huge mega commit with 1000 different changes that are all unrelated but just happen to be sort of correlated.
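The scripted bisect he's describing looks roughly like this (the tag and test file are hypothetical placeholders):

```bash
git bisect start
git bisect bad HEAD                            # the invariant fails here
git bisect good v1.2.0                         # ...and held at this tag
git bisect run pytest tests/test_invariant.py  # exit code drives the search
git bisect reset                               # return to where you started
```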

00:13:56 In your pseudo-TDD model, where you're checking in an xfailed test and then checking in the test without the xfail along with the passing code, I assume you're not squashing those. You're keeping all of those when you merge that?

00:14:12 Yes. My own personal... because I'm sort of fastidious about my git history, I sort of trust myself to rewrite history and break up my commits into small pieces. And usually I'll try and run the tests on every commit, but even then, it's not usually worth it enough for me to set up the CI to run on every commit just for myself.

00:14:38 But yeah, I won’t be squashing. But I also am just sort of trying to make it so that CI would pass on every branch on every commit. Not like enforcing that, right?

00:14:52 If you merge somebody's pull request,

00:14:56 Do you squash those?

00:14:59 It really depends on the person and on what they did. For a long time, what I would do is just straight up refactor people's commits. Someone would send me some messy thing, and then I would just go through and break up big commits, squash together the ones that were intermediate commits, and create a history that I actually liked. But that was just way too time consuming.

00:15:24 It gets to the point where I can't be on my phone, see something that's obviously a good change, and then merge it, if I have to do all that stuff. So the ideal situation would be that you have some sort of CI that enforces whatever it is you care about, whether that's a clean commit history or just that the final thing is fine.

00:15:49 And then you take a look, make sure that the PR looks good, and then you either squash merge, or rebase merge, or do a regular merge commit, as appropriate.

00:15:59 I know in CPython they force everyone to do squash merges, because I think it was just taking up too much time to ask people to have a clean history.

00:16:09 And in other projects I’ve seen, they say we really care about clean histories, so we’re going to ask you to refactor. Those ones don’t tend to accumulate quite as many contributors, but they do have nice clean histories.

00:16:26 So this kind of thing works better for small projects or projects where everyone’s kind of on board.

00:16:33 It also, I think, will work better if you are using a patch management tool where reviews are actually based on commits, and commits are assumed to be atomic units of change. So internally at Google, we have a system that layers Mercurial on top of it. And if I have, like, five commits in my branch and I push them for review, those commits get reviewed individually. So I'll refactor it so that all the changes I care about are in individual commits, and then someone will review each one of those commits independently. It doesn't even have to be the same person. There's Gerrit, which I think is maybe an open source version of that, or it definitely uses a similar principle, and there are ways to review patch sets as well, I think. Jenkins... no, not Jenkins. Phabricator has a similar thing, where each commit can be reviewed individually and then you have a patch set, so you can review everything.

00:17:43 So those systems, where the actual unit of review is the same as a commit, or the same as an atomic unit of change, can be a much better fit for this kind of workflow.

00:17:57 I haven't really tried it with the way xfails work internally at Google. Usually we don't use pytest, for various reasons; I'm sure legacy reasons.

00:18:13 And so if I'm not using xfail, the marker, the CPython expectedFailure method doesn't really have the same level of functionality, and I'm not quite as familiar with it. So I don't tend to bother breaking up my commits into two pieces, where I have the tests first and then the other thing.

00:18:36 But for some of my personal projects, I will definitely do this workflow. And if someone is interested in this workflow, there are definitely ways to set it up, even if you're on GitHub or GitLab. You could fairly easily set up sort of a merge bot. I say fairly easily; I haven't done it, but it's not terribly difficult, I imagine, to set up a bot that does merges under certain conditions. You sort of say, all right, once everything is reviewed and signed off on, then fire off this check that checks every single commit. And if it fails, you don't do the automatic merge, and you wait for someone to handle it.
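Even without a bot, the check itself can be a small script; here's a sketch of one way to do it (branch name and test runner are assumptions):

```bash
#!/bin/sh
# Run the test suite on every commit of the branch, oldest first.
for sha in $(git rev-list --reverse main..HEAD); do
    git checkout --quiet "$sha"
    pytest || { echo "suite broken at $sha"; break; }
done
git checkout --quiet -   # go back to the branch you started on
```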

00:19:17 And if it doesn't fail, you can merge it. And maybe someone can @ the bot to say, hey, check all my commits before I go for review, in which case they'd do their history rewriting. I don't know. This is the kind of thing that's a very aesthetically satisfying approach to developing your code, but I recognize that for a lot of people it's not going to be practical.

00:19:43 Well, okay, so I'm going to tell you I don't do this, but I do something similar, at least something in a similar vein: the switch between source code and test code. Now, if it's pure Python stuff, this is not so much of a big mind shift.

00:20:12 But if you’re using pytest.

00:20:14 For instance, to test something else that’s not written in Python, it’s a bit of a jump to go back and forth between languages.

00:20:23 It's enough of a context switch that I find myself frequently brainstorming test cases: things that I want to verify, that I want to make sure work. So I write test code. I don't write one test, I write several, because I'm in that zone; I'm just writing some test code. Now, all this stuff hasn't been implemented yet, so those tests are failing, and that makes sense, because I know they're not going to pass; the code is not there yet.

00:20:55 I usually don’t do the reverse.

00:20:56 The reverse being: adding something to the source code because I expect that I'll need it in the future. I've been bitten too many times guessing what functionality is needed in the source code and putting it in ahead of time.

00:21:13 I don’t really do that, but I do think ahead and write test cases for things that aren’t implemented yet. I do that a lot.

00:21:21 So in those cases, xfail fits really well, with a little message of "not implemented yet".
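For that brainstorming mode, reusing one marker keeps it cheap; a sketch, with the same hypothetical `parse_quantity` as above:

```python
import pytest

from mylib import parse_quantity  # hypothetical function, not built yet

# One marker, applied to every test written ahead of the implementation.
not_implemented = pytest.mark.xfail(reason="not implemented yet")


@not_implemented
def test_handles_zero():
    assert parse_quantity("0") == 0


@not_implemented
def test_handles_nan():
    assert parse_quantity("nan") is None
```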

00:21:29 And then the case that you said as well, where I’ve got both working code and working tests, but I get down a design path where I realize I really have to pull out a whole bunch of code and start over.

00:21:44 The test code is great to help me along that route, because I know all those test cases are still valid. I still want those things to work at some point. And I want to be able to commit that stuff, because it's still better than what I had before.

00:22:02 I think it is a good idea to go through and xfail the tests that are still failing, and say, yes, this is in the middle of a rewrite or something.

00:22:11 Anyway, it's not really your pseudo-TDD, and it will be a really ugly commit history, but it is something I do to take advantage of xfail and not throw away all the effort of writing those tests in the first place.

00:22:27 Yeah, well, I will say, I actually think that the part that's impractical here, and the part that is possibly novel, is that I'm adding xfailing tests specifically so that I can commit the tests and the working code separately, so that you have a separation between broken tests and working code.

00:22:58 Right.

00:22:58 So there was never a time when I had a broken test that I could have committed.

00:23:04 I just rewrote the history to make it look like that was what happened, because that's closer to TDD. But what you're describing is, I think, just the vanilla use case for xfail, which I am also strongly in favor of. And in fact, I'm much more in favor of that than I am of what I wrote about in the pseudo-TDD workflow. Because in your case, you have tests, and maybe the way you wrote the tests was such that you wrote more tests than you had written code for. But I was also advocating that someone who shows up to your repository and has a bug could even just be committing xfailing tests, regardless of whether there's going to be a fix on the horizon. Well, regardless of whether there is a fix that has already been started. As long as someone says, we are willing to implement this and we're planning to implement this, then yeah, go ahead, write the test. And I actually don't think that's a messy commit history either. It just means that the tests were written separate from the code, and I don't know that they need to be written together.

00:24:18 This is really more for when you write them together and you write them sort of out of order; you can reorder them in your history.

00:24:27 I wanted to ask about a couple of things that I hadn’t seen before.

00:24:31 git add -p.

00:24:34 What is that?

00:24:39 With the Git model, you have this two-stage commit. So usually what everyone gets hit by at the beginning is they do, like, git commit, and then it doesn't add anything to the commit history, because they hadn't staged any changes.

00:24:57 I think Mercurial doesn't have that, but Mercurial does have an equivalent of this, which is hg commit -i, which does partial commits.

00:25:09 But with Git, I think most people are probably familiar with staging one file but not another file.

00:25:18 -p is just doing partial staging. So you do git add -p, and then it'll go through all the changes in little chunks, and it'll say, do you want to stage this? And you say yes or no. And then it'll say, do you want to stage this? And you say yes or no. So what you can do is, if you have one file that has three totally separate changes in it, say you added three different functions, and you're like, oh, I forgot to commit anything...

00:25:42 I want to commit these three separate things that are unrelated changes from each other in three separate commits and make it look like I had good commit discipline.

00:25:54 So you use git add -p, you add the section that you want, and then you do a commit. Then you do git add -p again, add the next section you want, and you repeat that until you have three separate commits.
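The interactive session looks roughly like this (the file name is a placeholder, and the exact prompt letters vary with context):

```text
$ git add -p
diff --git a/mylib.py b/mylib.py
...
Stage this hunk [y,n,q,a,d,s,e,?]? y   # yes to the first function's hunks
Stage this hunk [y,n,q,a,d,s,e,?]? n   # no to the other two changes
$ git commit -m "Add first function"
$ git add -p                           # repeat for the second and third
```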

00:26:06 You can also sort of make some changes and then make fixup commits. So say you have just made a whole random collection of changes: you fixed a bug in a commit that should have been three commits ago, and you also added a feature, because you added the feature first and then the feature revealed a bug that was still in your development branch. So you actually want the bug fix to go back to where you added the feature, in a single commit. So what you do is: you do git add -p, you select the bug fix, the thing that needs to go back into your history, and then you use git commit --fixup with the commit to add it to, and it'll just commit a thing that says "fix this up". And then before you push anything, you just do a rebase with, I think it's called, --autosquash, and it will take all the fixups, reorder them, and squash them together, with no changes in the commit message.
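In commands, that flow is roughly this (the target hash is a placeholder):

```bash
git add -p                          # stage just the bug fix
git commit --fixup=<sha-of-target>  # creates a "fixup! ..." commit
git rebase -i --autosquash main     # reorders and squashes the fixups
```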

00:27:10 This is some advanced git-fu, man.

00:27:13 Yeah.

00:27:14 Mercurial also has something called hg absorb, which is this very magical thing where it will do that for you. So say you have some development branch with three or four different changes on it, and then you run your linter at the end, some other changes come up, and you just make a whole bunch of changes to the final version. You now have stuff that needs to be distributed across five or six different commits, right? If you do hg absorb, it will automatically detect which changes are just fixes to stuff that hasn't been pushed to main yet, and it will automatically add them to those commits instead of creating new commits.

00:28:04 Interesting.

00:28:05 And then there's a plugin for Git called git absorb, which does a very similar thing: you just stage everything and then run git absorb, and it will find the main branch, figure out where the commits have to go, and then commit each section as a fixup.
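Usage for both is short; here's a sketch (git absorb is a third-party plugin, installed separately):

```bash
# Mercurial: fold uncommitted changes into the right draft commits.
hg absorb

# Git: stage the scattered fixes, let the plugin generate "fixup!"
# commits targeting the commits that introduced those lines, then squash.
git add -u
git absorb
git rebase -i --autosquash main
```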

00:28:25 Interesting.

00:28:26 The only problem with that is that Mercurial does a really good job of batching these things together and just squishing them right into the history, whereas git absorb tends to make these very small changes. So say you had some function that's 25 lines long, and then you ran Black at the end, and now you want to squash all the Black changes into the original commit.

00:28:52 So now you've got, like, 20 different lines that changed. git absorb tends to make 20 different commits and have them all be fixups to that original commit, whereas hg absorb will just do the squash for you right away. Okay.

00:29:07 I've been more and more embracing Black, with a couple of exceptions, like the line length. I'm glad that's an option at least, because there are times when I want a different line length.

00:29:22 But anyway, I've never used some of these fancy git things, but what I do use is PyCharm. And one of the things that it does when you commit stuff...

00:29:36 When I go do a commit, I don't actually have to add anything, because if it's already there, PyCharm will just show me all of the stuff that changed, and then I can uncheck the files that I don't want to check in. And then, for each individual file, you can uncheck sections, so I can just check in the section of the file I want. I think it does all these fancy things behind the scenes that I don't have to care about, but it is handy to be able to do that.

00:30:08 Now, this is a completely different topic, but what I find I'm doing sometimes is I'm working on some code, and then somebody wants me to code review something, and I'll forget to check in my stuff or whatever, and I go check out another branch. And now I've got my changes plus their changes, and it's a mess.

00:30:30 You just don’t want to commit at that point because you’ve got a mess, but you can still move forward with tools like this anyway. Cool.

00:30:44 Using a pseudo-TDD workflow to have a clean history.

00:30:48 It's really kind of a cool thing, especially for somebody that wants to...

00:30:54 Like you said, it does tell a story.

00:30:55 So you can look through the commits and go, okay, I added this test, then I fixed it, then I added this other test and I fixed it. Even if that's not really the workflow you went through, it tells a good story. It would be a really great way to review code.

00:31:11 I think it would be cool.

00:31:14 But has there been any reaction to this from anybody else that's run into you doing this?

00:31:21 Not that I've seen. A lot of times when my code gets reviewed, it's either internally at Google, where, like I mentioned, I don't tend to use this kind of workflow anyway, because the facility for using xfail is not super easy, or it's on some GitHub or GitLab type thing, in which case people tend to review the whole pull request and not look at the individual commits. Because the diff from main to the head of your development branch is not going to show any of this, people don't even notice; they wouldn't even see it. Yeah, I haven't seen any reaction, and I don't have a comment section on my blog.

00:32:10 I think that's probably for the best, but I don't know how many people have really seen this and taken it to heart. I don't know. I really hope that someone is not going to use it to berate their team members, saying, oh, Paul says we should do this, so we should definitely have curated all of our git histories.

00:32:32 I don’t know, this is some persnickety stuff here.

00:32:37 I don’t know that this would be the right choice for a team that’s not made up of entirely persnickety people.

00:32:45 Right?

00:32:46 So, something similar to this is a good idea for huge commits, though. Those are reviews that I hate to do: there are, like, 20 files changed, and each file has 100 lines of difference or something. These are hard reviews to do; it's too much. So break it up, even if you have to rewrite history a little bit, to say, well, let's do the review in, like, five chunks: first I attacked this part of the system, and then the other.

00:33:17 I think that sort of stuff might help get through a review, because I personally don't believe in these huge commits.

00:33:28 It's really hard to do those reviews, I think, definitely.

00:33:32 I mean, I definitely recommend very small PRs if you can, especially when you have changes that end up being orthogonal. So you have change one, and change two doesn't build on change one, but you build change two on change one anyway, just because it happens to be in your history. Don't do that; just rebase them both against main and then submit two separate PRs. But yeah, sometimes what happens is you have some feature, and then you build another feature on top of that feature before the first one gets reviewed, and then you build a third feature on top of that feature before the first two get reviewed. And the UI in GitHub, at least, is not super great for this. So what you'll do is you can break it up into three or four PRs.

00:34:20 I've had PRs where I have 50 commits in them, right? But people still end up reviewing just the change from main to tip.

00:34:30 So what ends up happening is you can maybe make three PRs, but you have to say, okay, review them in this order: PR one, then PR two, then PR three. And while PR one is getting reviewed, PR two and PR three are showing all the changes from PR one, and then you have to keep rebasing them. It's not great. I think some of these other review systems, like maybe Gerrit and, I think, Phabricator, because their atomic unit of review is the commit, make it a lot easier for you to do something like: all right, I'm going to break up this huge 500 line change, 1000 line change, 2000 line change into a bunch of small atomic commits, each one of them adding one unit of functionality with one test, or some small set of tests, for it, and then allow people to review them individually and also as a whole. Because it's saying, all right, here's the next patch in this series, and I'm going to want reviews on each of them, and I'll go back and make changes to the old commits.

00:35:38 And then once the whole chain has been reviewed, or the whole set has been reviewed, then we submit the whole thing all at once. Because sometimes, also, these intermediate commits leave you in a better state than you were in before, but the feature isn't complete. So you may be adding just a bunch of private methods and no part of the public API. Is that something you would want to release? Probably not. But it is probably something you wouldn't mind merging, right? If you said, okay, we can merge this: you built out the private part of the API, someone else will build the public part of the API. You could merge it. So at that point, that's a boundary where I would say, okay, we could have a commit that gets merged. But because branches are cheap, we can maintain this feature branch until the feature is done, as long as the feature branch is still kind of short lived.

00:36:30 So you can keep adding patches to that set and then merge it into the main repository only when the feature is completely done.

00:36:40 I work on libraries, though, right? So these sorts of features tend to be 5-10 commits, not one of these situations where someone's got to build a UI, they have a front end team and a back end team, they're doing huge refactorings, and the release cycle is on the order of two years or something.

00:37:03 I can’t really give any advice about that kind of workflow.

00:37:07 Good gosh, do those things exist anymore, I wonder?

00:37:12 Anyway, so this brings a couple of things up that I want to highlight. One, the pseudo-TDD workflow that you described, with rewriting history and stuff, is enabled because of xfail in pytest, or that's one of the reasons. I would love to hear from other people about other workflows that they are able to do because xfail exists, or because some other pytest feature exists. That would be cool to hear. I'd also like to hear from people that have a cool way to help with large reviews; if there's a workflow that helps with that, or tools that help with that, please reach out to me through the contact form. And then we're going to have a link to your article about the pseudo-TDD workflow. And you also have another article about code coverage that we're not going to talk about today, but I'm going to read it, and we're going to have you back on to talk about that. Sound like a plan?

00:38:10 It sounds like a great plan.

00:38:11 Awesome. Well, thanks again, Paul, for taking the time today. Even if people don't do this, I would recommend reading the article and just thinking about the use of xfail in your life and keeping your commit history clean. I'm not as disciplined as this, but I do like to keep my commit history clean, even though in the projects I work on we do squash things regularly, because there's lots of mess in the middle. But anyway, thanks a lot, Paul, and we'll talk to you next time.

00:38:47 Yeah, it’s always great to be on the show. Thanks, Brian.

00:38:56 Thanks, Paul. Lots to think about in here. Thank you, PyCharm, for sponsoring this episode. Visit testandcode.com/pycharm for a four month free trial of PyCharm Pro. Save time, use PyCharm. Thank you, Patreon supporters. Join them at testandcode.com/support. That's all for now. Now go out and test something.