Cristian Medina recently wrote an article called “Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices.” In this interview, we discuss each point and the corollary of what you really should do. At least, our perspectives.


Transcript for episode 92 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 I met Cristian Medina a few years ago at PyCon. He’s the author of a great blog called tryexceptpass and is fighting the good fight to document and teach how to do things right. He wrote an article recently called “Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices.”

00:00:21 Of course, it’s sarcastic and aims to highlight the many problems with organizational practices that reduce software quality. I asked Cristian to come on the show to discuss it. The article doesn’t go out of character; it only promotes the anti-patterns. However, in this interview we discuss each point and the corollary of what you really should do. At least, our perspectives. Now, Cristian and I don’t completely see eye to eye on all of this. Mostly, but not completely. I really love talking with people who are passionate about making great software, creating great processes and great teams, and improving things when they’re not so great. Even if we disagree, the motivation to improve the status quo unites us more than our differences. Okay, I think I actually made it sound like we diverge more than we really did; it’s really just a small divergence. Just have a listen, and get hold of me if you’d like your shot at giving your perspective. Thank you to Patreon supporters for your continued support of the show. Join them at testandcode.com/support. Thank you to the wonderful people in the Slack channel at testandcode.com/slack for tirelessly helping each other out with software testing and pytest questions. I am truly humbled by the generosity of the regulars in this channel. Thank you to Azure Pipelines for sponsoring this episode. Many organizations and open source projects are using Azure Pipelines already. Get started for free at azure.com/pipelines.

00:02:05 Welcome to Test and Code, a podcast about software development, software testing, and Python. Today on Test and Code, I am thrilled to have Cristian Medina on.

00:02:20 Hi, Brian. How’s it going? Thanks for having me on. I’m Cris Medina. I’ve been working some form of professional engineering job for almost 20 years now and have been using Python for a good portion of that. A lot of that time in my career has been spent doing something with a test organization, whether test design, best practices, or test tooling. And lately, I have been putting a lot of effort into my own website, trying to cover development best practices and stuff like that. Yeah.

00:02:56 You’ve been putting out some really cool articles on there.

00:02:58 Yeah.

00:02:59 Trying to keep a schedule, and with some artwork.

00:03:03 Yes, sir. I hired an illustrator through Upwork. He’s working out very well. We kind of picked this Tinkerer theme.

00:03:12 The main idea of try, except, pass is: don’t be afraid to try stuff, and here are all the different things you can try.

00:03:19 I figured this Tinkerer theme went with it pretty well, and I’m pretty happy with what he’s been doing.

00:03:24 Yeah, it’s good. Cool. One of the reasons why I wanted to have you on was because you wrote this amusingly titled article called “Test Engineering Anti-Patterns: Destroy Your Customer Satisfaction and Crater Your Quality By Using These 9 Easy Organizational Practices.”

00:03:43 Yes, sir.

00:03:44 Clearly you are being facetious and not really advocating for this, right?

00:03:50 Right. So what happened is I was listening to another podcast called Developer Tea, and they had this episode about how to be a bad manager. In it, the host talked about how it’s interesting to look at things from a different perspective sometimes.

00:04:06 And I like sarcasm, so I had loads of fun trying to do that, and wrote this post as it applied to test organizations.

00:04:16 Okay.

00:04:18 It’s amusing. And I’ll have some questions.

00:04:21 Are you okay if we just walk through these?

00:04:23 Yeah, sure.

00:04:24 Okay, so the first one is: make the test team solely responsible for quality, right? Yeah.

00:04:34 What do you mean?

00:04:36 It’s awesome in large organizations. A lot of this stuff applies to larger organizations, not five-to-ten-person teams, but something larger where you have a proper development team, test team, marketing, and all this stuff. It’s always interesting: a lot of times in big companies, they think of test as, well, you guys own quality. So that means if there’s any problem, it’s your fault, right? But yet the test organization might or might not have a lot of say in what goes into the product, what the schedule is, or how that product ships.

00:05:16 Right. So here, essentially, I’m outlining that you make the test team responsible, but just make sure they don’t get any say in what features go into the product or how those features are defined. Make sure the sales guys are the ones that define your target market, make sure the marketing guys define the features. This way you spread the responsibilities around, meaning you’ll need more communication between all the different teams to make sure everything works out correctly. And that’s one way of guaranteeing problems, which is the objective of the article, right?

00:05:54 Okay, so the counter is you want everybody to be responsible for quality, right?

00:06:01 Correct.

00:06:02 It should be a shared responsibility, and you should have less of this arguing. I have some examples of the stuff that people start arguing about in meetings, right?

00:06:13 You want it to be more about: well, this problem happened, let’s figure out why it happened, let’s fix it, and move on. Just avoid the blame game and all that stuff, because it just leads to politics and tribalism.

00:06:29 You’ll find organizations doing things to try to protect themselves from that argument versus doing what’s best for the product or the company.

00:06:37 Yeah. It’s not unusual, when there’s a customer issue or an issue that some other team found, to hear the question: well, I thought you tested it. Why didn’t you test that case? There are a million test cases you could test; you can’t test everything.

This episode of Test and Code is sponsored by Azure Pipelines. Azure Pipelines is a continuous integration, continuous delivery service that supports Python and any other language on Windows, Mac, and Linux, and lets you run automatic builds and tests for your code. It is fully integrated with GitHub and lets you define your continuous integration and delivery pipelines with a very simple YAML document. Azure Pipelines is free for individuals and small teams. If you are maintaining an open source project, you get unlimited build minutes and ten concurrent pipelines. Many organizations and open source projects are using Azure Pipelines already, including yours truly. Automate your builds and deployments with Pipelines, so you spend less time with the nuts and bolts and more time being creative. Get started for free at azure.com/pipelines. That’s A-Z-U-R-E dot com slash pipelines.

00:07:49 So, number two actually is a little bit controversial. I think you’re getting right into it right away. Number two: require all tests to be automated before releasing, right?

00:08:02 And to me, the main thing is that it distracts from the actual feature you’re trying to release and the workflow, especially the workflow for your user, when you require it, and you require it before you release. That means that now everybody is in the critical path to get a feature out. Whereas before, say you’re doing iterative development, you can maybe put the feature out and test it a little bit, even with some portion of your user base, and then iterate on it and make it better as it goes, right? But if you’re doing that and you just spent a bunch of time and man-hours building a bunch of automated tests for that one feature, and all of a sudden you realize you’ve got to iterate on it, well, now you’ve got to iterate on all those tests, right? And when you’re talking customer-level tests, a lot of that stuff gets pretty complicated with the types of workflows and the different iterations of each one of the workflows, and it’s just very time-consuming.

00:09:05 If you do this, it’s a great way to sink a bunch of time into developing this stuff, which you’ll likely have to change in the next couple of iterations. So you might as well accept doing the testing manually first and do some exploratory stuff. Or, I’ve been in orgs where they call it artistic testing, where you’re just fiddling around with the UI, and then in future iterations bring in full automation on the tests, once you know what the better workflows are going to be to automate.

00:09:37 Okay, so as the feature solidifies and it starts becoming one of the legacy features, then you probably still want to build in automation to make sure it doesn’t break. And if we had a continuous integration workflow that tried to validate all the features, any features that aren’t tested aren’t going to get caught. So you do need to have some manual stick time if you’re not automating everything, right? Right.

00:10:08 You still need to do that.

00:10:11 And one way to do that would be, in your continuous integration system, to have a review requirement or a status check, and I’ve done both, where you need an extra check from the person who’s doing the manual testing. And that check might just be that they type something into a CLI that goes and updates your CI to say, yes, I approve this to go on.
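As a sketch of what that CLI step might look like, here's a minimal version built on GitHub's commit status API; the repo name, token variable, and "manual-testing" context are hypothetical stand-ins:

```python
# Hypothetical sketch: a small CLI the manual tester runs to flip a
# required status check to green on the commit they just verified.
import os
import sys

import requests

def approve_commit(sha: str) -> None:
    # POST to GitHub's commit status API; "example-org/example-repo"
    # and the "manual-testing" context name are made up for illustration.
    url = f"https://api.github.com/repos/example-org/example-repo/statuses/{sha}"
    response = requests.post(
        url,
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={
            "state": "success",
            "context": "manual-testing",
            "description": "Exploratory testing passed",
        },
    )
    response.raise_for_status()

if __name__ == "__main__":
    approve_commit(sys.argv[1])  # usage: python approve.py <commit-sha>
```

With "manual-testing" configured as a required status check on the branch, the merge stays blocked until someone runs this after their exploratory pass.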

00:10:36 Okay. The next one is number three: require 100% code coverage. This also depends on the size of your code base.

00:10:45 That’s true. And also controversial, right? Because we want code coverage. But what code coverage measures is whether all the different paths in the execution of a function are getting exercised when you run your tests.

00:10:57 The problem is that when you start measuring for that, and making it a big deal to measure for that, you can wind up with people writing tests that just call the functions, just to improve the numbers, instead of writing meaningful tests that actually do something useful.

00:11:11 Yeah.

00:11:12 It’s like playing a video game where everybody’s trying to get the maximum damage output and they start padding their numbers. Whether they actually kill something doesn’t really matter; it just makes the numbers look better.

00:11:22 Yeah. I’ve had kind of a love-hate relationship with code coverage numbers. I tend to use them more near the end, when I think I have complete behavior coverage. At that point I take a look to see if there are maybe some paths through the system that I didn’t think about, or maybe even some code that can’t be hit because it’s actually dead code, and maybe we could take it out.

00:11:45 The main point is that when you’re looking at whether you’re ready to ship, or what the process is, or how things are doing, just looking at the number doesn’t really convey a lot of info. If you’re saying last time we were at 20% coverage and this time we’re at 30%, that has some usefulness to it.

00:12:04 Relative coverage numbers are more interesting than just a number, right?

00:12:09 Yeah, exactly.
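To make the coverage-padding point concrete, here's a minimal sketch; the divide function and module name are made up, and it assumes pytest with the pytest-cov plugin. Both tests produce identical 100% line coverage, but only one verifies anything:

```python
# calc.py -- a hypothetical module under test
def divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

# test_calc.py -- run with: pytest --cov=calc
import pytest
from calc import divide

def test_padding():
    # Executes both branches, asserts nothing: 100% coverage, zero value.
    try:
        divide(1, 0)
    except ValueError:
        pass
    divide(4, 2)

def test_meaningful():
    # Same coverage, but it actually pins down the behavior.
    with pytest.raises(ValueError):
        divide(1, 0)
    assert divide(4, 2) == 2
```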

00:12:11 Next up is number four. Isolate the test organization from development. This is sort of related to number one.

00:12:18 True. And the more isolation, the better. I encourage physical isolation as well, so that people have to physically get up and walk all the way to the other side of the building if they need to talk to each other. Preferably a different building.

00:12:36 The real advice is people writing the tests could either be the developers or be sitting right next to them.

00:12:42 Right. And there are advantages to people just intermingling. Just having lunch with somebody, just sitting next to them and saying, hey, I’m going for lunch, drastically changes your relationship with that person and their team or their organization, right?

00:13:02 And having that interrelation makes it so that you give people a heads-up about a lot of things going on. The developer can say, oh, I’m tweaking this here, you should check your test that’s related to it. Versus just, oh, here’s a new release, and you don’t say anything, and all of a sudden all your tests are broken.

00:13:20 Yeah. I actually love some of your points about what collaboration might lead to.

00:13:28 And you have to be careful because if you collaborate too much, the test engineers might actually know about the product.

00:13:35 Yes. We don’t want that.

00:13:39 We don’t want the test folks to know much about how the product works or how it’s used, because if they do, then you’ll get better quality.

00:13:49 Yeah. In your experience, have you had more time in kind of a mixed environment, with the people writing tests sitting right next to the developers, or are you more experienced with the separation?

00:14:03 Most of my career was with the separation. Over the past several years, I was working on projects that had them all intermixed, and that was fantastic.

00:14:16 You get to the point, as a developer, where you can even hook things in such a way that you know when the tests are getting executed, and you get notified about stuff, and you’ll fix it before the tester can even come over and talk to you about it.

00:14:32 That’s always fun, too.

00:14:35 They’d come over: I had this problem. And I was like, yeah, try it again. What do you mean? I know you hit this and that; I got an event.

00:14:41 Just try it again. Oh, it’s working now. Okay.
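One way a developer might wire up that kind of notification, sketched as a pytest plugin in a conftest.py; the notify helper is a hypothetical stand-in for whatever alert channel you'd actually use:

```python
# conftest.py -- sketch: ping the developer as soon as a test fails,
# before the tester even walks over.

def notify(message: str) -> None:
    # Hypothetical stand-in; swap in Slack, email, a desktop popup, etc.
    print(f"[ALERT] {message}")

def pytest_runtest_logreport(report):
    # pytest calls this hook for every test phase (setup/call/teardown);
    # only alert on failures in the test body itself.
    if report.when == "call" and report.failed:
        notify(f"Test failed: {report.nodeid}")
```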

00:14:44 I don’t remember if you addressed this in the article or not, but just the notion of having different people writing the tests than writing the code. Do you have any opinions on that?

00:14:53 So I prefer a mix on that. Personally, I would write my code and ask someone else to write the tests for it, and I would give them more or less what to write. Especially if you wrote it and you know the way that something is supposed to work, it’s hard for your brain to break out of that box and say, well, I’ll try to do it the other way. So you always want to have some input from, what you’d call a testing expert, into how your tests are designed. But that doesn’t mean that QA has to be the only one writing all the tests, or that the developer has to be the only one writing all the tests. It’s also good to have the developer write the tests so they know the complexities of having to put that test together, because a lot of times they don’t realize how complicated it sometimes is, especially with large systems, for the test to even run the code to be able to do a test, right?

00:15:47 Yeah. Okay. I do see a different mindset of trying to break things, and that can be a helpful mindset. And it is kind of nice to have developers think about that, too. We also don’t want to spend too much time chasing down a rabbit hole that really is never going to happen.

00:16:03 Right. So that’s why you want that communication, and you want people to think both ways. And if you have one person who primarily thinks one way and one who primarily thinks the other way, and they talk often, like they’re jointly responsible for putting something out, you’ll even get things like design for testability. If I write it this way, then it will be easier to test. If I write my test this way, then I’ll be touching these pieces over here. But is that ever going to be something that’s even possible, right? Yeah.

00:16:30 I’ve seen cases where just the question coming up early of, how do we verify that this is working? The answer sometimes requires an extra helper function or debug function being added so that we can query different parts of the system, parts that aren’t necessarily useful for a user but are necessary to validate that something’s going on correctly.

00:16:49 Right? Yeah.
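A minimal sketch of that idea, with made-up names: a debug accessor that isn't part of the user-facing API but exists so a test can observe internal state:

```python
# Sketch: SessionCache is a hypothetical component whose internals
# aren't user-facing; debug_snapshot() exists purely for verification.

class SessionCache:
    def __init__(self):
        self._entries = {}

    def store(self, key, value):
        self._entries[key] = value

    def fetch(self, key, default=None):
        return self._entries.get(key, default)

    def debug_snapshot(self):
        """Not useful to end users; added so tests can query state."""
        return dict(self._entries)

def test_store_overwrites_instead_of_duplicating():
    cache = SessionCache()
    cache.store("user", "alice")
    cache.store("user", "bob")
    assert cache.debug_snapshot() == {"user": "bob"}
```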

00:16:50 Okay, let’s move on.

00:16:52 Your fifth one is measure the success of the process, not the product. This is fine.

00:16:58 Yeah, this is great.

00:17:01 A lot of times people spend a lot of time measuring things, especially when you’re trying to do complex stuff. But it’s true, and it’s been true everywhere that I have worked, that you get what you measure. So if you measure the number of customer complaints, or the quality from an external usage point of view, then you’ll get better quality and fewer customer complaints. But if you measure the internal parts of how you produce or release code, there are a few things you can get caught up with, and then you kind of lose sight of the whole purpose of what you’re actually doing, which is to release some product or service that someone else is using.

00:17:51 Not your internal organization. Like encouraging test engineers to produce lots of defect reports, and if they haven’t, assuming maybe they’re doing something wrong. But they could just be working closely with the development team and fixing things before it even gets into the issue process.

00:18:06 Exactly. That actually happened to me in some past experiences, where the question was asked: what is QA doing? They’re not writing any defects. Like, you mean they’re testing stuff? They’re fixing all the problems, but there are no issues to document the problems. I was like, okay, we can spend a bunch of time doing that if you want to, but then that distracts from everything else.

00:18:28 Yeah. Even just typing something up for me to close the next day is a waste of time if it’s going to get fixed anyway.

00:18:37 Right.

00:18:38 But those are easy to measure. Actually measuring that coordination is going smoothly and that the product is working, that’s hard to measure.

00:18:45 It’s hard. Therefore, we measure the easy stuff. Yeah.

00:18:47 And then we fix the wrong things.

00:18:49 Yeah. A couple of interesting things that I’ve seen measured in the past are response times.

00:18:57 Somebody will go off and write some defect process, right? And it says, well, a defect, when you write it, must go from this open state to this assigned state to this working state to this whatever verify-and-close kind of thing. And at some point, for some of those state changes, somebody says, oh, it shouldn’t be in this state for more than this much time, right? So then all that does is make people move things between states as often as possible, sometimes right before meetings. I know there’s a status meeting tomorrow about the state of this defect, and it’s in the returned state, which means it’s in QA’s court and I need to do something. But let me reopen it real quick, saying I need something else from development, so that it shows up in development’s court next go-around, right?

00:19:51 Just wasting time on a bunch of process stuff, right? When you could be doing something useful.

00:19:56 Yeah.

00:19:59 Those are silly. And often the people coming up with the process, deciding which states things go in and what they mean, are not the same people that have to use it. So sometimes these states are meaningless.

00:20:11 Yeah.

00:20:14 You wind up giving the process more importance than the actual product.

00:20:18 Yeah. One of the things that has happened to me before is I’ll just play with some code, and then I talk to somebody and say, oh, this doesn’t work, right? And I get that if they’re in the middle of something and they can’t address it right away, it would be more helpful for the process if I submit an issue. I get that. But if it’s something they can just fix right away, there are some people that won’t fix anything unless you file an issue. That’s true. And the process shouldn’t carry that much weight; we’re working on quality software, not a perfect process, right?

00:21:00 Right.

00:21:02 Another favorite of mine is the pass rate. It’s like, what percent pass rate do we have? We’ll only exit or release our code when we hit a 98% pass rate. By the way, almost every org I’ve been in uses 98% for some reason.

00:21:16 Whatever.

00:21:19 Oh, I can get the 98% easy.

00:21:21 I was like, yeah, I can write 100 tests and pass 98 of them.

00:21:26 Yeah. And I could duplicate easy tests that I know are going to pass, with slightly different input, and call them new test cases. Or if I’ve got a handful of failing test cases, I can combine those into one test, and it reduces my failure rate.

00:21:45 That’s correct.

00:21:47 That’s why, whenever I go to status meetings and stuff like that, I ask about the qualifiers behind all this stuff.

00:21:56 98% pass. But what are those two tests that fail?

00:22:00 One of those tests was installing my application.

00:22:05 Right.

00:22:08 Maybe we have a really big problem. Or maybe the main thing that we’re trying to advertise or promote for this release is the thing that’s broken. That’s a problem.

00:22:17 Exactly.

00:22:18 Yeah.
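The arithmetic behind that game is worth spelling out; a quick sketch with made-up numbers shows how a pass-rate gate rewards reshuffling tests rather than fixing bugs:

```python
# Sketch: a pass-rate gate counts test outcomes, not defects.
def pass_rate(passed: int, failed: int) -> float:
    return passed / (passed + failed)

print(pass_rate(96, 4))    # 0.96  -> blocked by a 98% gate
# "Combine" the four failing checks into one test: same bugs, one failure.
print(pass_rate(96, 1))    # ~0.99 -> clears the gate
# Or pad with trivial duplicates that always pass.
print(pass_rate(196, 4))   # 0.98  -> also clears the gate
```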

00:22:19 So a better measure, I think, is just having the team more involved with the entire process, understanding all the features that we’re trying to push out, and just a confidence level. How does everybody feel about it? Are you comfortable with pushing this out to customers? Do you think things are going to be okay?

00:22:40 I know that’s sort of a mushy, touchy-feely sort of measurement, and you can’t quantify it very easily, but I think it’s more accurate. When I ask the team, are you comfortable? Do you think it’s solid enough to go to customers?

00:22:54 Then I will hear things like, oh, there’s this one issue that really should be fixed, and it might even be labeled normal level or medium severity. Why is this not critical? Well, it doesn’t really crash the box, but it’s just wrong and it’s annoying.

00:23:12 Well, then we need to fix it.

00:23:14 Yeah, I agree.

00:23:15 Anyway, okay, where are we at? Five?

00:23:18 Moving towards six.

00:23:19 Moving towards six. So six is: require granular projections from engineers. What do you mean by that?

00:23:25 I mean: Brian, what day are you going to be done with this feature? Tomorrow? I want a date.

00:23:34 You can’t tell me October.

00:23:36 You can’t tell me the last week of October. You need to tell me: October 29, by end of day, I will be complete.

00:23:44 I’m going to pad it by a month, then.

00:23:48 But when you do that, Brian, I’m going to bring it up with the whole team, and I’m going to say, hey guys, do you really think the amount of time Brian says he’s going to spend developing this feature should be 15 days, or however long that was?

00:24:04 Yeah, right.

00:24:06 And then you’ll find a bunch of people start arguing between themselves: no, you can do this in so many days, you can do that in so many days.

00:24:18 These days I try... I like the iterative model of development. It seems to work pretty well.

00:24:25 So to me, the granular stuff just kind of makes that almost impossible.

00:24:31 And just in general, if I’m writing a function that’s going to print something out to the screen, I can tell you how long it’s going to take me to do that. But if I’m building some sort of complex system, which is the usual thing that happens in the real world, it’s really hard, especially when you’re communicating with third-party libraries and you have dependencies on hardware or deployment strategies and all that type of stuff, to know whether all that stuff is going to line up as planned. A lot of times, developers don’t even know how to solve a problem until they get into it, or they do know how to do it, but it’s all based on a bunch of theoretical documentation, which could just be wrong.

00:25:19 The whole point of this is to try to help the developer release code iteratively. Don’t go nuts about specific dates and specific days and measuring how many hours were spent developing a specific feature, right?

00:25:40 Yeah.

00:25:40 It all just leads to contention and BS. I like having estimates referred to as estimates, not commitments. I like saying I estimate that I’ll be done in so many iterations, for example, which is a lot easier to do than to say I’ll be done by this day. In the end, if you’re doing an iterative model and you’re done midway through the iteration, you can’t release it until the iteration is complete anyway, so who cares, right?

00:26:06 Yeah.

00:26:07 And if you have a granular system and you miss that one iteration, and it’s like a two-week thing, then, hey, next one you’ll be done.

00:26:14 Now, I remember when I was completely opposed to all sorts of estimating, and that was when I wasn’t a manager.

00:26:23 But now, being a manager, it is helpful to know kind of how big you think a task is. And I think number of iterations or something is good, because somebody is saying it’s a big deal: we’re not sure, it’s going to take at least a month, and it might take longer once we get into it. That’s completely valid. But then there are things that are like, oh, it’s a little hairy, but yeah, I can get it done in like half a day.

00:26:51 Right.

00:26:52 I think it’s also important to practice estimating stuff, even if you’re not asked to.

00:26:58 I agree, it’s a skill.

00:26:59 How long do I think this is going to take? And then measure myself. Even if you’re not telling anybody, you get better at guessing how long something is going to take.

00:27:06 Something I did recently, because I was kind of in a team lead situation, was to say, hey, I think the team can have this set of features done by this quarter, and then I will give you more details as we get closer.

00:27:27 Once you enter the quarter, you kind of have an idea of whether it will be done at the start of the quarter, midway through, or at the end, right? And then you go on from there and say, oh, I need two or three more iterations to finish it up.

00:27:40 And depending on what kind of project you’re on, scheduling sometimes has to coordinate with other teams. So it’s a necessary evil. But anyway, okay, number seven is: reward quick patching instead of solving. This is good, right?

00:27:56 Yeah. A lot of times when people are under the gun trying to get something out, organizations tend to push that onto the engineers: I need something, and I need it now. If you keep pushing that, you will get a patch to the problem, versus allowing somebody enough time to figure out the real root cause. If you never find out what the problem is, that leads to a whole bunch of other issues, which is all the scaffolding around the code to try to make it work. And then it’s all just very brittle, because you don’t really know what the underlying issue is. There’s something there, but you don’t know what it is, and there’s no time for you to go figure it out. So as an organization, if you encourage the quick fixes and the get-it-done-now kind of scenario, that’s also something that’s going to lower the quality of your product.

00:28:49 Now, that being said, I’ve seen it work okay in situations where a quick patch is possible to ease the pain. If there’s a customer problem, sure, fix it quickly, and then go back and do the correct fix, as long as you allow time in the schedule to go back, find the root cause, and make it more maintainable for the future.

00:29:12 Right. A couple of workarounds here and there, just to get you through the current situation and to buy you the time to do the research. Great idea.

00:29:23 Yeah.

00:29:24 You’ve got to be careful, though, because if there’s not good enough communication through the entire management chain, you might not get approval for those full fixes.

00:29:35 What do you mean? You already fixed it. Let’s move on and put more features in.

00:29:38 Exactly.

00:29:39 Yeah. Okay. Number eight is: plan for today instead of tomorrow.

00:29:44 So the idea here is, I’ve been in many planning meetings where people prefer to hear the everything-is-awesome version of how things are working or how things might work. But sometimes you actually have historical data that says: when we touch this piece of code that has this interaction with these systems, we tend to have these types of problems, which take so long to debug and resolve. Therefore, your actual number for the average case, based on the past two years of work, is this much time. But if you do that, then you’re planning ahead for the actual problems you might encounter. And since our goal is to produce a crappy product, you don’t want to do that at all. Just plan based on everything always working on the first go and there not being any issues.

00:30:39 You can’t really do that.

00:30:41 It’s amazing how much that comes up. I’ve actually heard, no, you’re not allowed to plan for the average case, you have to plan for the best case, several times in the past. Yeah.

00:30:53 Okay.

00:30:54 I remember being in a situation where we were on the same team working on two different projects, and the instructions for estimating all the tasks involved in a project were: in an ideal case, if you had nothing else to work on and no distractions, how long would this task take? So you estimate all of those individually, and do that for the other project too, and then somebody takes all those numbers and says, well, we can clearly do both of these projects at the same time, and it’s the best-case number, but let’s just schedule for that anyway.

00:31:30 And now you’re committed to it. And it’s like, why aren’t you done yet? It’s not like you have any emergency problems or customer issues to fix while you’re writing your features, and nobody needs training, and nobody goes on vacation.

00:31:43 Another favorite is when people plan something to have a release in late December, and everybody’s got freaking Christmas vacation.

00:31:53 Yeah.

00:31:54 So now I feel ripped off, man. You told me there were nine steps, and the ninth is just the conclusion.

00:32:01 Well, my technical sales team told me that I had to ship the article.

00:32:07 Okay, so it wasn’t complete.

00:32:10 Well, that was the whole point, right?

00:32:13 We can come up with a bunch more stuff. I just had to cut it off somewhere.

00:32:18 It’s an amusing take on it. I have to admit, I do the wrong thing: if I’m considering working on somebody else’s open source project, I clone the repository, I run the tests, and I check whether there’s coverage already. I admit it, I judge open source projects on their code coverage number. Mostly because I want to know, if I’m adding new code, whether I’m writing enough test cases to make sure I’m testing it. And I know that just hitting 100% coverage doesn’t guarantee that I’ve really tested everything, but it’s at least an indicator.

00:32:58 Sure, agreed. In open source, you don’t really have a lot of measures. And it’s different when you, as an engineer, read that number and understand the implications of it, versus bringing it up in a management release meeting in some organization, where everybody’s just chasing the number.

00:33:21 And the communication mechanisms for distributed open source projects are completely different, so we have different guidelines anyway. Well, cool. Thanks a lot for writing this article, and for coming on the show to discuss it with me. It’s been fun.

00:33:36 No worries. Yeah, it’s been fun. If anybody is interested in anything else related to testing or the practicality of software development, or, lately, I’ve got this long series on continuous builds, feel free to swing by tryexceptpass.org, or find me on Twitter.

00:33:54 You’re @tryexceptpass there also?

00:33:55 Right, @tryexceptpass. That’s correct.

00:33:57 Yes.

00:33:58 Cool.

00:33:59 Well, keep up the good work and I’ll talk to you later.

00:34:03 Thank you to Cristian for this great interview. Keep up the great writing and teaching. And if you’re looking for ideas for your blog, I’m a huge fan of all things retro sci-fi: rockets, UFOs, aliens, blasters, etc. Just a suggestion. Thank you to Patreon supporters for continuing to support the show. Join them by going to testandcode.com/support. And thank you to Azure Pipelines for sponsoring this episode. Automate your builds and deployments with Pipelines, so you spend less time with the nuts and bolts and more time being creative. Get started for free at azure.com/pipelines. That’s A-Z-U-R-E dot com slash pipelines.

00:34:41 That link is also in the show notes at testandcode.com/92. Oh my gosh, 92. We’re closing in on 100. So cool. That’s all for now. Now go out and test something.