Paul Ganssle, a software developer at Google, a core Python dev, and an open source maintainer for many projects, has some thoughts about pytest's xfail. He was an early skeptic of using xfail and is now a proponent of the feature. In this episode, we talk about some open source workflows that are possible because of xfail.


Transcript for episode 171 of the Test & Code Podcast

This transcript starts as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 Paul Ganssle is a software developer at Google, a core Python dev, and an open source maintainer for many projects, and he has some thoughts about pytest xfail. He was an early skeptic of using xfail and now is a proponent of the feature. In this episode, we talk about some open source workflows that are possible because of xfail.

00:00:20 Thank you, PyCharm, for sponsoring this episode. PyCharm saves me time. Error highlighting lets me know as soon as I type a mistake. Code completion and parameter hint pop-ups save me from having to look stuff up. The Git integration makes all of my Git workflows super easy, and then you get a whole bunch of bonus features like the database front end that are super cool and a super fast debugger. I wonder how PyCharm will save you time. Find out for yourself at testandcode.com/pycharm.

00:00:48 Welcome to Test and Code.

00:01:14 Paul Ganssle contacted me and said he wanted to talk about xfail, and I really like talking about pytest, so I'm up for it. So Paul, in case people don't remember who you are, I'm pretty sure you've been on the podcast before, right?

00:01:28 Yeah, I think it was 16 to 18 months ago, something like that. I was talking about subtests, I think.

00:01:35 Oh, right, yeah.

00:01:36 And that was my last blog post until like last month or something. It was up for like 18 months. I wrote a blog post about that and then just completely stopped blogging for a super long time.

00:01:48 So why did you feel the need to blog about xfail?

00:01:52 I think actually because I sort of half promised you that I would talk about xfail on a future episode of Test and Code, like a long time ago.

00:02:02 And then I felt really bad about not doing it. And I've noticed that you've been doing a couple of episodes about xfail, and I was like, okay, I should probably do this. It'll be thematically appropriate with what you've done recently.

00:02:15 And just in general, I guess I’ve been feeling a bit more that like creative urge to write something down.

00:02:26 So do you use xfail regularly?

00:02:32 I try and make it a part of my development workflow.

00:02:36 Lately I've been doing a lot more work stuff and a lot less open source. And when I do open source, it's often in CPython, where it's a lot harder to use xfail. But when I have the chance, like when I'm working on dateutil or any of my own personal projects, something that uses pytest,

00:02:57 I definitely like to use xfail.

00:03:02 I think there’s another blog post coming out later about how I integrate it into my personal workflow.

00:03:08 But in general I have a policy where... let me back up a second. Originally, when I first heard about xfail, I was like, what is this? Why would I use it? It seems really stupid, because it's just this thing where you will put in a broken test and you'll mark it as xfail. But you won't mark it as skip, right? You're saying this test is going to fail? And I was like, why is this different than skip? And also, why would I want to check in broken tests, right?

00:03:43 People came to me and they offered, they're like, oh, do you want me to add an xfailing test for this issue that I raised? And I was like, well, why don't we just add the test when we fix the issue?

00:03:55 What purpose is it serving?

00:03:58 But I was actually convinced by one of my fellow maintainers, Brock Mendel, that it actually makes sense to use xfail. So I guess I didn't really understand that the difference between xfail and skip is that xfail actually runs the test and then it sort of checks whether or not the test failed, and skip just makes it so you don't run the test at all.

00:04:21 And the main reason why you would want these two things to be different is that when you're running the test and checking that it fails, you can actually enforce that it fails, so that you know that the test is... well, you can know two things about it. One, you can know the test is working in a limited sense. Right. So you know that you haven't fixed the issue and the test is failing. So when you fix the issue and get rid of the xfail, you do know that the test works, because you didn't change it. It used to fail and now it succeeds.
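Here is a minimal sketch of that difference; the buggy_reverse function and its bug are made up purely for illustration:

```python
# Minimal sketch of skip vs. xfail; buggy_reverse and its bug are hypothetical.
import pytest


def buggy_reverse(s):
    return s  # pretend bug: forgets to actually reverse the string


@pytest.mark.skip(reason="never runs, so it tells you nothing about the bug")
def test_reverse_skip():
    assert buggy_reverse("ab") == "ba"


@pytest.mark.xfail(reason="known bug: reversal not implemented yet")
def test_reverse_xfail():
    # This body actually runs. pytest reports XFAIL while it keeps failing,
    # and XPASS as soon as someone fixes buggy_reverse.
    assert buggy_reverse("ab") == "ba"
```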

00:04:57 The other thing you can know is whether you fixed the bug by accident, or even at all. Right. So generally, by default, what happens is that if the test fails, it's marked as xfail, and if it passes, it's marked as xpass. I always run my test suites with strict on, so that basically says that if a test fails to fail, then that's a failure. So the test run fails.
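A minimal sketch of strict mode; the test here is hypothetical, while strict=True and the xfail_strict ini option are real pytest settings:

```python
# Sketch of strict xfail. The test is hypothetical; strict=True and the
# xfail_strict ini option are real pytest features.
import pytest


@pytest.mark.xfail(strict=True, reason="known bug, tracked in the issue tracker")
def test_known_bug():
    assert 1 + 1 == 3  # while the bug exists, this fails and is reported as XFAIL


# Once the assertion starts passing, strict=True turns the unexpected pass
# (XPASS) into a failed test run, prompting you to delete the marker.
# The same default can be set project-wide in pytest.ini:
#
#   [pytest]
#   xfail_strict = true
```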

00:05:29 Yeah. So that’s confusing to some people.

00:05:33 And I've had to explain it to a lot of people, especially internally, and I explain it as: you don't have to be afraid of the test suite failing. Any failure is just a message to us that we need to pay attention to something. So in the case where we get an xpass and that fails the test suite, that's a good thing, because it tells us that maybe this thing is fixed and we can take the xfail out of the code.

00:06:02 Yeah. I mean, I think people don't like the idea of xfail being strict because they're like, well, if I fix a bug, I don't want my CI to break because the library got better or the application got better.

00:06:21 But I think that is based on the idea that every time your test suite fails, that's like a lot of work, right. You have to fix something in the configuration or hunt down a bug, or it adds all this latency, or it's just something else to do. But in the case of xfail, what it's really saying is the only thing you really have to do is remove the xfail decorator and then it'll start working again. So it's not like it's going to kill your velocity or anything, because you're going to have to fix all these fiddly things with the test suite. I think we all have been in that sort of situation where you have a bunch of fragile tests and they keep breaking and you're like, I'm just spending all my time maintaining these tests. But in the case of xfail, it's like, I've already written regression tests for this bug. And if you accidentally fix the bug, or even if you fix it on purpose, the test suite should start failing. And that's like, oh, check it out. Here are the tests that passed, and then you just remove that xfail decorator. That's all you have to do. And your regression test is in place.

00:07:28 It’s quite nice that someone else has written your test for you. It’s wonderful.

00:07:33 Yeah. And if all of your tests are passing now, you can also pass in --runxfail and it just ignores all those decorators, if you wanted to.

00:07:45 Like, for instance, I do that with some of ours. We've got like two cadences of test suite. We have the same code, but with our nightly tests, I run it with xfail strict, and then basically that's our continuous sort of testing.

00:08:06 And then another one that's a slower cadence, where we're purposely checking every specific version where we've changed a feature or we've changed something specifically, where we know a new thing is in place. On those, I want to know absolutely everything. So we use --runxfail and we just turn all of those xfails off.
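A rough sketch of how two cadences like that could be wired up; the runner script and its --release flag are hypothetical, while pytest.main() and the --runxfail option are real pytest features:

```python
# Hypothetical runner sketch for the two-cadence idea above. The --release
# flag and this script are made up; pytest.main() and --runxfail are real.
import sys

import pytest

if __name__ == "__main__":
    if "--release" in sys.argv:
        # Release-cadence run: ignore every xfail marker and report the raw
        # pass/fail result of each test.
        raise SystemExit(pytest.main(["--runxfail"]))
    # Nightly run: honor the xfail markers; with xfail_strict = true in
    # pytest.ini, an unexpected pass fails the run.
    raise SystemExit(pytest.main([]))
```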

00:08:32 Because there are times where you really do want to know. You can find out anyway, like, even if the xfails are reported. But one of the issues that comes up, and the reason why I fiddle with this a bit, is a lot of CI tools have no idea what these mean. xfail and xpass just get translated into something.

00:08:54 They either get translated into a failure or a pass or a skip.

00:08:59 Do you have any issues with that or with working with CI?

00:09:03 Not really, but I’m also using pytest or something. So it’s not that big of a problem.

00:09:15 And the way I've used xfail, it's mostly like, since I always run it in strict mode, I don't really need to know the difference between xfail and skip and xpass and a failure, because if it's xfailed or if it's skipped, I don't really care, because in both cases it's like, either it's skipped, and that hopefully means, all right, this test wasn't even supposed to pass in the first place, right? It's like, this is a test, but it's a test for something that doesn't apply in this particular condition. Right. It's a test that is only run on Windows, or it's only run in a certain time zone or something like that. Whereas if it's xfail, it's like, this should be passing, so I'm going to run it anyway, but it doesn't pass because there's something wrong with the library and I want to fix it. From the point of view of the version of me who doesn't care about that test, who's just running the entire test suite,

00:10:19 I don't need to make a distinction between those two cases. Right. It's only when I'm adding the test in the first place that I want to make that distinction, or when something about that changes. Right. Like when I fix the bug, I want to be notified immediately. So for me, if it's like a failure, then it's like, oh, something failed, and I'll just look and see what it was, and it's because this thing xpassed.

00:10:44 I suppose that you could make affirmative assertions about... like, if you don't actually care about the difference between xpass and a failure, you could make affirmative assertions about failures in the code itself by being like, with pytest.raises, whatever, AssertionError, ValueError, or something like that, and then put your test in that block.

00:11:08 But I do think the downside to that is that you actually have to muck around with the test itself rather than removing just the decorator, which changes how the result is interpreted.
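A minimal sketch of that pattern, using a hypothetical function and failure mode:

```python
# Sketch of the "assert the known failure" alternative; sort_users and its
# crash are hypothetical.
import pytest


def sort_users(users):
    if not users:
        raise ValueError("known bug: crashes on empty input")  # pretend bug
    return sorted(users)


def test_sort_users_empty_input_crash():
    # Asserts affirmatively that the known failure still happens. Unlike an
    # xfail marker, fixing the bug means editing this test body, not just
    # deleting a decorator.
    with pytest.raises(ValueError):
        sort_users([])
```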

00:11:19 Yeah, and I don't really like using that. And then you can call xfail within the code also, but it behaves differently, so people should watch out for that.

00:11:29 I’m kind of annoyed with that. So the decorator runs everything.

00:11:35 But if you call, like, pytest.xfail from a test, it stops there. It's like an assert; it doesn't continue with the test, and that's annoying to me.

00:11:47 I wish it didn’t do that.

00:11:49 So I would recommend people just forget that you can call xfail from within a test.
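For reference, a small hypothetical sketch of the behavior being described: calling pytest.xfail() imperatively raises immediately, so nothing after the call runs.

```python
# Sketch of imperative pytest.xfail(); the test is hypothetical. The call
# raises right away, so the rest of the test body never executes, which is
# the behavior being complained about above.
import pytest


def test_partially_working_feature():
    assert 1 + 1 == 2                           # this assertion runs
    pytest.xfail("remainder hits a known bug")  # test ends here, marked xfailed
    assert False                                # never reached
```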

00:11:56 Yeah, I think the only time that I've tried to use xfail within a test, I do remember it being mildly irritating. But the one place where I've had trouble introducing xfail is in the setuptools test suite, where essentially, I think, they've marked a bunch of stuff as xfail or skip, and it's not consistently used, whether it's like, this is a bug that we have tests for that aren't working, or whether it's like, this doesn't work on this particular platform. But I tried to move it over to everything-xfail-is-strict and then just go through and try to update whatever I find to be xfail.

00:12:46 But the problem is, setuptools has this very large matrix of supported platforms and combinations of supported platforms, and some of the bugs are only exhibited in very weird niche edge cases that are really hard to express as a single conditional in the xfail decorator. So what ends up happening is you end up getting a lot of xpasses on, like, Windows with this version of MSVC or whatever it's called, but not with this other version.

00:13:25 And then it just ends up being like... or you have something where the bug is just flaky. It works sometimes, and other times it doesn't work. So that's one case where it's really hard to make xfail strict-only. I could just mark those tests explicitly as strict=False and make the default strict=True.
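A sketch of those per-test escape hatches; the platform condition and both tests are made up for illustration:

```python
# Sketch of per-test strictness; the condition and tests are hypothetical.
import sys

import pytest


@pytest.mark.xfail(
    sys.platform == "win32",        # only expected to fail on Windows
    reason="hypothetical Windows-only bug",
    strict=True,                    # an unexpected pass there should fail CI
)
def test_windows_only_edge_case():
    ...                             # body elided


@pytest.mark.xfail(
    reason="flaky: fails only on some compiler/platform combinations",
    strict=False,                   # XPASS is tolerated for this one test
)
def test_flaky_platform_combination():
    ...                             # body elided
```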

00:13:46 But I just remember it being sort of pervasive. And the reason I brought this up in relation to calling xfail inside of the test is that a similar place I've seen it is when I'm using Hypothesis tests. Right. And I have a whole bunch of cases that pass and then a couple of bugs that show up only in certain parts of the test input space, and you want to sort of conditionally xfail based on that.

00:14:19 Okay, so I have this function and it works well for everything that’s a positive number and zero. But if it’s a negative number, I’ve got a bug in it and I’m not ready to fix the bug right now.

00:14:33 So then I can either use Hypothesis's assume, I can call the assume function and then just have Hypothesis never generate those inputs, or just have Hypothesis skip those assumptions, or rather those inputs. But that is sort of like skipping the test instead of using xfail on it. So instead I'll sort of say conditionally, like, all right, if Hypothesis gave me this thing, xfail it. And I don't think Hypothesis deals with that super well.
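A sketch of the two options being described; buggy_abs and its negative-number bug are hypothetical, and, as noted in the conversation, Hypothesis does not handle the conditional xfail pattern gracefully:

```python
# Sketch of the two options above. buggy_abs and its bug are hypothetical;
# note the caveat that Hypothesis does not deal well with the
# conditional-xfail pattern.
import pytest
from hypothesis import assume, given, strategies as st


def buggy_abs(x):
    return x  # pretend bug: wrong answer for negative numbers


@given(st.integers())
def test_abs_with_assume(x):
    assume(x >= 0)  # option 1: never generate the inputs that hit the bug
    assert buggy_abs(x) >= 0


@given(st.integers())
def test_abs_with_conditional_xfail(x):
    if x < 0:
        pytest.xfail("known bug for negative inputs")  # option 2
    assert buggy_abs(x) >= 0
```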

00:15:08 And I've seen some of the problems you've seen, where it stops running the test or whatever. I don't remember the exact problems with it. I think it might be better than the alternatives, but it could use a little bit of love. And hopefully this podcast and the blog post will convince people to use xfail enough that it will have a higher profile in both Hypothesis and pytest.

00:15:36 Yeah, I hope so. I hope people would use it a lot, but I do find that with using xfail, at first I liked to use it for my own personal stuff, but for group projects it was just more work than it was worth to explain it to everybody.

00:15:54 And also, I personally try not to put it on things that are intended to be fixed soon, because it's okay just to see those failures until we fix the bug. So knowing when to mark something as xfail, it's kind of a project-by-project sort of thing that you need to know. I definitely understand, like, an open source project where you don't want to fail the test suite, but you'd like to have the test in place. It makes sense to leave some xfails there, because somebody else can work on it and try to fix those. And it's nice to have the test already there instead of having to... well, where else would you put it? If you didn't put it in the suite and mark it as xfail, where would you put it? You'd have to put it on a branch or something.

00:16:42 I guess in the past, what I was doing before was that people would raise an issue in the issue tracker, and then they would either have a nice minimal reproducer, or I would reproduce the behavior and then minimize that to a very small reproducer, and be like, okay, if this chunk of code were to pass, then that would be a fix for the bug. Here's a minimal reproducer, right? And theoretically, what you could do is you just copy and paste that into the test suite, add an assertion to it, and then that's your test. So now I think what I'll do is I'll usually offer to the person and say, if you would like to add an xfailing test to the test suite, please do. Because I'm always looking for ways to recognize people who contribute good issues, because I think people see 'I made a pull request and it got merged' as 'I contributed to the library,' whereas 'I told someone about a problem' is not seen as a contribution in the same way that a pull request is, because it doesn't involve writing code.

00:18:07 Yeah, writing an xfailing test is writing code.

00:18:10 Yeah, exactly.

00:18:13 That's why I like to say, okay, you don't need to make an xfailing test. From my point of view, you've still made a valuable contribution to this library or this program by just giving me a good bug report, making it possible for me to reproduce it, creating a public artifact that people can find to know, okay, this is or is not the intended behavior of the library. All of those things are very valuable to me. But if you want to see your name on a commit, or you just don't mind and you like to be helpful, then I'm saying, okay, well, why don't you take your minimal reproducer, throw it in the test suite? I am acknowledging as the maintainer that this is a bug that we want to fix. So if you add the test and you mark it as xfail, we'll merge it, and then hopefully either it will get fixed on purpose and someone will remove that xfail decorator, or it'll get fixed accidentally. And then we will have a regression test in place immediately, because this bug doesn't exist anymore after a refactor of X, Y, and Z.
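As a sketch, a contributed xfailing regression test might look something like this; the function, its bug, and the issue reference are placeholders:

```python
# Hypothetical sketch of a contributed xfailing regression test. The
# function, its bug, and the issue reference are placeholders.
import pytest


def expand_two_digit_year(s):
    return 2000 + int(s)  # pretend bug: "99" becomes 2099 instead of 1999


@pytest.mark.xfail(reason="known bug, see the linked issue", strict=True)
def test_two_digit_year_pivot():
    # The reporter's minimal reproducer, pasted in with an assertion added.
    assert expand_two_digit_year("99") == 1999
```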

00:19:20 So you brought up setuptools and a couple of other projects. How many open source projects do you work on? Or do you keep track?

00:19:31 On average? It's like closer to zero these days, because I have young children and I don't have very much time to work on these things. But I think I used to be very active maintaining setuptools and dateutil, and I'm still burst-mode active in maintaining CPython as a core dev, mainly working on datetime and zoneinfo.

00:20:04 And then there's a handful of other projects that I either have a commit bit on from some small amount of work that I did at some point, or I just know the code base enough that I can make a couple of pull requests, and the maintainers know me well enough that they're going to look at them, that we have a rapport, enough that I can get things merged probably easier than your random person off the street. Which is maybe not fair, but maintainer time is limited, and it's always easier to be like, oh, this guy has made ten really high quality pull requests. I'll review his pull request because it's probably not going to be a ton of work to get it merged.

00:20:52 Yeah, okay, cool.

00:20:56 I have a fairly decent track record of having defect reports get fixed or get a snarky reply because people already know my personality and know that I’m not going to be offended, I think.

00:21:14 But I think, partly because I was just thinking about that, being able to write a good defect report... So you said, like, a minimal reproducer, code to reproduce the defect, and if you can write that as a test, it's great. One of the things is that instantly somebody can work on that problem. Like, if you write an xfailing test as part of a defect report, that would be cool, because the person that wants to fix it doesn't have to go out and write a test or figure out how to reproduce it. They can just run your test, and then they see the failure, and they can start working on it. That would be cool. But the same sort of thing applies to that minimal reproducer process, being able to come up with a small example, because people usually don't run into the problem because they have a small example. They have, like, a big thing that they're working on and they run into a problem, and then they take that extra time and try to boil it down into a small example of how this fails.

00:22:25 It’s very welcoming, right?

00:22:29 Yeah.

00:22:31 Matt Rocklin has a really good blog post on this called Craft Minimal Bug Reports, and I actually linked to it from my blog post on how and why I use pytest's xfail. And it's basically the same thing as Stack Overflow says as well. Right. Which is that the more work you can do to make it easy for a maintainer to just reproduce your problem, see exactly what the problem is, and then get as close as possible to the source of the problem, the better. Right. So in some cases, I know that there have been a couple of bugs in Pandas where I had a minimal reproducer, and I also knew it was a regression, because I knew that something used to work, and I actually cloned Pandas, and then I did a git bisect where I found the exact commit where the problem broke. And then at that point, the question was, do I make a pull request to fix it, or do I make an issue? If it's something very simple, you can make a pull request. But I think at that point I was like, well, this is in some part of the code base that I've never touched before, and I don't know what trade-offs need to be made. So I just wrote up an issue that's like, here's a minimal reproducer, here's the commit that it broke on.

00:23:54 And here is what I think you should do. That's a lot of effort to go through, but I think the key is that maintainers will see what effort you put in, and you should sort of put in as much effort as you feel comfortable putting in, and just try and estimate where the trade-off is. Like, all right, someone who knows this library really well will be able to do the next part of fixing the bug in ten minutes, even though it would take me two hours. So maybe that's the point where you hand it off as an issue. Right. Whereas if it's like, anyone can do this, or spend 30 seconds doing it, or spend five minutes, maybe I'll spend ten minutes.

00:24:41 Or maybe you say, I really want this bug to be fixed.

00:24:45 So I'll spend an extra 20 minutes writing a really nice bug report, when I know that the maintainer could spend five minutes finding out the same exact information.

00:25:02 And you can also keep in mind that there's a multiplier there. Right. Because there are probably at least 20 users who are writing bug reports for every one maintainer for some of these projects. Right.

00:25:19 And like you said, a lot of times it's volunteer time, and people have kids and lives and jobs. So if you can make it to the point where... especially if it's important for you to get it fixed, if it's stopping your progress on something and it's a problem, you kind of want to get it fixed, right? So if you can make it so that the maintainer can look at it and go, oh, I know exactly where this is. Or, oops, I forgot something, I intended to do something different, or whatever.

00:25:55 If you can boil it down to just an hour or so of work on their part, or less, it's a great benefit. You'll probably get it fixed really quickly.

00:26:05 Yeah.

00:26:06 I think my hope is that, the way I've got this blog post structured, it's aimed at both people who want to make these sorts of bug reports and people who want to accept PRs that have xfail. I mean, I guess it's mainly more like why you, as a maintainer of a code base, should want xfailing tests in your code base.

00:26:28 But I think it'll be useful on both sides. Right? Because if you're a maintainer and you're convinced of this, you can say to people who come to your library with bug reports, hey, do you want to make a PR? You can get a commit with your name on it, making this xfailing test.

00:26:46 And if you are the kind of person who's writing these tests, I suspect that, on average, if you start with a PR instead of a bug report or something, if you make a PR that has an xfailing test in it, most people are going to be like, what is this? Why are you adding broken tests to my test suite? Of course I'm not going to merge this.

00:27:06 Okay, so the better thing would be to start with a defect report and offer to make a test, maybe possibly in that defect report, or at least in the conversation, if somebody admits that, yeah, that probably shouldn't happen.

00:27:24 When should you propose doing an xfailing test?

00:27:27 I mean, if it's easy enough, you really can just open the pull request with the xfailing test and be like, you don't have to merge this, but I just thought I would use it as a demonstration so that you can look at it on CI or whatever.

00:27:38 So you're going to link these, too? You're going to link the defect report and the pull request together.

00:27:44 Yeah, exactly.

00:27:46 You can make it right away, or you can just start with, like, putting the xfail test in the issue, particularly if you don't care about seeing it run on CI right away. And then you say, would you like me to add this to the test suite? I'm happy to do that.

00:28:03 And then you can also link to my blog post and say.

00:28:06 Like, Paul says to do this.

00:28:08 Yeah. Or just, I'm convinced by what's in here. And honestly, that's half the reason I blog about these things: I end up spending a lot of time explaining to people, oh, you shouldn't be invoking setup.py directly, or subtests are a good thing, or this is how datetime arithmetic works, right? And after you explain the same thing two times, three times, four times, you start kind of wanting a canonical document where you can be like, here's something that will take you 15 minutes to read, and it's the general argument for it.

00:28:45 Well, I appreciate this, because I talk about it, but it seems a little weird for me at work to say, you should go read my blog post about why we should do this. But I have no problem with linking to yours, even at work, and saying, oh, this is why we're xfailing things. Go read Paul's blog post. It's good. So thank you.

00:29:10 Well, I mean, I also don't want to... people like to be like, oh, you're a setuptools maintainer or a CPython core dev.

00:29:20 So we should listen to you. And I want to say, look, I'm just, like, a guy. And that's one reason why I write these sorts of blog posts: I try and put my reasoning there so that, independent of whether or not I am any sort of authority... I mean, you can certainly use it as a heuristic as to whether or not to spend ten or 15 minutes reading my blog post, but you should also evaluate it for yourself. This is just something I was convinced of, and I want to lay out the argument in such a way that people understand it, they have it framed well, and then they can make a decision. Right. They can say, well, we don't really want to put 100 xfailing tests in our test suite every single time.

00:30:04 We don’t want to spend the CI resources on it or whatever. And that’s a choice that you can make. And then other people will say, oh, this would actually be very useful, and it’ll be a great way to engage our contributors. And I think both of these outcomes are fine as long as you have made them.

00:30:24 You’ve been exposed to all the best arguments.

00:30:27 It also doesn't have to be a large thing. So we've talked about it for a long time. But one of the projects I'm working on, we're using xfail. We're using it, like I've talked about before, with strict=true, so that we know when things start passing.

00:30:50 And then at times when we're close to releasing, we just turn off all xfails. But it's a small number.

00:31:02 I think on one of the projects I'm on, there are currently three tests that are xfailing out of thousands. We've got thousands of test cases. There are three that we're watching, and that's really what it is. These are ones that we're watching. We're watching to see if they're fixed, and we want to get them fixed quickly.

00:31:22 If it's something that we're not planning on fixing, because we've just made a decision that we're changing it, or it's not important because of financial reasons, or it's too much of a corner case, then we'll probably modify the test suite to define the behavior differently, and we just know that this behaves this way now. I don't know if I'd be comfortable with a code base with hundreds of xfails, but I don't know.

00:31:56 Have you used projects with, like, lots of xfails?

00:32:03 I'm not sure. I mean, I think anything that's going to accumulate that many xfails is probably going to be distributed enough that it's...

00:32:13 Right. So the place where I could imagine seeing hundreds of xfails, right, is in, like, the CPython standard library, or even like Pandas or NumPy or something, where you get these bugs and you have multiple maintainers who are all responsible for their own various individual areas, and there's not necessarily continuity of maintainership.

00:32:38 There are CPython bugs that I've seen fixed, a lot of them, where they get reported and then they get fixed six, eight, ten years later.

00:32:48 And yeah, having xfails for all those bugs would probably mean carrying around a lot of CI resources spent on something that is just really not going to accidentally fix itself.

00:33:04 But I've also seen plenty of times where it's like, hey, I think we actually fixed this when we switched from Python 2.7 to Python 3.

00:33:13 We just looked in, we saw that this was a bug. We didn’t investigate what the cause was. Turns out the cause was something in a dependency, and then the dependency got fixed.

00:33:24 These sorts of things.

00:33:27 You can actually end up spending a bunch of time when you go to investigate the bug only to find out that it’s not reproducible anymore.

00:33:36 Also, large projects like CPython, for instance, or others like Pandas.

00:33:42 Like you said, there's a lot of maintainers, but there are only some of them that might be willing to touch any part of the code. But like, for instance, you said on CPython you kind of focus on dateutil or datetime. Yeah.

00:34:05 And that's true for a lot of projects, where the area somebody is focused on is a narrow, focused area. So they're going to keep track of the kind of bugs that are in their area however they want to. So they're kind of like little projects within big projects.

00:34:23 Yeah. And I've seen a CPython situation recently where there was something in the test suite itself of datetime.

00:34:35 I think there was something in importlib... not importlib, but in the test support for importing modules. It comes up when you have a C module and a pure Python module: sometimes you'll want to use one or the other, and you have to mess around with the global state that keeps track of which modules are imported. Like, you have to delete certain modules and then try and clean up after them.

00:35:05 And there was some issue with how certain modules were getting frozen, or how they were getting deleted and re-added.

00:35:13 And it came up in a couple of different... it had multiple manifestations throughout the code base, and then someone fixed it, and then they went through. And then I was like, well, I think this also closes this other issue.

00:35:29 But if I had had xfailing tests that were like, this is supposed to be working, but it's not...

00:35:36 That person, immediately when they went to go fix their own issue, would have been like, oh, this actually closes this other issue as well. And then they would close both issues, remove the xfailing test, and it turns into a regression test, and I'm never going to see that regression.

00:35:49 That’s nice actually. Okay. So I take that back.

00:35:52 Put in xfails as much as you can.

00:35:55 It would be nice.

00:35:56 Yeah. The main question is resources, right? Like, if these tests are very resource-intensive, or they're a substantial portion of your test suite and the test suite is taking up a lot of CI time, then you may want to make some decisions. I suspect that you will not get to that point. I suspect that even someone who wholeheartedly adopts this method is not going to see more than 5% or 10% of their test suite being used on xfail.

00:36:30 Yeah, I agree, and it's up to you. But one of the things that xfails do is they make you think about your test suite more, because it's a dynamic, living thing. It's not a static thing where you write a test and forget about it.

00:36:44 And also you have to talk about it more. People have to understand the test suite and talk about it more, which is a good thing.

00:36:52 So that’s good.

00:36:54 I've got tons of questions for you, so I hope you're willing to come back. You're going to write about the workflow, so we definitely need to talk about that sometime.

00:37:03 And I'd also like to talk at some point about your experience as a core contributor to CPython, because that sounds interesting.

00:37:12 But thanks so much Paul for coming on the show today and talking about this.

00:37:17 Yeah, no problem.

00:37:19 I definitely enjoyed it, and I think I have a couple more xfail blog posts coming out in the near future, and I'm always happy to talk about that kind of thing.

00:37:29 Talk to you next time.

00:37:31 Thanks.

00:37:37 Thank you, Paul. Always fun to talk testing with you. Thank you, PyCharm, for sponsoring this episode. Visit testandcode.com/pycharm for a four-month free trial of PyCharm Pro. Save time, use PyCharm. Thank you, Patreon supporters. Join them at testandcode.com/support. Seriously, both longtime supporters and new supporters through Patreon help to let me know that this podcast is worth continuing, and I really appreciate it.

00:38:02 Thanks.

00:38:03 Thank you. All of those links, including links to Paul's blog post and previous episodes covering xfail, are in the show notes that can be found at testandcode.com. That's all for now. Now go out and test something, and if it fails, consider using xfail.