pytest 6 is out. Anthony Sottile joins the show to discuss features, improvements, documentation updates and more.


Transcript for episode 125 of the Test & Code Podcast

This transcript starts as an auto generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 pytest 6 is out, specifically 6.0.1 as of July 31. I’ve been using it for personal projects since the release candidate in early July, and there’s a lot to be excited about. And I’m pleased to have Anthony Sottile here to talk about pytest 6 with me. So thank you, Anthony. This episode is sponsored by Datadog and by listeners like you that support the show through Patreon. Thank you.

00:00:39 Welcome to Test & Code, because software engineering should include more testing. Today on the show: Anthony Sottile. I was looking back.

00:00:54 So on episode 82, we had you on to talk about lots of pytest features; episode 90, we talked about dynamic scope fixtures; and episode 104, the top 28 pytest plugins. So that was July and October of 2019, and March of 2020, which was this year.

00:01:21 Feels like it was a century ago.

00:01:23 Well, I may have recorded it earlier. That’s when it was released.

00:01:29 That was like right before the apocalypse.

00:01:33 Yeah, it does seem before the four marches.

00:01:38 It seems like a long time ago. Before the four marches.

00:01:46 I heard "Blursday." So every day is Blursday now.

00:01:52 Sure.

00:01:54 So I can’t tell from the lighting.

00:01:57 You had purple hair on Twitter recently. Was that just a temporary thing?

00:02:02 I do currently still have purple hair.

00:02:04 Okay.

00:02:05 My camera doesn’t pick it up very well, but it’s very purple in real life.

00:02:09 Did you do that yourself, or did you go out and have it done?

00:02:15 It was part of one of the subscription incentives for a stream that I did. I tried to do a 24-hour stream, which, well, I was a little overly ambitious about that. We got, I think, around 16 hours of content from that. But one of the things we did was we dyed my hair. I also got my nails done.

00:02:35 I’ve taken the nail Polish off, but we also did Korean face masks, which was fun, but it was a wild night.

00:02:43 So you said "we." Was there somebody else doing this with you?

00:02:46 Yeah, my girlfriend was here with me, but also Chat, the collective that is Twitch chat.

00:02:53 Okay, so it’s been a while since people, I don’t know, maybe haven’t heard those past episodes or maybe they have.

00:03:02 Let’s do a quick introduction for you. You’re one of the pytest core people, but you also do Twitch, so you do a lot of stuff.

00:03:12 So what do you do on Twitch?

00:03:17 My Twitch channel is anthonywritescode, and I spend a lot of time working on basically Python software, but also related tooling as well. So I maintain a bunch of open source things; pytest is one of them, but I also maintain tox, flake8, deadsnakes. I created pre-commit. I work on pyflakes. I don’t know, I have like 50 other random tools that I also work on, but I usually just develop Python code and interact with chat and try to provide an educational experience.

00:03:51 When you say you started pre-commit, is there somebody else mostly maintaining it now, or no?

00:03:57 So I’m the primary maintainer. There’s a couple of other people that have commit access, but it’s basically just me.

00:04:06 There’s also Chris and Max and Pen who have access.

00:04:11 I can’t believe how large a percentage of the tools I use are things that you work on.

00:04:21 Yeah. I mean, it kind of works out that way. It ends up being like I use a tool and I send a bunch of fixes and improvements and maintainers are like, oh, well, do you want to help maintain this? I’m really bad at saying no. So I end up on the list for a lot of things.

00:04:38 Yeah.

00:04:39 Okay. Well, one of those things is pytest.

00:04:44 So that’s what we’re going to talk about today, because I’m super excited that pytest 6 is out, and I kind of like the way they did it this time. So in early July, they did, I think it was actually July 4th or something. Maybe it wasn’t. Was it July 4th?

00:05:00 It was right around July 4th. I was actually away on vacation, so I missed the release candidate release.

00:05:06 But they did a 6.0 release candidate, which actually caused a little bit of confusion, because if you did an update you didn’t get it, but you could still specify the release candidate, and there’s a couple of ways you can get it. But anyway, it was available for people to try, and that’s kind of cool.

00:05:26 Now we have 6.0.1 as of July 31. That’s the most recent.

00:05:32 And like I said, I’ve been using it since early July, for the most part. So one of the things I wanted to tell people was how to upgrade, and what to do if they’re using 5.4 or earlier. It looks like the last one was, how funny, it’s just in backwards order.

00:05:56 I think at work we were using 5.4.1 at the time.

00:06:02 So what I did was I ran my test suite just to verify that it was still passing. And then I did an update to 6, the normal pip install with capital U to update pytest. Then I ran it again and made sure that everything was still working, right?

00:06:23 One of the things I want to tell people about is that there’s a couple of cool flags that some people don’t run with, but I like to. One is -W error, which, I think I got that right, turns warnings into errors.

00:06:37 Yep, that’s right.
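For reference, a rough sketch of that upgrade flow; the version numbers are just illustrative for the time of the episode:

```
pytest                  # confirm the suite is green on the current 5.4.x first
pip install -U pytest   # upgrade to the latest release (6.0.1 at the time)
pytest -W error         # re-run with warnings promoted to errors
```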

00:06:39 And then, and we’ll talk about this later, there are some things that have been deprecated that are going to come out in 6.1. So I think it is important for people to update and make sure that they’ve got all that stuff figured out before they update to 6.1. I think this was a nice, sane, gradual rollout of the pytest 6 features and deprecations.

00:07:12 Yeah. I actually think it went really smoothly. And we’re actually adjusting our release process because the release candidate stuff went so well. We found a bunch of issues from people just trying out the release candidate before it launched. And so I think next time we do a large set of breaking changes like this, to be fair, there weren’t actually that many breaking changes, but the next time we do a release like this, we’re definitely going to go through release candidates as well. We might even do them for minor releases as well. It actually wasn’t that much overhead to make it happen.

00:07:46 And the really cool thing about most of the Python ecosystem is that you can release a release candidate, and most tools will not pick it up automatically. You have to manually opt into upgrading to it, which again, is why there were so many options to upgrade to it.

00:08:03 Each tool has its own way to opt into pre-releases.

00:08:09 But once you run the right command, you can test it out early, and often, I guess.

00:08:15 Yeah.
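For anyone who wants to try the next release candidate when one exists, a couple of ways to opt in with pip; the rc version number here is just an example:

```
pip install --upgrade --pre pytest   # allow pip to consider pre-releases
pip install pytest==6.0.0rc1         # or pin a specific release candidate
```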

00:08:18 We’ll put a link in the show notes for the release notes. The release notes, which I do have pulled up, kind of go in order. I’m going to scroll down to the release candidate release notes; that’s where the bulk of these changes are.

00:08:38 They do breaking changes and then deprecations and then features and improvements, bug fixes, improved documentation, and trivial internal changes.

00:08:53 Now, I know that a lot of people, well, you totally should skim through the breaking changes just to see if they break anything for you, but I think: just run it, and focus on the things that break.

00:09:10 It isn’t necessarily things that broke. Breaking changes are really just changes in the API that may affect you, right?

00:09:16 Yeah.

00:09:17 Okay.

00:09:18 In most cases, our breaking changes are more for plugin authors than they are for end users.

00:09:25 I think your proposal of changing the order of these actually makes a lot of sense, and we’ll probably discuss this internally.

00:09:32 Oh, I was just thinking for this episode, I’d rather go through new features and improvements first, because that’s what I’m excited about.

00:09:39 Oh, yeah, perfect. Let’s do that.

00:09:41 New features, improvements, and then go through some of the rest of them.

00:09:44 Yeah. The other thing that we’re looking for feedback on is that we split the change log between 6.0.0 and the release candidate so that it was easier to see which changes happened in which exact version.

00:09:56 But I realized that a lot of people probably don’t actually care that there was a release candidate, so we might squash those two together next time.

00:10:04 We didn’t really decide that, and this was our first release candidate. So if people have feedback on that, let us know in the pytest issue tracker, and we’ll probably link the particular issue as well.

00:10:14 Yeah, I don’t know if it affects anybody except for serious pytest nerds like me, but when Bruno announces, or Bruno or somebody else announces, often Bruno, that there’s a new release out, he links to the change log. But for some reason, about half the time the change log doesn’t have the new version in it yet.

00:10:41 Yeah, there’s a race condition in how Read the Docs builds our documentation, and so sometimes it’s up to date when we press the send button on the email, but sometimes it’s not, and sometimes they build it and then delete it. So it’s a little bit fiddly, and there’s not much we can do there.

00:11:01 I didn’t really know what that was; I just brought it up because I was curious about it. So the docs are being generated by Read the Docs, and it just kind of depends on the workflow there.

00:11:12 Yeah, we basically have a step-by-step process. Well, actually the step-by-step thing is mostly going away; we’ve mostly automated the parts, but the email part, I believe, is still manual. And so depending on the timing, it’ll either be up to date or not by that point.

00:11:26 Okay, so basically, for people like me who’ve been wondering: don’t freak out.

00:11:33 The change log is probably going to be there tomorrow, so just wait. Yeah.

00:11:44 Thank you, Datadog, for sponsoring this episode. Are you having trouble visualizing bottlenecks and latency in your apps and not sure where the issue is coming from or how to solve it? With Datadog’s end-to-end monitoring platform, you can use their customizable built-in dashboards to collect metrics and visualize app performance in real time. Datadog automatically correlates logs and traces at the level of individual requests, allowing you to quickly troubleshoot your Python applications. Plus, their service map automatically plots the flow of requests across your app architecture so you can understand dependencies and proactively monitor the performance of your apps. Start tracking the performance of your apps: sign up for free and install the agent, and Datadog will send you a free T-shirt. To get started, visit testandcode.com/datadog. One of the things that changed that I’m really excited about, although it’s a tiny little feature.

00:12:41 It’s that there was something going on if you’re testing plugins. I have a section of the book talking about building a plugin.

00:12:52 And one of the things you do, and you can use this for other stuff too, and I do use it for non-plugin stuff, but a plugin author often wants to test their plugin by running pytest and checking the output, and that process is often with testdir. I don’t know if you can do it another way, but at least there’s the testdir plugin.

00:13:17 The testdir plugin is the best way to approach that.

00:13:19 Okay, the testdir fixture, and then you call runpytest, and there was a thing in there that would issue a deprecation warning.
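As a rough sketch of what that workflow looks like; this uses the standard testdir API, and the plugin test contents here are made up:

```python
# conftest.py of the plugin's own test suite: enables the testdir fixture
pytest_plugins = "pytester"


# test_my_plugin.py: a minimal example of driving pytest from a test
def test_plugin_run(testdir):
    # write a throwaway test file into a temporary directory
    testdir.makepyfile(
        """
        def test_passes():
            assert 1 + 1 == 2
        """
    )
    # run pytest against it and check the captured outcome
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
```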

00:13:29 Oh, yeah, that warning was so annoying.

00:13:33 So the workaround was to hide that error or that warning, but it seemed kind of hacky.

00:13:42 But anyway, people were super messed up by that warning. So what was triggering it? pytest internally was looking at all of the attributes of an object when it didn’t necessarily need to. It was part of test discovery, and it was pytest triggering the pytest warning. So it wasn’t actually the fault of plugin authors.

00:14:03 Unfortunately, there wasn’t a good way to fix that without rewriting a bunch of the internals and so the warning just kind of will go away when we delete that attribute.

00:14:14 But the reason it stopped in 6.0 is that the warning has been turned into an error, and pytest is ignoring that error internally now.

00:14:22 Oh, really?

00:14:23 Yeah.

00:14:24 Okay, it’s completely gone in 6.1. You can actually still see the warning if you downgrade the error to a warning, but it’ll go away completely in 6.1.

00:14:33 Plugin authors won’t have to do anything for that.

00:14:36 Okay. But in the meantime, there’s enough of a workaround in 6.0 that I don’t have to worry about it anymore, right? Definitely. Okay, cool.

00:14:49 But I guess if you turn warnings into errors, it still shows up, or is it the other way around?

00:14:55 The other way around. So if you downgrade the errors to warnings, it’ll show up.

00:15:00 Okay.

00:15:03 You make it less severe and now you notice it.

00:15:06 But it’s like, do I really need to teach people about this? Because it’s such an inside-baseball thing to have to talk about, just to tell people how to test a plugin.

00:15:22 But anyway, this is an unfortunate thing.

00:15:25 I’m actually not sure how it slipped in, because our tests should have caught it. This was one of these cases where the warning stuff is a little bit fuzzy, and so it got into pytest and got released without anyone noticing that this warning was suddenly triggering for all the plugins. Next time, I’m going to try and do this a little bit better.

00:15:44 Probably nobody else cares about this, but was it because of some of the reworking of the TerminalWriter or TerminalReporter, all of that, do you know?

00:15:59 So we did do some refactoring of that, and that caused us to notice that one of the attributes was just not really used, or not used in a way that was sound. And so we deprecated an option in the 5.x series; I think it was 5.1 when it got deprecated.

00:16:16 But yeah, it’s an internal thing that shouldn’t have ever been exposed publicly.

00:16:22 Okay, well, I’m glad that at least there’s a workaround that works. Now on to the more exciting stuff: some new features.

00:16:34 There’s a whole bunch of new features. I just pulled out a few that I’m kind of excited about.

00:16:42 I haven’t used this yet, but. Well, I guess I’ve tried it.

00:16:46 One of them is that pytest now supports pyproject.toml for configuration files. So the normal way for people, if you just have a bunch of tests and that’s really it, it’s not part of a package or something. For instance, at work we’re using pytest to test instruments, so we don’t have a package there or any other stuff. It’s just a directory of tests we’re running.

00:17:10 So pytest.ini is, I think, the right way to do that. We could potentially use a pyproject.toml, but you can also use a tox.ini file or setup, I think it’s setup.cfg. I use setup.cfg myself.

00:17:31 The idea is that if you’re already using one of these other configuration files, you can put your pytest configuration in there, right?

00:17:40 Yes.

00:17:41 So instead of making you have a separate ini file or something.

00:17:45 And I like that a lot. Also tox.ini files. I think maybe I already mentioned that, but it’s neat.

00:17:53 pyproject.toml: if you’re using flit, for instance, or Poetry, or one of the others, or black, you now have a pyproject.toml file, and you can use that to configure pytest. The thing I want to warn people about is the TOML syntax.

00:18:12 It looks similar, but it is slightly different than the ini file syntax. So just look that up.

00:18:19 For the most part, I just make sure that I use quotation marks instead of leaving the strings as bare strings.

00:18:27 Yeah, because TOML has actual types, and you have to quote some strings there.

00:18:32 Yeah, but the documentation shows it’s pretty easy.
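A small sketch of what that looks like in pyproject.toml, with strings quoted the way TOML requires; the option values here are just examples:

```toml
# pyproject.toml -- pytest 6 reads the [tool.pytest.ini_options] table
[tool.pytest.ini_options]
addopts = "-W error"
testpaths = ["tests"]
```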

00:18:35 And the other thing to look out for is that the presence of the pyproject.toml file will also change how pip installs your package. So just a little thing to look out for, but usually it’s not a problem.

00:18:47 Yeah, I think I covered that with Brett Cannon when he was on. But the thing that gets me is the placement of where it goes.

00:19:01 For instance, if I’ve got a pytest.ini file, I’m probably going to put that in my test directory.

00:19:09 So that’s going to be one level below where your pyproject.toml is, right?

00:19:17 You can still use a pytest.ini file instead, right?

00:19:20 Yeah, of course.

00:19:22 The other thing is that pyproject.toml usually goes at the root of your project, whereas the pytest.ini file can go basically anywhere you want.

00:19:30 There is a setting in pytest that allows you to set the rootdir, so you could place it in the root of your project and say your test rootdir is some nested thing. And that would get you the equivalent of placing the file in that other nested place.

00:19:46 Okay, so what do we got next?

00:19:50 This is my favorite one that we’re going to talk about.

00:19:54 So pytest now includes inline type annotations and exposes them to user programs.

00:20:01 So we’ve gone through and type annotated all of our functions. Well, I say "we": most of this was Ran, one of the newest core devs, and Ran has put in a monumental amount of work to make this happen. And we’ve actually done a lot of internal cleanup as well, just because we were type annotating our code and noticing, oh well, this case is actually impossible because the types tell us it’s impossible.

00:20:30 But yeah, all of the user facing fixtures are now type annotated.

00:20:35 We’re still working out how to expose these in a nice way such that people can use them without importing underscored names.

00:20:44 But all the types are there, so you can start testing out the APIs and making sure that your particular Python type checker is happy with this. I use mypy. There’s also the one from Google, the one from Microsoft, the other one from Microsoft, the one from Facebook.

00:21:03 There’s a whole bunch of implementations of type checkers, but I think this is a huge step in the right direction and sets a really good precedent for the rest of the Python community to make type annotations part of the API and give people a good way to start type checking their code.

00:21:20 So what does this give us that we didn’t have before then?

00:21:24 So before, when you used pytest code and you were either writing a plugin or writing tests or doing anything not super simple, there was no way to do static validation of: does this attribute exist? Am I getting the right types from this? Is my code type-sound when dealing with pytest? Because most type checkers would just assume that all the functions were Any, which has all attributes and is an instance of all types; it’s called the top type and the bottom type.

00:21:58 I’m out of my depth, but Any is basically like a squishy object that means anything.

00:22:04 But now that we have actual types exposed, you can do static validation of am I calling this correctly? Is this the actual API that’s expected here? Am I using the objects correctly and you can do that all statically before you actually run your test suite?
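As a small sketch of what that enables in test code. Note that at 6.0 the classes still live under the underscored _pytest package, so the import location here is an assumption that may change as the names get exposed publicly:

```python
# a typed test that mypy (or another checker) can validate statically
from _pytest.monkeypatch import MonkeyPatch


def test_reads_env(monkeypatch: MonkeyPatch) -> None:
    # with the annotation in place, a checker can flag a bad call such as
    # monkeypatch.setenv(123, "x") before the suite is ever run
    monkeypatch.setenv("APP_ENV", "test")
```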

00:22:21 Now, and I go back and forth on this, but I would argue that it’s actually less useful to type annotate a test suite, because you’re just going to be running it, and if it fails because of a type error, that’s still going to be a test failure. It’s not going to be a production issue where, oh, I accidentally misused this object. But there are cases where typing your tests will actually improve your test suite. One example that I found is if you’re calling a function in user-space code that takes a particular type, but your test is always sending None.

00:22:58 You’re not actually testing a real scenario that would happen.

00:23:01 It’s often good to notice that in your test and adjust that.

00:23:05 But I think the biggest one for actual consumers here is going to be for plugins using pytest primitives and they’ll be able to do better validation that they’re using the objects correctly.

00:23:18 Yeah.

00:23:19 Okay.

00:23:20 Also, just for plugin authors, sometimes the documentation isn’t enough and you’ve got to look at the source code.

00:23:30 Yeah, for sure.

00:23:31 And so the source code is going to be more verbose because it’s got the types in there and that’s nice.

00:23:37 And you might not have to dig as far.

00:23:40 Another cool thing that you’ll get out of this, speaking of having to dig into documentation, is if you’re working with an IDE like PyCharm, or VS Code to some extent, it will be able to pick up these type annotations through the language server, and you’ll be able to get really good tab completion and have the attributes at your fingertips without having to dive into the signature or the code or the docs.

00:24:05 Oh, that’s cool.

00:24:07 Yeah, very cool.

00:24:13 Yeah. Because PyCharm is really working on, and I assume VS Code is also, but I pay more attention to PyCharm lately, trying to support fixtures and everything. So with the built-in fixtures, hopefully it’ll help you be able to use them better.

00:24:31 Yeah. Pycharm also has their own type checker, which is great too.

00:24:36 Well, I guess, you know, you’ve made it as a company if you’re writing your own Python type checker.

00:24:43 True, it’s true.

00:24:45 Now, the next one. This was funny when it got announced, because there are some people that are not that excited about it.

00:24:52 There’s a couple of new command line flags: --no-header and --no-summary. So when you run pytest, normally you get some stuff at the top to tell you what pytest version and what plugins you’re using and things like that, and then at the bottom there’s a summary report.

00:25:13 You can turn those off now. What’s the gripe with having that ability? The downside is that if people use it all the time, sticking it in the ini file or something like that, then when you try to submit a defect report or something, nobody can see what’s wrong.

00:25:39 Yeah. It’s going to be harder for us to see what’s going on if people submit bug reports with no header, for instance.

00:25:45 Yeah. The reason I’m excited about it is for people that write about pytest.

00:25:52 I’ve had to write my own thing, because if I’m showing a whole bunch of examples of test runs, having that same information repeated over and over again just takes up space on the page.

00:26:05 So in the past I’ve had a script that would go through all the markdown files that had code samples, code runs, in there and strip all that stuff out, and now I don’t have to do that. I can just stick this flag somewhere and it will be there.
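For that kind of documentation workflow, the flags can just go on the command line; keeping them out of the ini file means bug reports still show the header:

```
pytest --no-header     # drop the version/plugins block at the top
pytest --no-summary    # drop the short test summary section at the bottom
```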

00:26:23 Yeah. I was actually kind of excited about --no-summary myself. There was kind of a way to simulate it in old pytest by passing a particular -r flag.

00:26:33 But this is a nice concise way to just be like, no, I don’t ever want the summary. Just like, leave it out for me, please.

00:26:41 Yeah.

00:26:43 I was excited about the no header; the no summary, I didn’t get why you would want that.

00:26:49 So one reason is we recently changed the default of the summary output to include warnings and errors, and it gives a few more lines at the bottom of the terminal. But you also see the test failures right above it.

00:27:05 My argument for it was that when I’m running a test, I want to see my test failure as close to the actual terminal that I’m typing in as possible. And I would have to scroll up five or six more lines to get past the summary to start seeing that error.

00:27:25 Okay.

00:27:26 Yeah. You still see the failures then, right?

00:27:29 Yeah. You still see the failures.

00:27:31 Okay.

00:27:32 And the little summary line of how many passed, how many warned, how many failed, stuff like that, is still there.

00:27:39 Yeah.

00:27:39 Okay.

00:27:40 This is actually pretty cool. Yeah. I’m glad this is here.

00:27:45 What do we got next?

00:27:48 Looks like next, we’re talking about the new warning when there’s an unknown key in the configuration file.

00:27:56 This one, I think, is long overdue, but it’s also kind of annoying. And we’ve already gotten a lot of reports of people being like, oh, I got this output and what’s going on here? So I think we actually need to improve the messaging here a little bit.

00:28:11 But basically, if you have a config file and you’re trying to configure a pytest option and maybe you typo it, or maybe you forgot to install a particular plugin, pytest will now tell you that that option is not actually an option.

00:28:26 I wish it gave you a better hint, like, oh well, this might come from a plugin, or this might be a typo. We’re open to improving the messaging there, but basically it should reduce the "I added this option, but it did nothing" kind of problem.

00:28:43 Yeah. And the syntax catches me sometimes, because sometimes the config option is spelled one way and the command line flag is the other way around. So there’s sometimes some inconsistency there.

00:29:07 Yeah, I think that’s good. And then apparently it gives you a warning, but with --strict-config you can turn those into errors.
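A sketch of what this catches, assuming a typo like the one below in a pytest.ini; the exact warning text may differ slightly:

```ini
[pytest]
# "tesptaths" is a typo for "testpaths"; pytest 6 now warns something like
#   PytestConfigWarning: Unknown config option: tesptaths
# and running with --strict-config turns that warning into an error
tesptaths = tests
```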

00:29:17 And then, I didn’t include this, but, oh, there it is, the next one. This will help: a required_plugins configuration option. Because, like you said, sometimes plugins can actually add options too. So if you didn’t have a plugin installed, then that option would look like a mistake and you’d get a warning, but it might not be a mistake. I think this is great. So what was the behavior before?

00:29:53 There are basically two things that you would run into. The one that I would see the most often is you would be using addopts in your ini config to add the extra options, and pytest would just be like, I don’t know what this flag is: error, every single time you run pytest.

00:30:10 But now you can say required_plugins = pytest-xdist, or it might just be xdist, I forgot the actual syntax, and it’ll say, oh well, you haven’t installed this plugin yet, here’s how to install it, and stuff like that.
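A sketch of the ini side of that; the plugin names and version pin are just examples:

```ini
[pytest]
required_plugins = pytest-xdist>=1.34 pytest-timeout
```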

00:30:24 Okay.

00:30:25 Actually, I’m totally going to start adding this, because there’s a handful of plugins we use, and the run is wrong without them.

00:30:34 What probably happens is, well, we also have a little requirements file for people to install all the plugins.

00:30:41 And what probably happens is people don’t know they have to run that, and they just install pytest and start running.

00:30:48 I do that a lot. Forget to install a plugin.

00:30:52 Yeah. So this is good, especially if you’re using plugins that are not just nice-to-have features or features that are just decorative. There are some of those, like pytest-emoji; I don’t think your test behavior would change. But there are other cases where plugins are definitely a required part of the test strategy. So this is a good place for that.

00:31:20 Yeah, for like timeouts or distributing tests or any of the plugins that provide fixtures as well. But yeah, there’s all sorts of plugins and I think required plugins is a really good feature that again, a lot of these are like, wait, we didn’t have that already.

00:31:37 Yeah.

00:31:40 pytest-timeout is one that I just always install now, for instance. I guess for a lot of software-only projects there’s a possibility that you wouldn’t need it, but code can get into infinite loops. So having an option to say nothing should go longer than this, kill it if it does, is a good thing.

00:32:03 No reason a test should take ten minutes, right?

00:32:08 Well, okay, yeah, I’m laughing about that, too, because I know quite a few times that take more than ten minutes.

00:32:16 Well, so that’s the end of the list of new features that I had. Is there any other new features that you wanted to talk about?

00:32:24 You covered the one that I was most excited about, which is the type annotations. We’ll actually talk about one of the small breakages later, which I actually think is a feature more than a breaking change, but we’ll get to that one later.

00:32:39 Okay, so the next category is improvements. And my favorite improvement is that you can pipe the output to things like less and head and stuff like that.

00:32:56 Apparently it’s about other processes that close the pipe after they’re done with it, and that was causing a problem with pytest.

00:33:08 Actually, I ran across this and it’s a jarring error that happens.

00:33:16 So this is fixed.

00:33:18 The problem did occur in 5.4, I guess.

00:33:23 And I don’t know how long the problem has been there, because I could have sworn I was piping pytest output to things like head before.

00:33:33 So the problem has always been there, but it’s sometimes flaky based on how a process consumes the buffer.

00:33:41 This is actually a really annoying thing about Python in general and how it treats pipe failure on the standard streams.

00:33:50 I would actually argue that the standard streams should eat the pipe error and just like consider that the end of the file.

00:33:57 But basically every tool needs to implement the same change that pytest had to do here. And I know that I have like 40 tools that don’t do this and so they run into exactly the same problem.

00:34:11 But basically what happens is the process that you’re sending your output to will terminate. Like maybe it only needs to read a few lines from input, or maybe it crashes or something, but it will stop reading the input, and when that process closes, it closes its standard in stream, and that closes the standard out stream of pytest. Then pytest tries to write to standard out and gets SIGPIPE, which is the Unix signal, the POSIX signal, that says, okay, this pipe is closed, you can’t actually write to it. Python by default turns that into an IO error, and that’s where you would see it as an end user. But now we’re basically catching that IO error and saying ignore it; this is just how Python does streams.
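A generic sketch of that pattern. This is not pytest's actual code, just the shape of the fix any command-line tool needs when its stdout pipe gets closed early:

```python
import os
import sys


def write_report(lines):
    try:
        for line in lines:
            print(line)
        sys.stdout.flush()
    except BrokenPipeError:
        # the consumer (e.g. `head`) closed the pipe early; treat it as EOF
        # and point stdout at devnull so the interpreter's final flush does
        # not raise a second BrokenPipeError at exit
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
```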

00:35:01 Okay.

00:35:03 Yeah. Anyway, so you can mostly use pytest like any other standard out thing and pipe its output to stuff now, which you could before with some things. I mean, you could pipe it to grep; grep didn’t do that.

00:35:17 Yeah. So one thing that you could do to cheat this in older versions is you could pipe pytest to cat and that will force it to flush the entire output into cat, and then you could pipe cat into some other executable that just forces it to read the entire standard out.

00:35:36 Okay.

00:35:37 Yeah, it totally makes sense if you’re piping it to, like, head -10 or something.

00:35:44 Just read the first ten lines. It doesn’t need to read the whole thing. Of course it’s going to just close after that.

00:35:51 Interesting.

00:35:52 But yeah, it’s way better now.

00:35:54 Okay, so I’m curious about the next one: improved precision of test duration measurements.

00:36:03 So I know I can get these if I say --durations=10 or something like that.

00:36:09 I think what that does is, actually, I don’t know what that does.

00:36:13 I think it’s that the ten slowest tests will show up.

00:36:16 Okay.

00:36:18 And then you also have to pass -vv to actually show all the durations. Are the durations used for something else other than this?

00:36:30 No, it’s just for displaying on the command line. I think you can access them in plugins; I don’t really remember. But basically we improved from having only, I think it was, tenths of a second, and now there’s microsecond resolution.

00:36:43 Okay, so it doesn’t say anymore that this took 0 seconds, which was not necessarily super useful for most people.

00:36:54 0 second.

00:36:55 It’s fast, it’s super speedy, but yeah, it would have been like 300 milliseconds and you just wouldn’t have known.
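For reference, the flags being described; the note about verbosity reflects my understanding that very short durations are hidden unless you ask for them:

```
pytest --durations=10       # report the 10 slowest setup/call/teardown phases
pytest --durations=10 -vv   # -vv also shows entries that would otherwise be hidden as too small
```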

00:37:00 Okay, next up is pretty cool.

00:37:04 The rich comparison of dataclasses and attrs classes is now recursive. So if you’ve got classes within classes within classes, which you usually do, right? Yeah.

00:37:18 So it’ll go down the line and compare them down.

00:37:23 So one, very cool. And two, I was surprised it didn’t already do this.

00:37:28 Yeah, it was just like a little thing that was like, well, we’ll do this if people find it useful. And we weren’t sure whether we were going to keep the rich comparisons for these, but people seem really excited about this. And actually an external contributor is working on adding this for namedtuples, so we’ll have another one of these dataclass-like things with rich assertion support.
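A small sketch of the kind of assertion this affects; with pytest 6 the failure diff now drills into the nested dataclass instead of stopping at the outer object. The classes here are made up:

```python
from dataclasses import dataclass


@dataclass
class Engine:
    horsepower: int


@dataclass
class Car:
    name: str
    engine: Engine


def test_nested_dataclass_diff():
    # the mismatch is two levels down; the assertion rewriting now points
    # at exactly which nested field differs
    assert Car("roadster", Engine(300)) == Car("roadster", Engine(310))
```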

00:37:53 Okay, yeah, that’d be good.

00:37:55 I use namedtuples all the time.

00:37:57 Yeah, I love namedtuples.

00:38:00 I use them basically all my projects. But yeah, I’m excited for this as well.

00:38:07 I’ve been trying to use dataclasses more, but namedtuples are supported by more versions of Python.

00:38:19 I kind of like the next one.

00:38:22 It’s a little bit of an oddball: if you type --version, you used to get a whole bunch of stuff, and now you just get the version of pytest, and if you want all of the other stuff again, you add another --version.

00:38:40 Yeah, this is my fault, entirely my fault.

00:38:45 This is based on the --version convention that the Python command uses, specifically Python 3, where if you pass --version it gives you just a very simple version, but if you do --version twice, or three or four, however many more times, it will give you more information.

00:39:02 The story behind why we did this is actually kind of interesting, though. In older versions of pytest, and actually there were kind of three things that caused this, if you had a plugin that crashed on startup, you couldn’t actually ask pytest what version it was. And so people would make bug reports and they’d be like, well, I don’t know what version of pytest I’m using, because pytest --version is crashing.

00:39:25 Another thing that would happen here, and this actually ties into required_plugins and the unknown configuration option thing that we saw before: pytest --version would also trigger all of the additional command line flags that you may have set in your ini file, and this could cause pytest to not actually spit out the version when you meant it to.

00:39:46 What’s the third reason? I don’t remember the third reason. But anyway, --version now no longer goes through plugin initialization, so it’s just going to spit out the version and basically should never fail, you know, knock on wood. And --version --version will give you the old output, which does the full plugin initialization and stuff.
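So, as described, the two invocations now look roughly like this:

```
pytest --version             # just the pytest version, no plugin initialization
pytest --version --version   # the old, verbose output including plugin info
```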

00:40:08 Okay. Actually, so that’s even more cool than I realized.

00:40:14 The shorter version doesn’t do as much work then and it will run more of the time.

00:40:20 Yup.

00:40:22 It should help us with bug reports too.

00:40:27 Okay, that’s really good to know then. That’s great. And there are teams all over the place where one or two of the people on the team are supporting the rest of the team, and it’ll help them too. Well, did you install pytest correctly? What version are you using? I don’t know, it’s crashing.

00:40:49 Yeah. Okay, cool.

00:40:52 The next one. Also, I care about this. I don’t know how many other people do.

00:40:58 The JUnit XML now includes the exception cause in the message XML attribute for failures during setup and teardown.

00:41:14 Before this, if a failure happened in setup or teardown, or basically in one of the fixtures... actually, I don’t know this. Does this apply to fixtures as well?

00:41:29 I don’t know with JUnit XML, I’m not really sure. I think it also applies to fixtures.

00:41:33 Okay.

00:41:36 This definitely was a problem for me. The reason why I use JUnit XML is it’s one of the ways to get an XML output that Jenkins can read.

00:41:52 It’s always for Jenkins or other CI providers. This is why I actually don’t know anything about it, because I’m not using Jenkins at the moment.

00:42:00 Yeah. So I’m in that corner of people that both use Jenkins to drive pytest and also push a ton of functionality into fixtures.

00:42:11 So I like to have as much work as possible in fixtures so that it’s only really the thing that I care about that’s in the test.

00:42:24 Yeah, that makes a lot of sense.

00:42:26 And when you’ve got very complex fixtures, there’s lots of failures that happen there.

00:42:32 So having that information there, that’s a cool thing.

00:42:37 Cool. Again, that’s the end of my list of improvements.

00:42:43 Any others you wanted to add?

00:42:44 I mean, I was mostly going to talk about the pipe and the version thing, so I’m glad we covered both of those.

00:42:52 One of the things I really love that is included in the change log is a section on improved documentation. I think it’s getting better, but documentation is massively important for open source projects, and for all software projects, really. Having something that calls out the improvements in documentation is, I think, a great thing.

00:43:20 I actually really love this section because it gives us the opportunity to commend the people that are contributing non-code things to pytest, and give them a way to be like, hey, look, I did this thing in this big open source project, and give them a shout-out in the actual change log.

00:43:37 Yeah. And actually there’s one here that I forgot to add, which was, apparently in the getting-started sort of documentation, it wasn’t really talking about the -q flag. We have -v for verbose, but we have -q for being more quiet, and apparently that was missing in one of the bits of documentation.

00:44:08 And then I went and looked at the change and it’s like one line. It’s like a very small change to the documentation. But when people are starting out, that one little extra thing is important.

00:44:18 Yeah. Definitely reduces confusion by adding that one little line.

00:44:23 And then there are a couple of things I liked, though I haven’t read these changes yet.

00:44:31 One of them was explain indirect parameterization and markers for fixtures.

00:44:36 So is that two things: explain indirect parameterization better and explain markers better?

00:44:43 So it’s kind of both; they’re also very closely related. The change is mostly about making indirect parameterization less confusing. I’ll be honest, I’m still confused about indirect parameterization. So with this documentation change, I was actually like, oh, that’s how it works. Okay, I get it now.

00:45:03 I think I read it actually just a few months ago, when I had a question at work about indirect parameterization. I’m like, okay, I’ve got to figure this out. So I spent like three hours on a weekend and just played with indirect parameterization until I felt I was comfortable with what it did, and, oh, okay, I finally get it. It’s really not that complicated. It’s just hard to explain.

00:45:28 And I’m not going to try to explain it here, so maybe I’ll do that.

00:45:31 That could be a whole other episode, right?

00:45:33 Yeah.

00:45:34 And then there was a note about --strict and --strict-markers and the preference for the latter one.

00:45:45 What does --strict-markers do?

00:45:46 So it makes it so that if you use a marker in your tests and you don’t have it defined in your configuration, or a plugin has not advertised that marker, it will be an error. This is to help you not have typos in your markers.

00:46:00 Otherwise markers would just silently allow those.
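A small sketch of registering markers so --strict-markers can catch typos; the marker names and descriptions are made up:

```ini
[pytest]
addopts = --strict-markers
markers =
    slow: tests that take a long time to run
    regression: tests in the regression suite
```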

00:46:04 The reason that we’re documenting the difference and the preference for the latter one here is that we’re planning to make --strict turn on all of the strict options in pytest in a future version. And I think that’s a breaking change for pytest 7, so whenever that comes out.

00:46:23 Okay.

00:46:24 But right now, --strict accidentally does prefix matching on some other option, and we’re going to be changing the meaning of that option. So we wanted to document that you should spell it out in full for now.

00:46:37 Okay.

00:46:38 Yeah. We should probably get together all of the different strict options.

00:46:43 So. Yeah, having one that turns them all on is probably a good thing.

00:46:50 Okay, so if you don’t declare a marker in the config file, it still just gives a warning right now.

00:46:58 Yeah, but this will turn it into an error.

00:47:00 Okay, yeah, the warning, actually.

00:47:04 That’s one of the things I have to explain to people that are reading my book, because that wasn’t the thing when the book came out.

00:47:13 Brand new warnings. Yup.

00:47:15 Yeah. Anyway, so good changes to documentation. I love that. And I love calling it out. And I would love to have more people jump on and try to help improve documentation. I didn’t see a whole bunch of stuff in the bug fixes. There’s a lot of bug fixes and there’s a few deprecations.

00:47:38 Most of the deprecations are for plugins.

00:47:41 I’m okay if we skip over those. There are a lot of bug fixes in this change log, really, although we’ve backported almost all of them to older versions of pytest, so you’ll probably get them one way or another, whether you’re upgrading to 6 right now or sticking with the 5.4 release.

00:47:58 But there’s a ton of them.

00:47:59 You’ve backported the bug fixes, or backported the bugs?

00:48:02 Backported the bug fixes. Yeah, we reintroduced bugs in the older version. But we’re doing more of a support-based release branching. We actually changed how we’re doing releases, probably since the last time we talked.

00:48:19 Okay. This gives us more flexibility to do a single branch development model and have your maintenance branches separate and backport changes to those.

00:48:32 Yeah, there are a lot of bug fixes here. I don’t know if there are any that are super noteworthy, though.

00:48:37 So does that mean we’ll get more bug-fix versions of 5.4, then?

00:48:45 I’m not sure. We haven’t done any old releases in a while. I think the only one that we did extended releases for was the 4.6.x branch, which is for Python 2, but I believe we’re stopping that at the end of the year. We haven’t really decided whether we’re doing previous minor release support yet, but we have the structure in place if we wanted to.

00:49:11 Okay, actually, I scrolled down too far, but even the 5.x line might continue with some fixes then, or something, maybe.

00:49:21 We’re not sure yet.

00:49:22 Not sure. Okay, why did I write this stuff down? So the breaking changes. We haven’t talked about that yet.

00:49:35 Oh yeah, I know why I wrote this down. So one of the things, and we actually talked about this right away: there’s a whole bunch of breaking... actually, not a whole bunch of breaking changes, there’s just a few. Like you said, a lot of them are things plugin authors care more about. But I think the workflow is the same workflow as everything else.

00:50:02 Download 6, or use 5.4.3, and run your plugin tests.

00:50:13 Upgrade to 6. Run your plugin tests again. If everything is fine, you’re probably fine. If it breaks, then go and look at the deprecations list and the breaking changes list and see if there’s something that you care about. So the things I’m highlighting, a couple of them: one of them is that the pytest deprecation warnings are now errors by default.

00:50:39 So we do this every major release.

00:50:43 In the minor releases, we accumulate new warnings, and then at 6.0 we turn those warnings into errors by default. And then in the .1 release of the major version, we delete the actual functionality. So it gives you a little bit of extra time: if you need to, you can downgrade those errors back to warnings to get a new feature in the 6.x releases, and it gives you some time to adjust to that.

00:51:06 Okay. Yeah. So every major release will have essentially this change log entry.

00:51:12 Yeah. And there’s a special page that talks about deprecations and removals, so we’re going to link to that too.

00:51:21 And like I said, I think for the most part it’s people writing plugins that are doing some fun stuff who are affected.

00:51:30 Yeah. We also deleted a lot of internal stuff. We found a lot of dead code that people might be using accidentally or incorrectly, stuff we noticed when we typed things or when we ran some dead code detection.

00:51:44 I’m looking at one right now that I didn’t list, but it’s "removed the pytest_doctest_prepare_content hook specification," with a note that says this hook hasn’t been triggered for the past ten years.

00:52:04 It’s great. It’s code that hasn’t been run since before I started writing Python.

00:52:10 But there’s a hook, you could write a hook for this. It just won’t ever get called. And it hasn’t been called for a long time. So that’s funny.

00:52:18 This is one of the ones that we noticed were like, wait, what is this hook? Oh, it doesn’t actually run at all. Great.

00:52:26 Okay, so the next ones, you’ve added, I think... okay, one of them I understand. This one, the testdir run. We talked about this already.

00:52:40 The thing you get back, you can ask it for the parsed outcomes, and the outcomes list now has all the nouns in plural form, like errors, passes, and stuff like that.

00:52:52 And some of them were non-plural if it was just one thing before.

00:52:59 Yeah. It was very inconsistent before. Some of them were pluralized, some of them were not. Some of them were sometimes pluralized. And we’ve made it all consistent now.

00:53:07 Yeah. And yes, the English looks weird: one test, or "one passes," or "one failures."

00:53:19 Yeah. The English isn’t great, but if you do it the other way, you’re often writing automated stuff to parse the outcome of that, and if the name is going to change all the time, that’s annoying. So I would say that’s not a breaking change, that’s an improvement.

00:53:39 But I guess we noted that people could be using this as an API, and so it could technically break. What’s the xkcd comic where pressing the space bar heats up the computer?

00:53:50 It’s one of those sorts of scenarios.

00:53:53 Okay, so the next one: the internals were rewritten to stop using eval.

00:54:01 So this will change some of the behavior of -k, which is to select individual tests based on a name, and -m, to select based on markers.

00:54:13 The internals of this were pretty wild and resulted in a lot of really hard-to-read errors, and they were using eval to figure out the actual values. And so you could do some really wacky code and end up with test matching that you wouldn’t really expect, or you could trigger arbitrary code as part of -k. We wanted to, one, factor out eval, but also make all of the error messages and functionality straightforward and easier to understand.

00:54:42 I’m actually really excited about this change because it makes -k work in a much more sane way.

00:54:50 There were ways where you could write the wrong combination of "and" or "or," and the evaluation would jam those all together and run every test when you didn’t really want that. But now there’s a little mini grammar that dictates how these two flags work.

00:55:07 Oh, that’s great.

00:55:09 Yeah. So if people aren’t familiar, -k is the keyword flag, I guess. I don’t know why they call it keyword, but I think it’s keyword. Anyway, let’s say it searches for a string in the test name. Well, it’s not just the test name, it’s also the entire node name, right?

00:55:31 Yeah. And it’s also attributes on the test, which I learned recently as well, by accident.

00:55:36 Oh, attributes on the test.

00:55:38 It’s just undocumented right now.

00:55:40 Okay.

00:55:42 But you can say things like, let’s say I’ve got a bunch of tests where I’ve added "remote" or something like that to the name for my remote tests. I can use that, and it’s just in the test name, so they’re not marked or anything. I can just say run all the tests that have "remote," but I can also say, oh, but not the ones that also have this; there’s this logic in there. And then -m is the same sort of thing, but with markers.

00:56:13 So you can say I want to run all of my tests that are marked with, like, a regression marker, but I don’t want to run the ones that are slow, or something like that, if I mark them that way. So I think you can do this: can you combine -k and -m?

00:56:34 Okay.

00:56:35 Actually, -k is one of those flags. This is one of the most important features for getting past the basics of pytest. Once I understood how -k works, it was super powerful. Like you gave an example: one example that I use in the test suite for pre-commit, which has a bunch of different programming languages, is I can just do -k node and it’ll run the JavaScript tests, or I can do -k ruby and it’ll run the Ruby tests.
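A few illustrative invocations of the two flags; the name and marker expressions here are made up:

```
pytest -k "remote and not flaky"      # match substrings of test names / node IDs
pytest -m "regression and not slow"   # match registered markers
pytest -k node -m "not slow"          # the two flags can be combined
```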

00:57:02 Nice.

00:57:03 Super powerful.

00:57:04 Yeah.

00:57:06 Well, that’s the end of my list of things I wanted to cover for pytest 6.

00:57:11 I think we’re good.

00:57:12 Yeah, I’m excited. I’m excited about the future of pytest.

00:57:17 We brought it up a little bit. I wanted to do a special thank you. So thank you to everybody that’s contributed to pytest in the past, including Anthony. Thank you.

00:57:30 Also, I really want to give a special shout-out to Bruno Oliveira, because he’s been contributing for so long. If you look at the entire history on GitHub, and I know that the project was somewhere else, it was on Bitbucket or something before, but if you look at the contribution graph for the entire history of pytest on GitHub, it shows Holger, because Holger started the project a long time ago, way back.

00:57:59 But his contributions sort of dropped off at around the same time that Bruno ramped up, and Bruno has been contributing for a long time.

00:58:10 I think it would definitely be a different project without him for sure.

00:58:14 And then Ran Benita. We talked about his contributions this year to the type checking. I think he also added some of the flags, like the --no-header flag.

00:58:30 Ran’s been doing a lot of stuff.

00:58:32 He’s been really kicking butt, but this just started, I was looking at his history, around last July. So he hasn’t been on the project for really that long and has been doing a lot of contributions. Very cool.

00:58:48 He’s been doing an exceptional job. I’m really impressed with Ran.

00:58:51 That’s great.

00:58:53 But there are so many more. Look at the contributor list; it’s a big list, and I was just talking about commits, but there are people that help by submitting bugs, that do testing, the beta testing of the RCs, and everything else. All that stuff is important. It’s a really cool community, and thank you, Anthony, for coming on the show today.

00:59:18 Yeah, no problem. Always happy to be on the show and always happy to work on Python.

00:59:26 Thank you again, Anthony, for joining me today, and thank you, Datadog, for sponsoring; visit testandcode.com/datadog to get started. And thank you to all the listeners that support the show through Patreon; join them by going to testandcode.com/support. Those links, and links to the change log and a couple of other things, are in our show notes at testandcode.com/125. That’s all for now. Go out and upgrade to pytest 6. Go out and test something.