pytest is awesome by itself. pytest + plugins is even better. In this episode, Anthony Sottile and Brian Okken discuss the top 28 pytest plugins.


Transcript for episode 104 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 pytest is awesome by itself. pytest plus plugins is even better. In this episode, Anthony Sottile and I discuss the top pytest plugins according to download counts. It’s a lot of fun, and I know you’ll learn a lot. I know I did.

00:00:15 Thank you, Oxylabs, for sponsoring this episode. Oxylabs is a top provider of innovative services, including Real-Time Crawler, Web Scraper, and residential and data center proxies, trusted by more than 500 companies. Find out what they can do for you at oxylabs.io. Welcome to Test & Code.

00:00:48 A podcast about software development, software testing, and Python.

00:00:59 Today on Test and Code, we’re going to do something fun, and I have Anthony Sottile. You’ve been on before. Thank you for coming on again. And I had this crazy idea to just kind of talk about pytest plugins. Well, first of all, welcome, Anthony. Thanks for coming on the show.

00:01:13 Yeah, glad to be here.

00:01:15 pytest plugins are definitely one of the cool parts of pytest. There are other tools that have plug-in capabilities, like coverage and flake8, and I think the plug-in ability makes them even more popular. And I think that’s definitely the case with pytest as well. It would be powerful without it, but it’s very powerful with it. I know how to write plugins; I’m just not sure where they came from.

00:01:37 Do you know if plugins were just part of pytest from the beginning?

00:01:40 That all predates me a little bit. But as long as I’ve been with the project, pytest plugins have been a big part of how pytest works.

00:01:49 Some of the plugins, I almost forget, are really separate things that you have to download, because some of the functionality of pytest that I think of as built in is really built by plugins. So pytest ships with a bunch of plugins already, right?

00:02:04 Yeah. Internally, pytest actually implements a lot of its functionality with internal plugins.

00:02:10 Like the nose stuff. It can run unittest and nose tests, and both of those are written as plugins.

00:02:17 I think, as well as like the logging and warning stuff, that stuff is all done by plugins.

00:02:23 So I can’t remember. I think I went through a couple of hoops to get this. There’s pypistats, and then there’s something else I think that I grabbed this from, but I just searched for pytest. I think I took the top 4,000 PyPI downloads of the last year and then searched for just pytest in there to try to find the plugins. Because usually, if people are nice, they’ll name their plugin either pytest-something or something-pytest.

00:03:00 I don’t think that’s a rule, but it’d be weird to not because how would people find it?

00:03:06 Yeah, hopefully we’re not missing any because of that, but I assume we’ll get most of them.

00:03:10 And I know that download counts aren’t the only thing, but after looking at this, I think actually the top 20 or 50 are a fairly decent representation.

00:03:22 So I think we should just jump in. I don’t know how many we’re going to get through. We’re just going to start doing it, and keep going until we either run out of time or Anthony or I get bored with it.

00:03:35 I’m bored already now.

00:03:38 Oh, good. I’m sorry. Now, why don’t I just introduce the first one? It’s pytest-cov, which is a plugin to help you run coverage. What do you think of this one?

00:03:50 Yeah, I actually think this plugin is very good for bridging coverage and pytest, which are two tools that are almost born for each other, like coverage and testing.

00:04:00 I myself tend not to use this plugin all that often, because you can get away with just vanilla coverage. But if you want one command that does everything, pytest-cov is the way to do it.

00:04:11 One of the recommendations from coverage, or at least it was for a while, was you want to be able to capture even the startup of something.

00:04:21 And so using a pytest plugin might get it so that you miss some of the initialization of your test stuff. But the pytest-cov plugin has worked fairly hard to avoid that, so I don’t think this plugin has that problem. The other bit is whether or not the flags keep up. So with pytest-cov, if there are some flags that you want to use from coverage that aren’t supported by pytest-cov, there’s that issue.

00:04:52 So the other thing is, if you’re mostly running your coverage from tox, this plugin really doesn’t buy you much. You can just run it there. So it’s either pytest runs coverage or coverage runs pytest, right?

00:05:06 Yeah, pretty much.

00:05:08 Okay.

00:05:09 Interestingly, there’s some work that I believe is in progress, but is probably closer to done than it was before, to actually merge a pytest plugin directly into coverage, and then we wouldn’t need this plugin anymore because coverage would just support it out of the box.

00:05:26 That would be cool.

00:05:28 But I don’t remember what the status of that is. I know Ned was starting to work on it, but I don’t remember the status of it.

00:05:35 And then there was some talk about just trimming down the coverage so that it wouldn’t really deal with a lot of the settings and flags or anything at all and just run like some simple things. But I don’t know, I’ve tried to stay out of that discussion.

00:05:53 Yeah. The cool thing about both approaches is, like, pytest-cov hooks into the process early enough that both your test discovery and runtime will get instrumented by coverage.

00:06:06 Yeah. If I’m using pytest-cov, I’ll throw the flags for where the source is and all those flags you have to pass in into the pytest.ini, so I don’t have to type them every time.

00:06:18 True.
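For reference, the two invocation styles discussed above look roughly like this (the package name is just a placeholder):

    # pytest drives coverage via the plugin
    pytest --cov=your_package --cov-report=term-missing

    # or coverage drives pytest, with no plugin involved
    coverage run -m pytest
    coverage report

And if you would rather not type the flags every time, they can live in addopts in pytest.ini:

    [pytest]
    addopts = --cov=your_package --cov-report=term-missing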

00:06:19 Anyway, okay, next up is pytest-timeout. Actually, I’d forgotten about this. I really need this right away.

00:06:29 Yeah, this one’s pretty great. I use this one mostly at work with some tests that are supposed to finish in a certain amount of time, but sometimes, due to unknown circumstances, may start running away or looping forever, and we can’t quite solve the halting problem. So this is a good plugin to put in place to just say, if a test is running for 60 seconds, it’s probably never going to pass. We’ll just kill it now and mark it as a failure.

00:07:00 Yeah.

00:07:02 I have some long-running suites that we run over the weekend against a bunch of hardware and stuff, and we could even use this with big timeouts. Yeah.

00:07:12 And another thing that we use this for is we have a 90-minute timeout for Jenkins at work, and if something is just spinning and not using resources for 90 minutes, that’s 90 minutes of CPU time that we could be spending on some other project. And so this is a nice way to kind of prevent those runaway jobs.
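As a rough sketch, pytest-timeout can be applied globally on the command line or per test with a marker (the 60 and 300 second values, and the test name, are just examples):

    # kill any test that runs longer than 60 seconds
    pytest --timeout=60

    # or give one long-running test its own budget
    import pytest

    @pytest.mark.timeout(300)
    def test_weekend_hardware_run():
        ...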

00:07:30 Oh, yeah. Okay. What have we got next?

00:07:33 The next one we’ve got is pytest-xdist, which is what I like to refer to as the easiest hammer to make tests faster.

00:07:42 The easiest hammer.

00:07:45 So what xdist does is it makes it really easy to take an existing pytest suite and spread it across a number of processor cores to kind of naively parallelize your test suite.

00:07:59 There are some cases where this doesn’t work all that well.

00:08:03 I know, in particular, the mypy test suite doesn’t deal well with xdist, due to having a very large number of very small tests.

00:08:12 But for an average project that seems to improve your performance out of the box.

00:08:19 Okay, there’s a flag that you can pass in to have it just pick the number of CPUs, and it does a decent job.

00:08:25 Yeah, I believe it’s dash n auto.

00:08:28 Okay. Yeah, that’s cool, because I wouldn’t know what to pick.

00:08:34 Usually it tries to pick based on the number of CPU cores you have on your computer, although I know that doesn’t work all that well in Travis CI and such. So there’s special checks for like, I’m on Travis CI, so there probably aren’t actually 36 processors. We’re going to pick a smaller number.

00:08:53 Yeah. But even so, it speeds stuff up. Even if you just throw it on, like, four cores, it’ll speed it up.

00:09:02 I was curious about it with a lot of little small tests. If your suite is already pretty fast and you have a small number of tests, let’s say you’ve got four tests, it might not really be that much faster to split it up onto four cores, because there is some overhead in combining it. Right?

00:09:21 Yeah, it’s probably actually going to be slower in that case.

00:09:24 Yeah.

00:09:26 Like any optimization, it’s good to measure before and after to make sure you’ve actually made things better.

00:09:33 And there’s some other stuff you have to look out for with xdist, because it is actually forking your interpreter and running it in some processes. And so sometimes tests may have some global mutable state that doesn’t really work all that well when you run them in parallel or run them in a different order. So often you have to look out for that.
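For the record, the flags mentioned above look like this (the worker counts are just examples):

    # let pytest-xdist pick a worker count based on your CPUs
    pytest -n auto

    # or pin it explicitly
    pytest -n 4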

00:09:55 Thank you to Oxylabs for sponsoring this episode. Oxylabs is a top provider of innovative web data gathering services such as Realtime Crawler, Web Scraper and Residential and Data Center Proxies.

00:10:09 Oxylabs is now introducing their next generation residential proxies, which are a significantly improved data gathering solution. They provide a stable and fast proxy pool with more than 30 million global IP addresses, and they are resource efficient, with the proxy management, user agents, and IP rotation all done on the Oxylabs side. Oxylabs has a deep understanding and knowledge of how to acquire web data, and they provide a dedicated account manager for every client. Already trusted by more than 500 companies. Visit oxylabs.io/testandcode to find out more about their services and to apply for a free trial of their next generation residential proxies. That’s oxylabs.io/testandcode. Next is pytest-mock, and I kind of like this one.

00:11:05 It’s definitely a convenience one, because mock, you can use it without a plugin. You can use pytest and mock together, but the cleanup is a little... you have to make sure you get the cleanup right.

00:11:20 What happens if you don’t unmock something?

00:11:24 Well, that’s your classic case of test pollution, when you suddenly change how a global function works and then you don’t undo it at the end.

00:11:33 Yeah.

00:11:36 And that’ll happen, right? So if you replace some functionality, it’s like a monkey patch. It’s unfortunate that there is a monkeypatch fixture within pytest, which actually works pretty well, and I like it. But now when I want to say I’m monkey patching something, does that mean monkey patching in general, or using the monkeypatch fixture?

00:11:58 Right, yeah. The thing with pytest-mock is it’s slightly more powerful than the monkeypatch fixture, and so often you get a little bit more flexibility and a little bit more feature set out of it.

00:12:10 Yeah. And the convenience is you get this mocker object and you don’t have to clean up afterwards, because it’ll clean up after your test.

00:12:22 Which is good and bad.

00:12:24 Actually, I’m sometimes a little bit hesitant about using the mocker fixture because the cleanup time is not necessarily well defined. It’s supposed to be right after the test ends, but due to fixture ordering, it might end up being slightly after that.

00:12:40 But yeah, it’s one thing to look out for when you’re using it.

00:12:43 What would you do instead? Would you create a context manager? Yeah, context manager. That’s it. Yeah.

00:12:50 So I’ll usually use the context manager protocol with mock directly, although sometimes I’ll write my own fixture that does that.

00:12:57 Or use a yield fixture or other stuff like that, or you would just do the mocking right in the test.

00:13:02 That doesn’t always scale all that well, but it seems to work well enough for me.
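A minimal sketch of the two styles being compared here, using a stdlib function so it stays self-contained (the function under test is made up for illustration):

    import os
    from unittest import mock

    def current_dir_name():
        # tiny function under test, purely illustrative
        return os.path.basename(os.getcwd())

    def test_with_mocker(mocker):
        # pytest-mock: the patch is undone automatically after the test
        mocker.patch("os.getcwd", return_value="/tmp/project")
        assert current_dir_name() == "project"

    def test_with_context_manager():
        # plain mock: cleanup happens exactly where the "with" block ends
        with mock.patch("os.getcwd", return_value="/tmp/project"):
            assert current_dir_name() == "project"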

00:13:07 Next on our list is something that surprised me that’s in here: it’s pytest-runner.

00:13:15 This was something that you needed to use if you’re going to run pytest from your setup.py. So if you say, like, python setup.py pytest and you want that to call pytest, you needed this. But we don’t want people to use this anymore, right?

00:13:38 Yeah. For the most part, setuptools and related PyPA projects have been moving people away from the python setup.py thing. setup.py test has been deprecated for almost a year now, maybe a little bit more than a year. And with that deprecation, it made sense for pytest-runner to also deprecate that approach as well. And most of the reasoning behind that is because it was very different from how pip installs things. It involved eggs, all sorts of weird technologies that have mostly gone away, and encouraging people to use virtual environments and pip to install stuff instead.

00:14:20 Yeah, I haven’t used this for a long time.

00:14:22 Yeah, I’m not surprised. It has like over 11 million downloads in the last however long.

00:14:28 That doesn’t surprise me all that much because the deprecation has been relatively recent, and I expect it to live for quite a while until it’s eventually phased out.

00:14:39 Okay, so the next one is pytest-instafail. Have you used it?

00:14:45 Yes, I’ve used it a couple of times.

00:14:47 It’s pretty cool when you’re working on tests interactively and you have a whole bunch of failures and a really long test suite.

00:14:55 It will spit out failures as soon as they happen. So what I often do, or a workflow that I often do, is I’ll open one tab of my terminal, I’ll run pytest with instafail, and then as the tests are running, it might spit out a failure and I’ll immediately start trying to fix that failure. I won’t wait for the full test suite to end.

00:15:14 And that has some slight advantages over vanilla pytest, because pytest usually waits until the whole test suite is run to spit out any of the failures.

00:15:24 When you say spit out a failure... pytest already tells me, if I do -v, for instance, which test is running and whether it passed or failed.

00:15:34 Right, but it won’t give you the assertion message or the stack trace you ended up with, whereas instafail will show that immediately.

00:15:41 Okay, so that makes it so that the stack trace is showing up right away.

00:15:46 Cool.

00:15:47 Sometimes I’ll instead use maxfail one, so that I’m just running the test suite until it hits the first failure.

00:15:54 But this at least lets me see all the failures at once and give me that early feedback.

00:15:59 Do you really use maxfail one all the time?

00:16:02 Yeah. Because you don’t like typing -x? I can never remember the short options, and there’s so many of them.

00:16:11 Okay.

00:16:11 But then again, I can’t remember whether it’s max-dash-fail or maxfail, one word. So I should probably just learn the short opts.
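For reference, the options being discussed:

    pytest --instafail     # print tracebacks as soon as each test fails
    pytest -x              # stop after the first failure
    pytest --maxfail=1     # effectively the long-form equivalent of -x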

00:16:20 The next one’s not surprising to me. I don’t know much about it: pytest-django. But I have talked with Django people who say this is just the way to go. It’s how you can easily hook up Django testing with pytest. So I can’t really talk much about it, but it’s cool that it’s there.

00:16:41 Yeah.
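For anyone curious, a typical minimal setup looks something like this; the project name is made up, and django_user_model is one of the fixtures the plugin provides:

    # pytest.ini
    [pytest]
    DJANGO_SETTINGS_MODULE = mysite.settings

    # test file: the django_db marker grants test database access
    import pytest

    @pytest.mark.django_db
    def test_create_user(django_user_model):
        user = django_user_model.objects.create_user(username="someone")
        assert user.username == "someone"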

00:16:45 pytest-html. I have used this. I love it. I’m not using it right now, but I do really like it.

00:16:53 It is a project out of Mozilla, and it allows you to generate an HTML report for your test suite run.

00:17:05 And it’s actually kind of incredible how much stuff you can do with this, to the point that you can even have screenshots. You can do a screen grab and throw that into your test report. That’s pretty neat. And there are other ways to add data and information to the report.

00:17:35 Yeah. I actually hadn’t seen this one before, and so I went and installed it and tried it out. And the reports are actually super useful and they look pretty good.

00:17:44 Yeah. And there’s an option.

00:17:47 It’s a fairly good JavaScript thing where you can filter some of the results. If you’ve got a big suite, you can filter to just look for the failures, look for the passes, stuff like that. You can look at the time.

00:18:03 One of the things that’s nice is it does report how long things take, how long the different tests are running.

00:18:10 On episode 25 of Test and Code, I did interview Dave Hunt, and he’s one of the people on this project. We talked about it a lot.
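The basic usage is a single flag (the report name is arbitrary):

    pytest --html=report.html --self-contained-html

The --self-contained-html option inlines the assets so the report is one file you can attach to a CI job or pass around.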

00:18:22 This actually ties really nicely into the next plugin, which is pytest-metadata.

00:18:29 And I learned about it because I was using pytest-html. We have stopped using pytest-html, oddly enough, but we continue to use metadata, because it allows you to add extra data to the output, which you can stick into, like, your JUnit XML output, so that data can be seen by your continuous integration server or something. So it’s neat. We use it. Also, we’re storing our test results in an external database of our own design, and a lot of that extra data is collected, like which devices we’re running our tests on and what version of the software we’re using, and things like that can get thrown in there so they’re easily pulled out.

00:19:27 Yeah. I hadn’t actually seen this plugin before, but man, I wish I would have. Like two and a half years ago we were setting up a Selenium suite and it would have been very useful for annotating the output, but now I know about it.

00:19:43 Yeah.

00:19:45 By default it throws in a few extra fields, like a few versions, a few metadata items. I can’t remember what they are. Which operating system you’re running is one, which... I’m always using the same operating system, so I usually remove that, but I can see why that would be important for Mozilla.

00:20:05 But you can add your own, so we definitely had stuff. And it’s already built in to have some command line flags, so you can either add data at runtime or you can add data from the command line as one of the flags. So that’s cool.

00:20:22 Oh, neat. Thanks.
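As a sketch of what that looks like, metadata can come from the command line or from a hook in conftest.py (the keys and values here are hypothetical):

    pytest --metadata build_id 1234 --metadata device rack-a-03

    # conftest.py
    def pytest_metadata(metadata):
        metadata["firmware_version"] = "2.5.1"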

00:20:25 Thank you, Springboard, for sponsoring this episode. If you want to learn to write machine learning algorithms, want to learn how to build and deploy a deep learning prototype, or want hands-on experience in deploying a machine learning model into production, then check out Springboard’s Machine Learning Engineering Career Track. It’s like an online boot camp, but with way more project-based learning, and you’ll work towards creating your own portfolio of machine learning models. You’ll be paired with a machine learning expert who provides unlimited one-on-one mentorship support throughout the course. The program was built for software engineers, so to be eligible, you must have at least one year of experience coding with an object-oriented language such as Python, C, or Java. Test and Code has partnered with Springboard to exclusively offer 20 scholarships of $500 each to eligible applicants. Keep in mind, scholarships are awarded on a first come, first served basis, and you have to use code AI Springboard. Check out if you are eligible by going to Springboard.com. Applying is free and only takes ten minutes.

00:21:27 Next is pytest-asyncio.

00:21:32 Not surprising. This one is this high up.

00:21:34 Yeah, async is pretty hot.

00:21:37 So, yeah, actually this kind of tells me that people are actually testing their async code, so that’s cool.

00:21:43 Which is impressive, because it’s pretty hard to test async code.

00:21:48 Yeah. So actually, just because this plugin exists... I don’t have any async projects right now, but I might be one of those people that artificially goes out and creates a reason to use a feature, and I think this is a good reason to try to come up with some async project and build something to learn something.

00:22:10 That’s always my approach.

00:22:12 Yeah.
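If you do go build that async project, a test looks about like this once pytest-asyncio is installed (the test itself is a made-up example):

    import asyncio
    import pytest

    @pytest.mark.asyncio
    async def test_gather_results():
        # pytest-asyncio runs the coroutine on an event loop for you
        results = await asyncio.gather(asyncio.sleep(0, result=1),
                                       asyncio.sleep(0, result=2))
        assert results == [1, 2]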

00:22:13 The next one is one of the reasons why I’m glad we’re doing this list.

00:22:19 Me too.

00:22:21 So, pytest-split-tests.

00:22:26 This has some fairly long winded flags that you have to pass in to make it work.

00:22:33 However, what it does is it splits things up. So you’ve got a long test suite, or a test suite with a bunch of tests, either it’s long running or it’s just the sheer number of tests, and you want to run a portion of it.

00:22:49 Yeah. I actually don’t know how to do that without doing it manually. Before, I would maybe, like, run a different setup, split them up into directories, and run, like, a directory at a time or something.

00:23:01 But this just, like, chunks it up. So let’s say you’ve got, like, 100 tests or 100 test cases.

00:23:09 You can chunk it up to say do ten at a time, and in this instance, just do the first ten and run that, and then, like, the second ten or something.

00:23:22 This would be really great. Of course, we want isolated tests, but sometimes with fixtures and stuff, we kind of have to fudge the isolation a little bit to save time, and you still get, like, okay, so one of my tests is mucking up the fixture, so you kind of have to break it up. And that’s where I would definitely use this: to try to debug a large test suite where the whole thing fails, but it fails at different times, and yet every time one of them fails and I rerun that test, it passes. This would be a good way to help debug that, for sure.

00:24:01 Yeah. The one use case that I saw for this that seemed really good is, for example, GitHub Actions gives you, I think it’s 20 parallel workers, which is insane. That’s such a large number of parallel workers for free CI.

00:24:16 But often I’m struggling, like, how would I use all 20 workers on my workloads? But this would be an easy way to take some test suite that has independent tests and just split them in half or split them in thirds and use as many of those workers as possible.

00:24:35 Wow.

00:24:36 And that would be easy. You don’t even have to split them up into directories or anything.

00:24:41 Just add the flags and you’re good to go.

00:24:44 Yeah.

00:24:45 Interestingly, I used to work on another test runner called Testify, which is now defunct and dead, and pytest lives on.

00:24:54 But in Testify we had a similar feature to this, but it was really, really difficult to get right. And I’m glad that this plugin gets it spot on for pytest.

00:25:05 Yeah. Cool. I’m definitely going to start playing with this right away.

00:25:09 Okay. I’ve got mixed feelings about this next one.

00:25:13 About the next one. Me too.

00:25:15 Really? Okay.

00:25:17 Yeah. So the next one is pytest-sugar, which... actually, we have a few plugins that are similar in nature to pytest-sugar.

00:25:26 We might talk about them later, but another one is pytest-emoji.

00:25:30 And what this does is it changes the output.

00:25:33 I think pytest-emoji is a little bit simpler than pytest-sugar. pytest-sugar changes a lot of the way that pytest outputs things, and in some ways I think it makes the output a lot better.

00:25:45 It certainly compacts it a lot and gives you better signal on the pass and fail with these nice little check marks and other little things and little improvements.

00:25:56 But for me, it’s just very jarring coming from the default pytest output, and getting used to the alternate display is a little bit different.

00:26:06 What are your thoughts on it?

00:26:08 I personally don’t like it and I don’t know why I don’t like it.

00:26:13 So normally, if you just run pytest by default, it’ll print dots for the passes and F for failures. And this one does check marks and Xs instead.

00:26:27 And the percentage... it used to be more useful than it is now. So back in the day, pytest wouldn’t give you a percentage in the output. And now, as you see the output of your different test files running, it’ll tell you roughly, percentage-wise, how much is left.

00:26:49 Like, you’ve gotten 10% done or 40% done or something. And pytest-sugar would give you that, plus a bar chart.

00:26:58 Oh yeah, there’s the progress bar. I forgot about the progress bar.

00:27:00 Yeah. So it kind of did this progress bar, but because it draws it one line at a time, it ends up being sort of like, I don’t know, a Christmas tree effect.

00:27:15 I was going to say the same thing.

00:27:17 It gets bigger, but you don’t know where it’s going to end. So without the percentages, it doesn’t give you much. It’s just sort of kind of neat to have. I don’t know.

00:27:28 I think they’ve improved the progress bar a little bit since the last time I tried it, so it doesn’t end up with that Christmas tree. But yeah, I remember when the Christmas tree was there.

00:27:37 One of the things I do like about it is it’s just a demonstration that you can muck with the output of pytest. I think the code for pytest-emoji is a little clearer, but pytest-emoji was written as an example for people to learn how to change the output. So they have different goals.

00:27:55 I didn’t know that about pytest-emoji.

00:27:57 But now I know. pytest-poo is probably my favorite, but it’s deprecated.

00:28:03 Have you used pytest-poo?

00:28:05 I have not.

00:28:08 And I think it was just built as a joke, but I’m bummed that it doesn’t work anymore. So maybe I’ll have to do like a pull request.

00:28:15 And get that working again.

00:28:17 It just ran like normal, except all your test failures would show the poo emoji instead.

00:28:26 Gosh, got to love that. Next is pytest-rerunfailures. I’ve got to admit, we use this.

00:28:32 We do too.

00:28:34 So this is for flaky tests. One of the things you do right after a test fails in a big suite is you run it by itself to see if the next time you run it, it’ll pass. It gives you some information. So pytest-rerunfailures lets you just do that automatically. Any test that fails, it just runs it again. I think you can do it multiple times or give it a limit. We usually just give it one. We want to at least run it one more time to see if maybe just something was weird about the network or something.

00:29:04 I think we run it with three, which is just like a sad admission, but it’s life.

00:29:11 We’re pragmatists. It’s good to just have that data right away.
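For reference, the retry counts being discussed are just a flag, with an optional delay if the failures look timing-related:

    pytest --reruns 1                    # retry each failing test once
    pytest --reruns 3 --reruns-delay 5   # up to three retries, five seconds apart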

00:29:15 In a pure world, I would rather not use it, but it’s convenient.

00:29:21 I don’t even know if it really saves us time. I think that every time we use this, it just fails three times in a row anyway. So that’s actually making our test suite longer. But at least we know that it wasn’t just something weird.

00:29:37 Yeah, we applied it to a suite that had a couple of tests with, like, a 1% failure rate.

00:29:43 It would sort of pave over those problems with retries, and it seemed to work well enough to make the suite more reliable.

00:29:51 I mean, I think if I remember, it doesn’t completely hide that something failed.

00:29:57 It somehow tells you, yes, there were reruns and whether or not it passed the second time.

00:30:01 Yeah. It gives them a special status.

00:30:03 Yeah.

00:30:04 Okay.

00:30:05 pytest-env.

00:30:08 This is probably one of my favorite plugins and probably the one that I use the most.

00:30:13 What does it do?

00:30:14 So it’s a very, very simple plugin. I think the plugin is only like ten lines of code, but it basically takes a configuration file that lists a series of environment variables, so things that you would put into os.environ, and it just consistently sets them during your test run, or unsets them if you need that. Like, a specific example: I want to fake that I’m in the production environment, so you’ll just set a constant SERVICE_ENVIRONMENT equals production, and then whenever you run your tests, that will be automatically set for you. And before I realized this plugin existed, I was doing a similar thing with tox and the setenv option there.

00:30:56 But the problem with tox’s setenv is, if you run pytest outside of tox, then you don’t get that. You have to remember to set all the same environment variables, or you set up a manual fixture that sets them and unsets them yourself.

00:31:10 But this is just like way better for not having to think about that.

00:31:13 That’s cool.

00:31:15 One example: I do a lot of stuff that deals with Git repositories and running Git, and in order to make a commit in Git, you need a minimum of four environment variables coming from a blank system, to say what the user is and what their email is. And so I just have this nice little copy-and-paste blob that sets the user to Anthony Sottile and the email to something at example.com, and then I don’t really have to worry about using Git inside of my test suite.

00:31:47 Wow. Okay, that’s cool. Nice.
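A sketch of the configuration being described, with the four Git variables Anthony mentions (the values are placeholders):

    # pytest.ini
    [pytest]
    env =
        SERVICE_ENVIRONMENT=production
        GIT_AUTHOR_NAME=Anthony Sottile
        GIT_AUTHOR_EMAIL=someone@example.com
        GIT_COMMITTER_NAME=Anthony Sottile
        GIT_COMMITTER_EMAIL=someone@example.com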

00:31:50 Next is pytest-cache.

00:31:52 Unfortunately, it’s been included into core, so it no longer needs to be installed separately, but it allows you to cache some data about the pytest session, such that you can look at a previous run or prevent recomputation of some complex data or stuff like that. pytest actually uses this internally for only rerunning the failed tests if you’re using --lf for last failed.

00:32:25 Yeah.

00:32:26 And so that’s actually implemented using the cache plugin.

00:32:30 Okay, that’s funny, because I actually wrote about using the cache in the pytest book, but I forgot that it used to be a plugin, because it’s just built in. Well, I mean, it’s still a plugin. It’s just shipped with pytest now, right?
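A small sketch of the cache API and the related built-in options (the cache key and value here are made up):

    # conftest.py
    import pytest

    @pytest.fixture
    def expensive_value(request):
        # remember a value between pytest runs
        value = request.config.cache.get("example/expensive", None)
        if value is None:
            value = 42  # pretend this took a long time to compute
            request.config.cache.set("example/expensive", value)
        return value

    # and the cache-backed command line options:
    #   pytest --lf            # rerun only the tests that failed last time
    #   pytest --cache-show    # inspect what's stored in .pytest_cache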

00:32:45 pytest-flask is next. That is also not surprising. I’m guessing it’s similar to pytest-django in that it helps you test Flask stuff, but I don’t think I’ve used it. I’m not really writing a lot of Flask.

00:32:59 I haven’t either, and I have written a lot of Flask, so I’ll have to check it out.

00:33:03 Okay. Are you using pytest to test your Flask apps?

00:33:06 Yeah. Usually there are kind of two approaches that work for testing Flask applications. One is to take the views separately and just call them, and the other one is to use what Flask has, kind of a fake server, where you can run an HTTP request in process and look at the response.

00:33:26 We do both of those approaches at Lyft. I assume pytest-flask makes it a little bit easier to set that up.
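Roughly what the fake-server style looks like with pytest-flask: you supply an app fixture and the plugin supplies client (this tiny app is made up for illustration):

    import pytest
    from flask import Flask

    @pytest.fixture
    def app():
        app = Flask(__name__)

        @app.route("/ping")
        def ping():
            return "pong"

        return app

    def test_ping(client):
        # client comes from pytest-flask and runs the request in process
        assert client.get("/ping").data == b"pong"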

00:33:32 pytest-benchmark is actually pretty cool. Have you used it?

00:33:37 I used it once, and then I decided that I was going to write my own benchmark suite instead.

00:33:45 It worked really well for what I was trying to do, but the output was not quite what I wanted, but it’s a pretty good tool for just writing a small amount of code that you want to spin a bunch of times and see how long it takes and compare that against other runs.

00:34:03 It makes benchmarking pretty easy if you’re trying to benchmark particular test code.
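A minimal example of the benchmark fixture, timing a stdlib call so it stays self-contained:

    def test_sorting_speed(benchmark):
        # benchmark() runs the callable many times and reports timing stats
        result = benchmark(sorted, list(range(1000, 0, -1)))
        assert result[0] == 1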

00:34:07 I think benchmarking is one of those things. I don’t think I’ve met a developer that hasn’t written their own benchmark code.

00:34:17 Oh my gosh, I should write a library for this, and maybe I should open source it. That’s why there are so many benchmark libs out there.

00:34:24 It’s like USB. We’re going to have another one now.

00:34:28 pytest-ordering is next.

00:34:31 There’s actually a couple of plugins that are related to this one, but this one in particular is about forcing tests to run in a specific order, if I recall correctly. There’s a couple of other plugins, like pytest-randomly, which will intentionally shuffle your tests every time they run to try and expose test pollution.

00:34:56 Test A runs before test B and it passes, but if you run them in the other order, it will fail, and it tries to shake out those situations by shuffling randomly.

00:35:08 We do use it, because it’s definitely good to be able to shuffle things around sometimes. One of the warnings I give people is, if you randomize your test suite and there are failures that weren’t there before, yes, you’ve found something that you need to fix, but don’t feel terrible about it. It happens to all of us. Yeah.

00:35:26 Global mutable state. It’s so easy to happen and it’s sometimes hard to fix, but at least you can kind of detect it using these types of plugins.
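For reference, pytest-ordering works through markers, and pytest-randomly shuffles can be replayed from the seed it prints (the test names are made up):

    import pytest

    @pytest.mark.run(order=1)
    def test_create_record():
        ...

    @pytest.mark.run(order=2)
    def test_update_record():
        ...

    # replay a particular pytest-randomly shuffle:
    #   pytest --randomly-seed=1234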

00:35:34 pytest-watch. I absolutely love it.

It’s great.

00:35:37 This one just watches all your tests and stuff, and if you change anything, it just reruns them. Yeah.

00:35:43 It’s especially good if you have a very fast test suite, so you can get that feedback almost instantly.
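The usage is about as small as it gets; ptw is the command the plugin installs, and anything after -- is handed to pytest:

    ptw                 # rerun the suite whenever a file changes
    ptw -- -x --lf      # pass pytest flags through, e.g. stop early, last-failed only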

00:35:49 I was using this almost constantly when I was writing the pytest book. The next one we’re going to cover is pytest-pythonpath.

00:36:00 This one just like raised my blood pressure.

00:36:03 Yeah. I was curious what your thoughts on this are.

00:36:08 I know I myself have had to fiddle with the Python path in tests a few times, and I always hate to admit it. It’s like you kind of need to sometimes. But it also doesn’t surprise me how popular this plugin is.

00:36:29 For people who aren’t familiar with what it is: if you run python -m pytest to run your tests, one of the things you get from that is Python will add the current directory that you’re in to the Python path, so that all the modules in that directory are findable. This is important for your tests. So if the tests include a module, should it come from here or not? One of the problems with that is, if you’re trying to write a package and test a package, you want to be able to test the installed package, not the current code. So there are some issues with that. I think that’s one of the reasons why pytest chose not to include the current directory in the Python path. But if you run pytest by itself, it does not include the current directory. I actually like that feature. It annoys some people. pytest-pythonpath, that plugin, does a couple of things. One of the things it does, and this is the advertised thing it does, is it allows you, in your config file, to define directories that should be included in your Python path.

00:37:41 The other side effect, which I don’t think was documented last time I checked, is that it adds the current directory to the Python path also.

00:37:52 But I guess this is why the src layout of Python packages is so popular.

00:37:58 The empty-string, current-directory path addition side effect is a super common pitfall, and it’s pretty easy to upload a broken package if you’re testing in a way that doesn’t reflect how your package would be installed.

00:38:14 Yeah. So the other way I’ve dealt with that is that tox allows you to set a working directory.

00:38:23 So I just make sure that I set tox to be in the test directory.

00:38:28 So I’m not in the parent of the test directory when I start, and at that point you can’t see the local source, so even if you were to run python -m pytest, you’d avoid that mistake.

00:38:39 I believe the setting is changedir, but yeah, I’ve used that as well.

00:38:44 We’re going to do a couple more, and then we might jump around. pytest-flake8: this allows you to just run flake8 against your tests. I don’t remember if it runs against your code also.

00:38:53 Yeah, it runs against your code also.
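For completeness, the plugin adds a single flag, which turns the flake8 check for each file into an extra test item:

    pytest --flake8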

00:38:57 I’m really confused why there’s a bunch of plugins like this in this list. I’m really confused why this is so popular.

00:39:05 I guess I get it. Like, you want to run one command and then you’re done. You don’t want to have to run pytest, run flake8, and then run whatever other tool you have.

00:39:14 It just seems really weird to me to mix up linting and testing into the same bucket, in the same way that pytest-benchmark also rubs me a little bit the wrong way, because it seems a little bit weird to mix benchmarking and testing at the same time. But at least with that one, I can kind of see some tests that could double as benchmarks. But this one, I just don’t really get it.

00:39:38 Do you use this plugin?

00:39:41 I’m actually thinking about it.

00:39:45 I’m going to look it up. Down the list, we’ve got pytest-flakes and pep8 and pylint and, what else, mypy.

00:39:58 I think that’s it so far in here that are, like, these linting things.

00:40:02 There is also pytest-black that I’ve seen as well.

00:40:06 Yeah, that would be surprising, because Black changes your code. I wouldn’t want it to change my code while I’m running tests. Maybe it doesn’t. Black has an option to not.

00:40:17 Yeah, it uses check mode, if I remember right.

00:40:19 Okay, well, we have some linting at work for our tests, and I’d like to have more going on, so that some of the things that we’re spending time on during code review shouldn’t be spent in code review. We should have linters to check this stuff for us.

00:40:33 Of course, we have computers that are very good at nitpicking.

00:40:37 Yes, but it is not part of everybody in my immediate work group’s normal workflow to run anything other than just running pytest.

00:40:47 But if we had our test suite that everybody’s working on, and some of these things like flake8 were already built in, then they would know they have to fix it, because while you’re coding, you’d see, oh, my test failed because of a flake8 problem or something.

00:41:02 But that might annoy people too.

00:41:04 Definitely will annoy people.

00:41:07 Okay, so when do you want to annoy people? Because you can either annoy people while they’re running their tests, or you can annoy people while they’re trying to check in their code.

00:41:17 Because a lot of people do run these sort of things with like commit hooks.

00:41:22 Do you run flake8 or anything like that?

00:41:24 I mean, I’m obviously biased because I’m an author of a Git hooks framework, so that’s usually when I try and run them. But yeah, I use flake8. I guess I’m also the current maintainer of flake8, so I’m a little bit biased there as well.

00:41:39 But yeah, I use flake8 on all my projects. I use a couple of other linters and code formatters as well, to try and hand-wave away these sorts of nitpick conversations during code review, so that people can focus their time on architectural changes and stuff like that.

00:41:57 The Git thing is pre-commit, right?

00:41:59 Yes.

00:42:00 Do you have everybody in your team running pre-commit then?

00:42:04 Yes. We actually introduced this recently at Lyft, and I think we have somewhere around 600 repositories that have adopted this as their linting framework, and it makes it really easy for developers to quickly get feedback about their code changes before they hit CI.

00:42:22 We also run it in CI to make sure that we’re still validating everything is correct and still catching bugs before production.

00:42:31 Okay. Do you feel like it’s better to have that catch things after you thought you were done with your changes and you’re trying to check them in? Is that the right time to be doing that?

00:42:41 I don’t know.

00:42:42 I think my opinion on the subject is I like to have this information as early as possible and as often as possible until I finalize the changeset. And so things like editor integration or Git hooks are a good way to get early feedback. Obviously, you’re not going to get it at test time like you would with this plugin, but I think as early as possible is good.

00:43:08 Okay, so let’s say I have, like, a failure and I have it on a pre-commit hook.

00:43:16 Will it just not allow me to check stuff in then or commit it?

00:43:20 So by default it will kick you out of that commit and have you change it before you commit it. However, there’s other modes where you can temporarily skip something or you can skip the whole set of checks entirely.

00:43:31 Okay, so let’s say I’m working on my own branch and I just want to make sure that I’ve committed stuff before I go home for the weekend, and okay, I see there are failures, but I want to push it, or not push it, but commit it to my branch anyway.

00:43:45 There are ways to get around it then at least.

00:43:47 Yeah.

00:43:47 Okay. Yeah.

00:43:47 Git has the built-in --no-verify, and in the framework called pre-commit, there’s a SKIP option which allows you to skip individual hooks.
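Concretely, the two escape hatches mentioned here look like this (the hook id is just an example):

    git commit --no-verify        # Git's built-in: skip all commit hooks
    SKIP=flake8 git commit        # pre-commit: skip just the named hook id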

00:43:56 Okay, cool. All right, now we’re kind of running along in the list. Coming up, there’s randomly and repeat. We sort of already covered randomly; it’s a good thing to use. What does repeat do?

00:44:10 Repeat is a really good one. I use this one a lot. So actually, I’m working on my own text editor right now, which is a whole different story, and I have a test suite for it that actually spins up the text editor, interacts with it, and runs a sort of Selenium-style test against it. And some of those tests end up being a little bit flaky, and pytest-repeat has been really useful for finding those tests and then running them 100 times in a row and checking if they still fail after I’ve, quote, unquote, fixed the flakiness. Yeah.

00:44:46 It basically does one thing. It allows you to repeat tests a bunch of times.

00:44:50 Okay. So rerunfailures only reruns the failed ones until they pass, but this one would just sort of run the same one whether it passed or failed, right?

00:45:00 Yes. The way it works is it sets a parametrize decorator on every test that just has an integer that goes from some number to some other number. And so any test you would have just gets repeated that number of times.
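The flag and marker look like this; combining --count with -x is the usual flake-hunting trick (the test path and counts are placeholders):

    pytest --count=100 tests/test_editor.py      # repeat every collected test 100 times
    pytest --count=100 -x tests/test_editor.py   # stop as soon as one repetition fails

    # or mark a single test:
    import pytest

    @pytest.mark.repeat(10)
    def test_flaky_interaction():
        ...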

00:45:13 Okay, cool. Now, in the rest of our list, is there anything you wanted to cover that we haven’t?

00:45:20 There’s one other one that I think is really cool. Well, it’s really cool if you’re working on Selenium tests, which is pytest-selenium. But I think that’s the only other one I had from this list. I guess freezegun, that one’s good, too.

00:45:34 So freezegun, like, fixes the time, right?

00:45:37 Yeah. It lets you have specific fixtures that will set the time and date to a particular point in time, so that you can test particularly gnarly scenarios. Like, we use this for the old problem of daylight saving time and leap second bugs. You can actually set the time to those particular frames and see how your code reacts to them.
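A tiny sketch using freezegun directly (if the list entry is the pytest-freezegun plugin, it wraps this same library); the date is arbitrary:

    from datetime import date
    from freezegun import freeze_time

    @freeze_time("2021-03-14")
    def test_today_is_frozen():
        # inside the decorated test, "today" is always the frozen date
        assert date.today() == date(2021, 3, 14)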

00:46:02 Yeah. And plus, it’s got a great name: freezegun. Yeah, I think that’s a lot of stuff that we’ve covered. I don’t remember, I wasn’t keeping track of how many, the top N pytest plugins. I would love feedback from other people. I’d like to hear which plugins are your favorites, which ones are good things that maybe we should pass along, things that you’re surprised are not on the list because you love them so much. Maybe other people need to hear about them, too, so let us know. Thanks so much for covering these with me, Anthony. Yeah.

00:46:33 Happy to be on.

00:46:37 Thank you, Anthony. And thank you, Patreon supporters, for continuing to support the show.

00:46:41 Join them by going to testandcode.com/support.

00:46:43 Thank you to Oxylabs and Springboard for sponsoring this episode. Check them out with the links in the show notes at testandcode.com. The show notes also have the complete list of the 28 plugins we covered.

00:46:56 So you’ll definitely want to check that out. That’s all for now.

00:46:59 Go out and test something. Or not. Maybe try a new plugin to supercharge your testing.