Python’s unittest module, advanced features, comparison to pytest, future features, and more.


Transcript for episode 19 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs are welcome if you want to help fix any errors.


[music] Welcome to Test and Code, a podcast about software development and software testing. Episode 19. In this episode I interview Robert Collins, current core maintainer for Python’s unittest module. I’ve been re-studying unittest recently, and I mostly wanted to ask Robert a bunch of clarifying questions. This is an intermediate to advanced discussion of unittest. Many great features of unittest go by quickly in this talk, so please let me know if there’s something you’d like me to cover in more depth, as a blog post or future episode. A few of the topics we cover are: how Robert became the maintainer of unittest, unittest 2 as a rolling backport of unittest, test parameterization with subtests and testscenarios, which extension to unittest most closely resembles pytest fixtures, comparing pytest and unittest, whether unittest will ever get assert rewriting, and future changes to unittest. And we cover more things. A little note about audio quality. There’s a lot of great information in this interview. There’s also a weird rattle noise in part of it. I’ve done my best to reduce it in post-processing, but honestly, I’m not a guru at audio editing and removing unwanted sounds. I think the information is well worth putting up with this annoyance, but I want to apologize anyway. If you know how to fix things like this, contact me. Special thanks to my wonderful Patreon supporters. Visit patreon.com/testpodcast and you could become one too. If you are already a supporter, thank you. If you are not supporting, please consider it. We cover a lot of topics in the show, and I’m including links in the show notes at pythontesting.net/19. [music]

[Brian]: Hey Robert! [Robert]: Hey Brian, how are you going? [Brian]: I’m doing OK. I really appreciate you coming on. Would you introduce yourself to my listeners, and tell me a little bit about your history and background? [Robert]: So, my name’s Robert Collins and I’ve been a test-driven advocate, and a testing advocate in a more general sense, for about sixteen years now, and, I dunno, wherever I go I end up working on tooling related to or supporting testing, just as part and parcel of making software projects work. It seems so fundamental. These days it’s fantastic, because nearly everybody has a baseline of, let’s say, we’re going to have tests, and we’re going to have unit tests, but I remember the bad old days where you had entire projects running mission-critical internet infrastructure that did not have a test suite. Or that had one that was broken, which is almost worse. [Brian]: So, what types of projects do you normally work on? Are they internet-related, or? [Robert]: You know, I can’t think of a project that hasn’t been internet-related in some way, in the last, you know, however long. Most recently I’ve been working at HP on OpenStack, and that’s involved OpenStack itself, as well as working upstream of OpenStack in the ecosystem that OpenStack development depends upon. So the Python ecosystem, and related testing libraries and tools for that, as well as the packaging ecosystem, so pip, which is obviously used by a huge number of people, and it’s been fun getting changes into that. [Brian]: Oh, so you’ve contributed things to pip as well? [Robert]: Yes, so my fifteen minutes of internet fame were when I added the wheel cache to pip. So if you download say numpy or something, and compile it to install it into a virtualenv, and then you make another virtualenv five minutes later, if you are using pip 6, it will compile it every time and you will be going “my gosh, Python is so slow! It takes five minutes to install what I’m using!”. And if you use pip 7 with the wheel cache, or maybe it’s 7.1, I forget the exact number, then when you do that the first time, it compiles it and builds a binary wheel, and keeps a copy of that on your local disk, and for the next virtualenv you make, it’s a one-second operation to unpack the wheel onto disk and, all of a sudden, things feel nice and fast and snappy. [Brian]: Does that work even if I’ve got, like, a local pip server? [Robert]: Yeah, because it’s a client-side cache. So as long as you are using the same user account, because the cache is in ~/.cache, as long as you are using the same user account and you’re not cleaning it out every time or anything like that, then yeah, absolutely. [Brian]: OK. That’s cool. [Robert]: Yeah. [Brian]: It’s on my to-do list to set up a pip server for our work group. [Robert]: Yeah, something like devpi can be really quite useful. And you can upload binary wheels back to devpi. There’s the new thing, the manylinux spec, that was done, which pip 8.1, I think it is, will honor, and that will let you have wheels up on PyPI, so this binary wheel cache becomes a bit less important. So, if you’ve got stuff that can’t be cached for some reason, or you’ve got lots of different Linuxes, then building a wheel to that spec and uploading it might be an advantageous thing to do. [Brian]: OK. Now I’ve got my wheels turning in my head. [laughs] Didn’t even mean that to be a pun.
But what I really wanted to get you on to talk about was unittest and related things. [05.39] So, how did you come to be part of the unittest, I guess, core developer, or only developer, I’m not sure. [Robert]: Yeah so I mean, unittest in Python has got a pretty long history. It’s had a bunch of different maintainers over the years, who have stepped up and said look, I’ll look after it for a while. I got into it from Michael Foord, who was the active maintainer at the time, and he and I collaborated around some patches or ideas. I’d been maintaining things outside of the Python core, things like testtools and subunit, and fixtures and so on, for quite a long time. And we had a plan with testtools, which hasn’t really come to fruition, but we have taken some steps along the path. The plan was to prototype improvements to unittest, and then submit them as patches to unittest, to make unittest better, and, as I said, some of the things we’ve done, like the load_tests protocol, is work that came out of testtools. So, I don’t know whether I should go shallow or deep, I’ll just go deep, see where that leads us. [Brian]: Sure. [Robert]: A test suite is, in memory, a bunch of test case objects that have been parameterized with a function that’s going to run, and have got their class wrapped around them, which gives you hooks into setup and teardown, and helper methods. And they are arranged in a container, a test suite. And the container can have some behaviours itself, so class setup and module setup are implemented as behaviours on the suite object, not on the individual test case. [Brian]: OK. [Robert]: And this is because, if you’ve got thirty-five separate test case objects, how do they know to collaborate, to do class setup once at the beginning and class teardown once at the end? And the answer is that the suite, which owns all of those tests, looks at the class of the next test it’s going to run, to decide if it’s time to run the class teardown. And similarly, to decide whether to run the module teardown. But this is shoved in the core, so if you want to add another suite with its own behaviour, you’ve got no way of doing that from a command line. In 2.6, this was the case. In 2.7 and up, you’ve got a thing called the load_tests protocol, which is a hook unittest will look for –the standard loader will look for this when it loads the tests from a package or a module, and it invokes it with the loader, the tests it’s found so far, and I forget what the third parameter is, I think it’s the pattern the user gave. Anyway, you can use that to introspect the tests, and you can filter out tests that you shouldn’t run on a particular platform, for example, if you didn’t want to mark them as skipped, or you can decorate the tests in any way that makes sense. So you can do an arbitrary transformation to your test suite. [Brian]: Hmm. I gotta investigate this a bit more. [Robert]: Yeah. One of the things you can do, for instance, is, imagine that you had a bunch of declarative tests that you’re going to write as, say, YAML files.
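
[Editor’s note: here is a minimal sketch of the load_tests hook Robert just described, placed in a test module or a package’s __init__.py. The requires_posix attribute used for the platform filtering is hypothetical; Robert’s YAML idea continues below.]

    import sys
    import unittest

    def load_tests(loader, standard_tests, pattern):
        # The standard loader calls this hook with itself, the tests it
        # has found so far, and the discovery pattern, and then runs
        # whatever suite this returns.
        suite = unittest.TestSuite()
        for test in iter_tests(standard_tests):
            # Drop tests flagged (via a hypothetical attribute) as not
            # applicable on this platform, without marking them skipped.
            if getattr(test, "requires_posix", False) and sys.platform == "win32":
                continue
            suite.addTest(test)
        return suite

    def iter_tests(suite):
        # Recursively flatten nested TestSuites into individual cases.
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                yield from iter_tests(item)
            else:
                yield item
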
You can shove them in a directory, and, as long as somewhere –it doesn’t even have to be above it, just as long as somewhere in your test discovery path –there is a load_tests implementation that knows how to look for those files, and transform them into objects that can be executed like unit tests, then that will be perfectly compatible with any runner that supports the standard unittest contract of creating a loader, calling loader discover, and then executing the result. So this gives you a huge extension point. And we put that together –the history of that is that we put it into Bazaar, because we wanted to be able to transform a bunch of tests in Bazaar in a fairly systematic way, and we wanted to figure out how to also be compatible with standard runners. So we looked around, and there were a bunch of ad hoc ways: I think Zope had an idiom of having a test suite method that the runner would look for, and Twisted had their own thing, and we were like, OK, so there’s all these different things, but they weren’t sufficiently general, or they were sufficiently general but they didn’t take any inputs so you couldn’t actually make them fast, you know, that kind of thing. So we came up with a good thing, we put it into testtools, we used it back in Bazaar, we made sure it worked, and then I went and spoke upstream and Michael and I chatted. He’s like, yeah, sure, how would that work? And I said look, this is the exact thing we do, and he said OK, makes sense. He tweaked it a little, to make more sense for the discovery logic in unittest 2, and then it went into unittest 2 and into the Python unittest module as well. [Brian]: OK [Robert]: [10.19] Following on from that, Michael ended up working on a huge amount of Go stuff for Canonical, and having very little time for unittest. So, I said, look, you know, do you need some help maintaining unittest, and he said yeah, that’d be great, here, have the commit bit. [Brian]: OK [Robert]: That’s how I became a part of unittest’s core. [Brian]: So, is there –are there –how many people are committing to unittest core, do you know? [Robert]: [10.43] On a day-to-day basis, probably about 0.001 or something. [laughs] It doesn’t change very often. [Brian]: So I have a question about unittest 2, that’s a backport, right? Is that kept up to date? Is it up to 3.5, for instance? [Robert]: Yeah yeah. So, when I started maintaining it, I said, look, you know, you’ve got huge maintenance costs at the moment. Unittest 2 at that point was not a rolling backport, it was quite old, so it had frozen, like. The genesis was that unittest 2 was a proof of concept of a rearrangement of unittest and some extra features, in a separate SVN repository. And then it got dumped back into Python, in the 2.7 time frame. And then it evolved inside the standard library as people encountered bugs, and some of those fixes got backported to unittest 2, the external module, but it was pretty inconsistent. And then eventually it stopped getting any ports and the standard library one just kept evolving, and anyone who was still running 2.7 was facing bugs, because the fixes had gone into 3.3 and 3.4 and they weren’t going back to 2.7, and they weren’t going back to the external module. So there was no path to getting those fixes. [Brian]: Yeah.
[Robert]: So I spoke to Mike, I said look, you know, why don’t we go a step further than just having me help with the standard library one, how about we pick this one up and make it a fully rolling backport, we just port everything into it and we keep them in sync, and you don’t need to think about it: if you want the latest unittest stuff you grab unittest 2, and if you are happy with the standard library, what you see is what you get, you can just use that and everything will be fine, whichever way you go. And so we did that. There’s now –I wrote some automation, so I can just take the commits out of Mercurial, filter them down to the unittest tree, turn them into patches and apply them to the unittest 2 repository. And that makes it really, really easy: just run that script, make sure the tests pass on all the versions of Python that we want it to work on, and get on with doing other things. So it’s a pretty fast process. [Brian]: [12.57] So would you recommend anybody that’s not using the latest Python 3.x version to use unittest 2 instead of whatever’s built into their distribution? [Robert]: Absolutely. And I’ve had some contentious discussions with some distro folk about that. So, my view is that unittest 2 is the latest unittest for any version of Python. If you’re on PyPy or Jython or IronPython or Python 2.6 or whatever, then you should use that, because it’s got all of the fixes. It keeps compatibility with Python’s master branch, not the latest release, so you will actually be ahead, much of the time, of any released Python. The downside of this is you may have the occasional bug, where we commit a bug or we make a bad decision or whatever, but we’re not in the business of doing API breaks for unittest, so that’s going to be an accident, and, you know, we’ll fix it. [Brian]: There’s all these extra packages like testtools and fixtures, and there’s a handful of others, that you maintain, or at least have. How do you decide what should go into core unittest and what should remain in testtools or fixtures or something else? [Robert]: [14.22] So unittest isn’t a good place to experiment. The primary goal of unittest is to be the test framework that the standard library is tested with. Everything else is like, hey, that’s great, but there’s lots of external testing libraries that people have a great deal of pleasure using; things like pytest and nose have been very, very popular. Both of those, I think, are more popular than testtools, for example. So, the general rule of thumb I’ve got is, if there’s absolutely no doubt that this is really great, it should go into unittest. If it’s a change to mock, it just goes into mock and the standard library, because mock is a bit of a special snowflake. [Brian]: Yeah, I wanted to ask about that too. Why is mock a sub-package of unittest? [Robert]: I don’t know, that decision happened before I really got involved in it. But it’s like –mock plays games with the very heart of Python objects. It monkeys around with descriptors on classes when you’re patching in a method, and things like that, so it’s really not in a position where we can say hey, it’s got a super stable API. It’s not that anyone’s ever gonna try and break it, but its job is to pretend to be any arbitrary object, so, you know, there’s no real need to have a separate experimentation place for it, in my view. If you know you’re using it, you know you’re using something with no guard rails, so. [Brian]: OK, go for it.
[Robert]: So yeah, like, essentially, testtools is where we try out new things, new ways of structuring the objects within unittest or new ways of writing tests, and unittest is where we deliver production-ready, robust, hard-to-make-mistakes code. And what I want to do is take a bunch of the stuff we have experimented with in testtools, that we are convinced are good things, and put them into unittest. I haven’t had the time to do that. I may have now, with the new job; I may find that the way I split my time up is different, that could be interesting. [Brian]: OK. Well, let’s hope. You’ll have to at some point forward me your manager’s email and I’ll send him pleading requests. [Robert]: Can do. [Brian]: One of the things that –actually I’m really excited that you mentioned that unittest 2 is a rolling backport. I know that mock is. [Robert]: Yep. [Brian]: Because I didn’t know that, and so if people ask me if they can use different parts of unittest, you know, I have to say, well, if you’re using version 3.3 you can use this, if you’re using 3.5… [Robert]: Yeah. [Brian]: …you can use extra things. [Robert]: No one wants to do that, that’s too much thinking. [Brian]: Yeah, and I know that that’s one of the reasons why the requests library is kept out of the Python core, for just that same reason, but I guess it’s too late for mock and unittest, they’re already there. But it makes sense, like you said, that the core itself needs to have a way to test itself, so why not put whatever you’re gonna use right in there, that makes total sense. Now, more and more I’m exploring some of the extra things like subtests and, what, testscenarios? Subtests seem to be, like, invisible; there’s only like three or four blog posts that I’ve found that even mention them. [Robert]: Do you mean subtests or subunit, because they are totally different things. [Brian]: Yeah, subtests. [Robert]: Subtests, right. So subtests got added into the standard library, they’re available in unittest 2 because it’s a backport, and they are a way of providing exposure to the user of a bunch of different cases within a single test. So, for example, if you have something that’s testing whether a function correctly returns this-is-prime or not-prime, you could use the with self.subTest syntax, and then give it a whole list of numbers, all of which you’re gonna check are prime, and then do it again with a whole list that you’re gonna say none of these should be prime. And it will give you a richer view of what tests were actually made, like how many assertions you made, and, not that it summarizes them, but if you’ve got verbose mode you’ll see each one as a separate item: three was tried and five was tried and seven was tried. And it also generates unique test IDs, so if you’ve got something that’s not a trivial example like that, that’s somewhat bigger, you are able to actually report in something like Jenkins the specific case that failed, rather than just “one of the hundreds or thousands of cases that we generated failed”. Testscenarios is a much earlier approach to doing much the same thing. I put testscenarios together when we wanted to do interface testing.
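
[Editor’s note: here is a minimal sketch of the prime-number subtests example above, using unittest’s with self.subTest syntax; is_prime is just a toy implementation for the example. Robert’s explanation of testscenarios continues below.]

    import unittest

    def is_prime(n):
        # Toy implementation, just for the example.
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    class TestPrimes(unittest.TestCase):
        def test_primes(self):
            for n in [2, 3, 5, 7, 11]:
                # Each iteration is reported as its own subtest, so a
                # failure names the exact value that failed.
                with self.subTest(n=n):
                    self.assertTrue(is_prime(n))

        def test_non_primes(self):
            for n in [1, 4, 6, 8, 9]:
                with self.subTest(n=n):
                    self.assertFalse(is_prime(n))
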
So we wanted to have four or five things that implemented the same interface, you know, might have been pretty big, ten, fifteen, twenty methods, and it was data storage, so, you know, you’re pushing a lot of data in, a lot of data out, and making sure it comes back in the right form, and that transactions are obeyed, and all those sorts of things. But we didn’t want to write the same test five times with a different type at the top of it. We didn’t want to use subclasses, because subclasses are very, very rigid. It’s very hard to have something that’s only present in one subclass when you’re using subclasses to achieve this. And further, they’re pretty one-dimensional. I mean, yes, you can use multiple inheritance, but most people’s brains explode pretty rapidly when you start doing that. And so if you imagine that you’ve actually got two or three interfaces that end up sitting on the same facade, and you want to test that they’re interacting properly, you know, you’re looking at a cross product of things you’re testing for, or ways in which the tests might vary. Subclassing is not a clear way of expressing that. So, testscenarios says we’ll create separate test objects at runtime, and we’ll provide an attribute on the test case that will give you the implementation you’re exercising, or the scenario that you want to be poking at on this branch of your set of things you’re comparing. So testscenarios is different from subtests in a couple of ways. One, testscenarios multiplies out the test objects themselves. Subtests work within a single test to emit individual subtest results. Testscenarios asks for a human name for each point on each dimension. So if you’re varying in two dimensions, then you’ll have a human name in dimension one and a human name in dimension two, and it makes you a nice ID with first name, comma, second name, and it puts them in brackets, and puts that on the end of your test case method name. Whereas subtests are implicit: they just say, oh, you’ve got a set of these things, these are the values, so they’ll just show the value, and you have to make sure the string representation of the value is something that makes sense to a human looking at it later. Beyond that, the ability to represent things of equivalent complexity with them is kind of similar. I guess the last point would be that scenarios works at the class level, so you parameterize a class, and you get all of the tests in that class multiplied out by the scenarios you’ve got. Subtests work within a single test, so if you’ve got a class with thirty tests and all of them have to have the same parameterization, with subtests you are going to have to have a function that will give you the right generator parameters for the subtests, and you’re gonna have to call that from each one of those tests, so it’s going to be more manual to describe it. [Brian]: [22.21] OK, I can see that both of those are solving a parameterization problem, but –I mean, it would make sense, if you need something like that, to take a look at both and see which one works best for your particular situation then? [Robert]: You could even potentially use both in the same project. Or even on the same class; like, they should cooperate nicely. [Brian]: [laughs] OK, I’m gonna have to try that, just to see. So, that’ll be fun. I’m realising that I am excited to get you on the phone and ask you a lot of these deep questions, but that I haven’t even, on the podcast, discussed unittest at all.
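
[Editor’s note: here is a rough sketch of the class-level multiplication Robert describes, assuming the testscenarios package and its TestWithScenarios mixin; the commented-out second backend is hypothetical.]

    import unittest
    from testscenarios import TestWithScenarios

    class TestKeyValueStores(TestWithScenarios, unittest.TestCase):
        # Each scenario is a (human name, attributes dict) pair; every
        # test in the class is multiplied out once per scenario, with
        # the scenario name appended to the test id in brackets.
        scenarios = [
            ("dict", {"make_store": dict}),
            # ("redis", {"make_store": make_redis_store}),  # hypothetical
        ]

        def test_set_then_get(self):
            store = self.make_store()
            store["key"] = "value"
            self.assertEqual(store["key"], "value")
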
Just in the last, say, month or two, I’ve been trying to explore unittest more, to understand it better, and I have a better appreciation for the project. However, I mean, there are really big differences between, in particular, unittest and pytest. [Robert]: Yep. Sure are. [Brian]: So, do you have any opinions or thoughts about the differences that you wanna share, or just, they’re both good options? [Robert]: [23.24] Well, they’re certainly both good options. Pytest has the benefit of coming along after Python had kind of really developed a feeling for what Pythonic is. And one of the things I love about pytest is, it’s got this very lean feeling, where you don’t have this defect that unittest does. So unittest has this thing where this one object, TestCase, has got three different APIs. It’s got the API that you use to run the test, so that’s like .run and .debug; it’s got the API that you use to describe a test, so assert this, assert that and so on; and then it’s got the API that you as a user are putting on it for all your own helpers and everything else. So it’s got three different masters, and setup and teardown are even more awkward, in that they sit half in user space and half in framework space. If you look at the guts of run, replacing those methods is a template-method kind of implementation. But you might up-call and you might not up-call, and you don’t actually get told by the framework whether or not you’ve done the right thing there. For pytest, none of that exists. You don’t have this multi-purpose object that’s sitting there and serving all these different masters. So one of the things I’d like to do, which would be pretty disruptive, but I’d really like to do it, is to sort that out, to have just a single goal for each type within the unittest set of objects. So that you get away from having this tension where you can accidentally break unittest by writing a method on your own class. That doesn’t make any sense really, does it? If I’m writing my class, how can I break this thing over there? So pytest doesn’t have any of that baggage. [25.28] On the other hand, pytest does some stuff I’m not super comfortable with. The way it uses a regular assert statement and introspects it. Holger and I don’t see eye to eye on this; he’s like, it’s cool, we’ve made it much better than it used to be, and I’m like, yeah, I’m still not particularly sanguine about it. [Brian]: Yeah, so the answer to my question of will unittest ever get assert rewriting is: not any time soon. [Robert]: I’m not gonna put it in. If someone came along and said here is a really clean implementation, it’s not a huge amount of black magic sitting within unittest, it’s just a function call to a thing, a helper, that’s in the standard library, maybe in the inspect module, or the ast module or something like that, like, I dunno, three lines of code or something, and it’s not mandatory, people can use it if they want to, not use it if they don’t want to, you know, I wouldn’t object to it going in. If the maintenance overhead of it is going to be low, the chance of it breaking and needing to be maintained is going to be low, and the potential benefit that people will like it is high: great, let’s do that. On the other hand, I don’t think assert rewriting is a particularly usable way to write asserts.
I haven’t had a good experience, and I’ve been using pytest for some projects –like when I’m working on pip, that’s all pytest –and you end up doing horrible multi-line string things at the end of an assert to squash in the error message you want, rather than being able to delegate that out to something that you can share between lots of different asserts in a much easier fashion. So, I very much like the Hamcrest-style matcher asserts, which we put into testtools and which we do want to put into unittest, and there’s general consensus from a bunch of folk in the Python standard library space that this would be a good thing to do. We’d like to get matchers into the standard library. That’s a matter of time. Also, one of the things that testtools does that unittest doesn’t do at the moment is that testtools can attach pretty arbitrary data to a test result. So for example, say you’ve got a test that’s testing the data storage format for a database. You could take that database directory, zip it up, and attach it to your test output, and that would get represented as a MIME-typed object, so you know it’s binary, you know how long it is, you can ship that around inside your process, and you can ship it across processes as well, if you’re using some testing protocol that can carry that sort of data. And there’s no facility for that in unittest itself. Unittest only knows about backtraces. The ability to have a rich assert means your assert may be a thing that generates rich data about what’s gone wrong. So, for those two kinds of things, if we don’t want to introduce an API and then revise it shortly after, we have to solve that other one first, which is kind of the next one on my to-do list. [Brian]: OK. So there’s definitely improvements coming in the future then? [Robert]: There’s definitely improvements I’d like to do. Getting the time to do it is probably the big question. [Brian]: Yeah, well, that makes sense. Unittest, for one, shares a history with the other xUnit-style frameworks. That’s a benefit for people coming from other languages, of course, if they’re used to those types of styles. But is that ever a drawback, that you wish it wasn’t sharing all of this heritage with, say, JUnit and others? [Robert]: No. At the heart of it, the problem with the xUnit structure is that it favours inheritance over composition, rather than favouring composition over inheritance. So there’s no sort of direct way to write composed unit tests, rather than inherited unit tests, in unittest itself. Now, things like fixtures and matchers will respectively let you describe things that you want for your test, and describe rich assertions that you want to take place, without requiring a class hierarchy related to your tests. You can get away from that, you can start writing small targeted things and composing them the way you want them to be composed. So you can do that in collaboration with the unittest framework, but it doesn’t lend itself to that. So that’s kind of the biggest downside to most of the xUnit structures out there.
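
[Editor’s note: here is a minimal sketch of the Hamcrest-style matcher asserts mentioned above, using testtools’ assertThat and a few of its stock matchers.]

    from testtools import TestCase
    from testtools.matchers import Contains, Equals, MatchesAll

    class TestResponse(TestCase):
        def test_response(self):
            response = {"status": 200, "body": "hello world"}
            # Matchers are plain objects, so they can be composed and
            # shared between many asserts, and a mismatch produces a
            # structured description rather than a hand-built string.
            self.assertThat(response["status"], Equals(200))
            self.assertThat(response["body"],
                            MatchesAll(Contains("hello"), Contains("world")))
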
That said, you know, it created the unit testing revolution that we’ve got today, it made it manageable and approachable, and it’s very simple to sit down and write an xUnit framework –there’s not a lot to it, because it’s got a small number of moving parts and you don’t need any particular language features to do it. So on the one hand it gives, on the other hand it takes away. [Brian]: OK. Pytest has a lot of hooks that you can put in place to extend it, whereas the main way to extend unittest is to subclass the different parts. Is there a reason to not put hooks into different places? Or is the recommendation just to subclass? [Robert]: [30.47] So, the current hooks depend on this thing called substitutability. If you put something in place that has the same behaviour, it’ll still work, within Python’s unittest anyway, because it’s a duck-typing system. So you don’t have to subclass, but the easy path is to subclass, so I guess that’s what I’m trying to get at. Is there a reason to have, or to avoid, other hooks? No, look, you can certainly put other hooks in. I wouldn’t want to have hooks that are global in scope, though. So I’d want to have things that hook into the lifecycle of a test program and that don’t prohibit you from using the API within a larger program. So right now you can have two different test suites in two different unittest threads and you can run them, and there’s only a small number of things that won’t work properly. [Brian]: OK [Robert]: [31.43] The things that won’t work properly are things like the standard out capturing, which depends on monkey patching… [Brian]: Oh yeah [Robert]: …standard out and standard error. Obviously, those things aren’t going to work terribly well, because they’re globals, but nothing else is global. So, the loader behaviour isn’t global, it’s parameterised by parameters you give to the loader when you create it. The behaviour of individual classes is parameterised by putting attributes on the class. So all of these things are local in scope, and that’s a good thing. What you generally want to do when you want hooks that will let you do other things is apply something semi-globally: you’re gonna apply it over the entire context of a test suite, but you don’t want it to actually be global in the Python process. That’s the distinction I’m trying to draw. [Brian]: OK [Robert]: So I want to retain the ability to use unittest as a good library, to be a well-behaved library citizen, in anything that we do going forward. That said, I am keen to have some more hooks that give a really clearly defined lifecycle; I just need to find a way of doing it following this pattern. [Brian]: OK, that makes sense. You’ve used pytest, then, it sounds like? [Robert]: Yeah, absolutely. [Brian]: The pytest fixture model is very different from setup and teardown. It does feel different when you’re using it. I was just taking a cursory look at the fixtures package; there’s testresources and there’s testtools.fixtures –are those all sort of the same thing? And do they relate in any way to pytest-style fixtures? [Robert]: There’s also Chris Withers’ testfixtures, and there’s a bunch more out there. So, the history of the ones you mentioned: they are all ones that I’ve spun up. Testtools.fixtures is just the fixtures that testtools itself has. Fixtures is a standalone library that defines the fixtures contract. And a fixture is essentially a super context manager.
[Brian]: OK [Robert]: So, a standard context manager, you can enter it, you can exit it, you can enter it again, you can exit it again, and that’s really useful. So if you think of a context manager that gives you a working PostgreSQL database, that’s a good thing… [Brian]: Yeah [Robert]: …for testing, right, you just go “with my database”, do some stuff. But from a testing perspective, you often want a bit more. One of the things that you want in a big test suite is, maybe you want to reset to a blank slate, but you don’t want to get rid of everything. So if you think about a PostgreSQL database, maybe half of your setup time is initialising the server and getting the process running, and then a small fraction of it is actually creating the database you’re gonna do this test in. So, one way of structuring it would be to have two separate fixtures, one for the server, and one for the database; when you exit the database one, you can enter it again to get a new one, but it depends on that server still running. And that would be OK, but in actual fact you can get another performance optimization by not throwing away a test database, and instead resetting a bunch of the internal parameters. This is what happens if you talk to a database administrator and say “please make my test suite fast”. And it’s fantastic, but you’re not actually throwing away that test database. So, fixtures introduces a reset concept, where you can say I want to start over, but I don’t want to tear you down and bring you back up again. So you can take a shortcut, you can be faster, if that’s possible for you. [Brian]: OK. [Robert]: And you also want the ability, if something goes wrong, and you’ve got multiple –like a graph of things that you’re using –it’s nice to be able to report on all the things that went wrong, not just the first one which required some different –so there’s a bunch of little stuff like that. [Brian]: So far that sounds a lot like the pytest fixture model. [Robert]: [35.12] So, I believe that pytest fixtures were inspired by the testtools fixtures. [Brian]: OK. [Robert]: But not derived from them. So they said hey, that’s a good idea, and then did something that worked well in their context. And I think that’s completely sensible. [Brian]: One of the benefits of coming second. [Robert]: Yeah, absolutely. You know, look, if I can sit down and say hey, pytest has got a whole bunch of wonderful stuff and some of those things were based on stuff I did, that’s cool, you know. Everyone’s winning. [Brian]: Whether an exception is an assertion failure or any other type of exception really matters in unittest, and it does not matter in pytest. [Robert]: [35.52] Yeah, so I mean, one of the things there is: you should never use teardown. You should never ever use teardown. You should use cleanups. [Brian]: Yeah, I agree. Because you can’t really do a bunch of things in teardown and have them all happen. [Robert]: So there’s that, but there’s also that if setup fails, because of an exception, any class of exception –if setup fails, teardown doesn’t run at all. [Brian]: Right. [Robert]: But cleanups do. [Brian]: Oh, they do? [Robert]: Yes. [Brian]: Cleanups will run even if the setup fails. [Robert]: Yep. [Brian]: Ah, OK. So you’d have to add the cleanup at the point where you know that there’s something to clean up. [Robert]: Yeah. [Brian]: Yep, OK. [Robert]: I mean, that’s the best practice.
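
[Editor’s note: here is a rough sketch of the reset-and-cleanup pattern described above, assuming the fixtures package; start_test_server and truncate_all_tables are hypothetical helpers.]

    import fixtures

    class DatabaseFixture(fixtures.Fixture):
        def _setUp(self):
            # Register each cleanup as soon as its resource exists, so
            # it runs even if the rest of setup fails part-way through.
            self.server = start_test_server()          # hypothetical helper
            self.addCleanup(self.server.stop)
            self.database = self.server.create_db()

        def reset(self):
            # Cheaper than tearing down and setting up again: wipe the
            # data but keep the server process and the database alive.
            self.database.truncate_all_tables()        # hypothetical helper

With testtools, self.useFixture(DatabaseFixture()) sets the fixture up and schedules its cleanUp automatically; plain unittest can do much the same with fixture.setUp() followed by self.addCleanup(fixture.cleanUp).]
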
I mean, arguably Python can be interrupted between any two bits of bytecode, right, by control-C, so generally, you’d add the cleanup right before you start using the resource. Say you are going to make a temporary directory. That one’s really hard to do safely. It’s almost impossible to not leak a temporary directory in Python. I need to do a blog post on this, how ridiculously hard it is to do correctly. It’s very easy to not leak it in the common case, but making sure someone can hit control-C between any two bits of bytecode, that’s the hard thing. [Brian]: Yeah, yeah, definitely. So, I guess, the other bit was, I really hate the name unittest, because people believe it’s just for unit testing. And it works just fine for all levels of testing. [Robert]: Yep, absolutely agree. And, I mean, “unit test” isn’t even a well-defined thing. Some people will tell you it only tests a single bit of code, but then they are using classes from everywhere that have dependencies tangled throughout their whole code base, and the other side of the program can cause it to fail, so it’s not really testing just that one thing. And the other side can say theirs is, because “I’ve mocked everything out”, and then you go, so, really, you’ve got no idea if your code works or not, because you’ve got mocks that don’t tell you anything about what’s happening. [Brian]: Yeah. [Robert]: You know, they make the assumption that that thing over there hasn’t changed, how – [Brian]: Yeah, that’s a nightmare. I don’t even go there anymore, cos I think people that believe they can mock everything and have a workable system are just smoking something. [38.21] Lots to think about here. I’m gonna go ahead and just assume this is part one; I’m definitely gonna have to get you on again. [Robert]: [laughs] Sure thing. [Brian]: Pick your brain. Unittest is just in the standard library, but how about you? Is there a place for people to find out more about you, or get a hold of you if they need to? [Robert]: Err, the Testing In Python list is probably the best place to grab hold of me for things about Python testing. I’m also on the Python testing IRC channel. [Brian]: OK. [Robert]: I mean, if it’s something to do with a specific project, talking in that project’s forum is usually the rule of thumb with open source, but I try to be pretty approachable. My IRC nickname is lifeless, so you can get me there, or @rbtcollins on Twitter, and I’m happy to talk, you know, about just about anything. [Brian]: OK. Before we go off the virtual air, anything you want to cover that we haven’t already? [Robert]: So, I think, no. Like, go out and write tests, that’s an incredibly useful thing to do. Probably the only thing I’d say is, you know, don’t be afraid to fix bugs in unittest. It’s not static, we take patches. And if they don’t get responded to quickly on the Python bug tracker, come and ping me. I get busy and I don’t always go and look at it as often as I should, and I’m happy to be reminded that there’s something there that needs a review. [Brian]: Alright, thanks a lot, and we’ll hopefully schedule another one sometime after I absorb all of this information. [Robert]: Sounds great Brian, thank you, thank you for having me. [music] [Brian]: Wow, what an interview, right? I hope that rattle noise wasn’t too annoying. If you’re an audio geek and would be interested in partnering with me to make the podcast even better, I’d love to talk with you about it. Show notes and links are at pythontesting.net/19.
Special thanks to my wonderful Patreon supporters, visit patreon.com/testpodcast. And again, Patreon is P-A-T-R-E-O-N. If you’re already a supporter, thank you. If you’re not supporting it, please consider it. Thanks a lot for listening, and get out there and test some code. I hope you enjoyed it. Thanks. [music]