Completely nerding out about pytest markers with Anthony Sottile.


Transcript for episode 143 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 On today's episode, Anthony Sottile joins the show again to talk about pytest. This time we talk about pytest markers, one of my favorite features of pytest.

00:00:24 Welcome to Test & Code. How's it going? Pretty good. I think I ask you this all the time. Let me guess, this time you're in the Bay Area.

00:00:39 Yes.

00:00:40 Okay. I got it right.

00:00:41 I’m in San Mateo.

00:00:43 Well, thanks for joining me. So, Anthony, we're going to talk about markers. I love markers, but I don't think they're understood very well.

00:00:51 Yeah. They can also be really tricky.

00:00:53 Hopefully we can talk about a lot of it. So as a review, I know you know all this, but let's review for people. So there's a couple of things that we use. A lot of the stuff we use markers for, people will see when they're parametrizing, because we do, like, pytest.mark.parametrize, and skip or filterwarnings or skipif and xfail and stuff like that use the marker mechanism. But mostly I don't really think of those as markers. I think of those as their own functionality. What I think of as markers is when I say, like, pytest.mark.foo or whatever my own thing is, I can mark it to be anything. And then you can use that to use -m and run the tests with those marks, with the custom marks.
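
For reference, a minimal sketch of what that looks like. The marker name here (smoke) is just an example, any name you pick works the same way, and registering the name is covered a little further down:

```python
import pytest

@pytest.mark.smoke          # a custom mark; the name is whatever you choose
def test_login():
    assert True

def test_generate_full_report():
    assert True

# Run only the marked tests:      pytest -m smoke
# Run everything except them:     pytest -m "not smoke"
```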

00:01:36 Yeah. I actually wish the concepts were separate, because I feel like when I try to explain marks to people, or I try to explain parametrize to people, one or the other, it's really easy to get the two confused. And I feel like the functionality that's provided by the built in marks is so different than the traditional usage of markers. I realize I keep saying marks and markers. The actual word is markers, but even the documentation uses the word marks occasionally. And so it's a little inconsistent. Markers, I've got to say markers.

00:02:08 Okay. So the preferred term is markers.

00:02:12 Yeah.

00:02:12 Okay.

00:02:13 As far as I understand, the documentation uses the word markers way more often than it uses the word marks.

00:02:18 Okay. But marks does show up at least a couple of times.

00:02:21 Yes.

00:02:21 Okay.

00:02:22 We’re not the most internally consistent there.

00:02:25 Okay. So normally when you're teaching people how to use these two concepts, you separate them, then, in that the custom markers are a different kind of beast than the other stuff.

00:02:35 Yeah, I talk about them completely separately.

00:02:37 Okay. That makes sense. So let's talk about that side first then, the custom markers thing.

00:02:43 Sure. There's something that came in, I think, I'm probably going to get it wrong, somewhere in the 5.x range, where there was a change made that you have to declare them ahead of time.

00:02:56 Yeah, I think that was right on. I can't remember whether it was pytest 4 or pytest 5, but one of the two.

00:03:01 Okay. And it's not difficult, so you throw it in a pytest.ini file or something to say... I can't remember the terminology, I'll throw it in the show notes, but it's something like markers or something like that. And then you just list the ones. The cool part about that, though, is at first I was sort of annoyed, but there's a reason. What's the reasoning around having to declare them?

00:03:22 Yeah. So the original bug report, or I guess feature request or breaking change request, that was brought up about this was that it was way too easy to accidentally mistype a mark name and then have tests that were either silently never run or weren't inheriting the behaviors that you expected because of a particular mark. And so the request was to make a way such that if you were to mistype a mark name, it would show up as an error to the test writer. And so by requiring the test writer to document each of the named marks inside of the configuration file, there was kind of an allow list it'd be compared against to say, oh, this is a valid mark versus this is an invalid mark. Right.
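
Declaring markers in the config file looks roughly like this; the marker names here are just made-up examples to show the format:

```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
    smoke: quick tests that touch a little bit of everything
    network: tests that need a working internet connection
```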

00:04:06 And I can’t remember for a while it was just warnings, but do they show up as errors now?

00:04:11 I believe it's been upgraded to an error. Let me just try it really quickly.

00:04:22 You’re so quick to be able to just do things on the fly to try them out.

00:04:25 It is still a warning.

00:04:26 Okay.

00:04:27 But you can upgrade the warnings to errors if you want to.

00:04:29 Okay. Especially in a work environment or a project that I have published, I usually do upgrade all warnings to failures.

00:04:38 Yeah. It’s usually a good best practice.

00:04:40 Yeah. Because warnings are a problem, mostly.

00:04:43 Yeah. Almost always the warnings are correct. And so I usually find that it’s a good idea to upgrade them to errors.
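
One way to do that upgrade, as a sketch: turn every warning into an error with filterwarnings, and, separately, pytest's --strict-markers flag makes unknown or mistyped marker names specifically into errors:

```ini
# pytest.ini
[pytest]
# unknown / mistyped marker names become collection errors
addopts = --strict-markers
# every other warning becomes a test failure
filterwarnings =
    error
```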

00:04:52 Thank you PyCharm for sponsoring this episode. I first tried PyCharm when they started supporting pytest many years ago. Their support for pytest is now amazing. I was a long time Vim user, so next I needed to test the IdeaVim plugin so all of my finger muscle memory still worked while editing. Check. It works great. There are lots of reasons to love PyCharm, but for me it's because they have the absolute best user interface for test automation. Then I learned many more ways PyCharm can save me time, like really great support for editing Markdown, HTML, CSS, and JavaScript, remote connections to databases, and amazing version control support. Really, it's the best git diff tool I've ever used. And now version 2023 is out, and shift-shift, the find anything key sequence, even lets you search git commit messages. What? Even that is so awesome. Tons of other cool features have been added in 2023. Check it out, and I hope you enjoy it, at testandcode.com/pycharm.

I just actually put this in place in one of the test suites I have. I was trying to debug the whole suite, and I was annoyed, because occasionally I've got some tests that are there intentionally to investigate some problem that didn't happen very often or something, or are really just going through every single little data point to make sure that it was thorough. But we don't run them all the time. If we're working in that area, we'll run them, and we'll run them during the nightly tests and stuff to make sure. But marking them, there's a thing in the documentation which is actually fairly cool, a hint on how to mark things as slow. So you can just say pytest.mark.slow, and slow is not magical, it's just a name that the example chooses, and then how to set it up so that by default they don't run. But in order to get them to run, you have to pass in a flag to say run these. Now that's not built into pytest's default marker behavior, but it is a cool trick.
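
The documentation hint being described is roughly along these lines (the wording may differ a bit from the actual docs example): a conftest.py that registers the slow marker, adds a --runslow flag, and skips slow tests unless that flag is passed:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given on the command line: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```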

00:06:57 Yes. There’s also plugins that let you do this kind of automatically as well. So plugins can hook into the marker mechanism and specify specific behaviors just like the one you described.

00:07:09 Yeah.

00:07:09 And also, when we say that you have to declare your markers, if you got it from a plugin, there's a way for a plugin to add a declaration too, so they show up. One of the other things I really like about that feature of having to declare, though, is while you're at it, the format encourages you to have a string to say what it does, like marker name, colon, and then what the marker's for. And then it shows up. What is it, --markers or something? You can list all of them, and it shows yours also. And that's neat. Okay.

00:07:44 Let’s say in my case, I want to run all the slow tests.

00:07:48 Well, if I had the functionality turned on so that it didn't run automatically, I'd have to pass the flag to say to run those. But I also can pass in -m slow, and it just runs the ones with those marks on it. So that's the idea, right? I mean, one of the ideas. You can take random tests throughout your suite. Let's say I have, like, smoke tests, so I mark different fairly quick tests all around my test suite as smoke. And then if I run -m smoke, it will run all of the ones that were marked with smoke. And that's really cool.

00:08:23 Yeah. One cool use case that I did for this is I have a tool that I maintain that makes network connections as part of its test suite to make sure that things work end to end. Occasionally when I'm flying places, which hasn't happened in a while, but occasionally when I'm flying places and I'm too cheap to pay for internet on the airplane, I'll use a marker to say this one requires network, and so I'll use -m "not network" and filter out all of the tests that require an internet connection to run.

00:08:53 Yeah, and then not is cool. For instance, if we didn't have the special mechanism to not run slow tests, you could set it up in your default pytest.ini or something to include "not slow", so run all of them that aren't marked with slow. But you can also do... let's talk about those expressions. You can use not, you can use or and and to combine those. That's pretty cool. Am I missing anything? Can we do anything other than and, or, not?

00:09:18 So you can do parentheses as well. This mini selection language has actually changed pretty recently. It used to be arbitrary code execution, and so it was really hard to get the actual right thing out of it. Basically it was just thrown into eval and hope for the best. But recently, I believe it was part of pytest 6, both -k and -m were rewritten to have a very specific set of rules and operators. I believe everything that's supported right now is and, or, not, and parentheses. I think that's it.

00:09:50 Okay, and do you know what the precedence order is for and/or in that?

00:09:56 So it should be the same as the order for Python. So and will bind higher than or, and not will bind higher than both, and then parentheses will do grouping.

00:10:07 So if I want to think about these expressions, I can think of all the marker names as just true or false values.

00:10:13 Then yes, you can think of them as boolean.

00:10:15 Okay, that’s cool if that’s what it ends up being.

00:10:19 Yeah, there’s a little tokenizer that splits it into kind of an abstract syntax tree and then parses based on that. So it’s a little mini language.

00:10:27 Okay, cool.

00:10:28 Yeah, like I said, these are fun things.

00:10:31 For example, some of my tests were slow, but we also have some tests that require special configuration, special setup, and I'm talking about testing hardware stuff. So they all have equipment that needs to be wired up. But we have a standard wiring setup, so the tests don't have to declare anything if you're using the standard wiring. But for some functionality, we need to change the wiring up. So for those, we've marked those tests to say this is the special wiring system. For instance, I can run all the ones with the two by two wiring with just the marker flag. And I can then combine that with the slow, to say I want to run all the two by twos, but I don't want to run the slow ones, for instance, or something like that.

00:11:18 Yeah, you could do two by two and not slow.

00:11:21 And then this is on top of the keyword language, right? So this is completely separate, but it works similarly, so it's the same syntax. With keywords I can do and, or, not, and parentheses.

00:11:35 Yes. They use the same little mini language.

00:11:38 Neat.

00:11:40 Which one runs first?

00:11:42 Conceptually, I don’t remember.

00:11:44 They're and'ed together, so I think it actually doesn't matter.

00:11:47 Okay. So I can use the keyword stuff even just within that set, if I wanted all the two by twos and not the slow ones, but then I can also use the keywords to select a subset of the test names or something. Yes. Okay.

00:12:04 It's pretty powerful, and I don't really use other testing frameworks like unittest or anything else, so I'm not sure if this is a unique flexibility of pytest, but sure, it's handy.

00:12:18 Yeah, I haven't seen this anywhere else. I guess there's one thing that I want to talk about that's slightly different between -k and -m. And you brought it up with the substring thing there. I believe with -m, the name has to match the entire marker name, whereas with -k it's a substring of it. So it's slightly different there, but basically the same mechanism.

00:12:36 Yeah. Okay, that’s good to bring up, but the Boolean logic is the same.

00:12:40 Using logic with the keyword is super handy, too. I use it all the time, and I also do like the --collect-only flag.

00:12:47 Yeah, that's the one that just shows the ones that would have been run but doesn't actually run them.

00:12:52 Yeah, it doesn’t actually run them. So if I’ve got like ten minute test suite or something like that and I want to narrow it down, it’s handy to be able to just run the keywords and then see if you got the right ones. And if you have way too many tests, you’re like, oh, I need to filter that more. Well, then you can work on your keyword expression before running them. So that’s handy.

00:13:10 Yeah, that also works for -m as well.

00:13:12 That's a great point. We talked about this briefly at the beginning, that the pytest.mark.something is the same syntax that's used for parametrize and skip.

00:13:24 And parametrize is a bit of a, maybe a special thing, but maybe it's not. Do all these use the same mechanism? Why are they the same?

00:13:33 So they do and they don't, to some extent. Like, they do all go through the marker mechanism, and so they are registered in exactly the same place that the user space markers would be registered. And you could write your own version of parametrize in that same space and use exactly the same syntax, and then build a plugin on top of it that would generate tests similar to how pytest does. And all of that is using the base marker functionality. They are a little bit special because they're treated as first class citizens within the framework, and they have some special argument parsing. But from a plugin writer's perspective, they're not all that different than traditional marks. The one main difference is all of the pytest ones take, I guess they don't all take arguments, because xfail can be argumentless and skip can be argumentless. But pretty much all of them take arguments and do special behaviors based on those. And you could write your own custom markers that also take arguments and do special stuff as well.

00:14:31 That's a bit of a weird mind jump when you go between a couple of things there. Now that I think about it, it's actually kind of neat that parametrize and skip, for instance, use the mark functionality, because you can then use those examples that are in the code to create some fairly powerful plugins that basically just work using the mark mechanism, and there are examples there to use that. That's pretty neat. The downside is, I mean, I often want to just teach people how to use parametrize, and it has nothing to do with how marks really work.

00:15:08 Yeah.

00:15:09 Why isn't it just pytest.parametrize? What is this mark thing about? Oh, well, that leads me down a path.

00:15:17 Yeah. But then there's that GIF with the exploding head.

00:15:23 That's what I felt like when I found out that you could pass arguments to the marker. So if we say, like, pytest.mark.slow, you could also say slow and then put parentheses and, like, pass arguments to the mark.

00:15:37 You can add metadata using parameters like that. Yes.

00:15:40 What do you mean by metadata?

00:15:42 So, like, if you want to say a test is a slow test, and what else would you say about a slow test? Maybe it's slow because it takes network, or maybe it's slow because it takes disk or something. You could say, like, pytest.mark.slow with network=True or disk=True and store both keyword arguments and positional arguments as part of that marker.
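
A small sketch of what that looks like; the slow marker and the network/disk keywords are just illustrative, and the arguments simply get stored on the mark object for a fixture or a plugin to read later:

```python
import pytest

# positional and keyword arguments end up on the mark object
# (mark.args / mark.kwargs) for something else to read later
@pytest.mark.slow(network=True)
def test_remote_lookup():
    ...

@pytest.mark.slow(disk=True)
def test_big_file_scan():
    ...
```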

00:16:01 Okay. Positional and keyword arguments. And that's like, for instance, when we skip or xfail, you can pass a reason in. So would the reason be metadata then?

00:16:12 Yeah. That gets attached to the marker itself.

00:16:14 Okay. And then what? There's a plugin, so you've got like a hook function or something that can then read whether or not the mark was added? Yeah.

00:16:25 There’s a bunch of places within pytest that give you access to the item, as it’s called internally, which is the representation of the individual test that you’re working on. And that test, as well as all of its parents, like a test parent is either a class or a module or a package.

00:16:42 There’s essentially a hierarchy of items there. And on each of those, you can query the marks that are active at that spot.

00:16:51 The APIs have changed here a little bit, so I believe the API is iter_markers. There's also get_closest_marker, which might sound a little bit weird. And there's two different APIs that can be a little bit confusing. The former, the iter_markers one, is more intended for markers that can be stacked. For most of the marks that we talked about that are user space marks, you would kind of just have, like, one or zero. Like, you would either be slow or you wouldn't have slow. You would have absence or presence. But it might make sense to have things that you would mark multiple times. So, for instance, skipif as an example, that is a mark, and again, we're crossing the streams a little bit here, but skipif is a marker that you might apply multiple times. You might skip if it's Python 2 or skip if it's Tuesday or something like that. And you might stack those particular markers. And so you have to iterate through them to consider all of their conditions.

00:17:44 Okay, that's a good example, because I always thought, so with iter_markers you pass in a name, and I'm like, why would it be iter markers? Why would you have more than one?

00:17:55 The other reason for get_closest_marker is you can also mark things at different levels. You might mark the test, you might mark the class that contains it, or you might mark the module that contains it. And get_closest_marker gives you an opportunity to override based on those. You might say maybe your module level says that all of these are slow, but you have one test in the middle that's, like, slow=False, for instance. I don't know how you would represent that in actual markers, but you might have some metadata and reassign it at a more granular scope. So get_closest_marker would give you the one that's the closest to the actual test definition.
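
A rough sketch of the two APIs from a hook's point of view, assuming a custom slow marker like the one above; the print statements are just there to show what you get back:

```python
# conftest.py
def pytest_collection_modifyitems(items):
    for item in items:
        # iter_markers walks every "slow" mark that applies to this test:
        # function level, class level, module level, ... (stacked marks)
        for mark in item.iter_markers(name="slow"):
            print(item.nodeid, "slow mark:", mark.args, mark.kwargs)

        # get_closest_marker returns only the mark nearest the test
        # definition, so a function-level mark wins over a module-level one
        closest = item.get_closest_marker("slow")
        if closest is not None:
            print(item.nodeid, "closest slow kwargs:", closest.kwargs)
```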

00:18:31 Okay, well, skipif would be an example for that, too.

00:18:35 You might have a different reason.

00:18:36 Yeah, overriding the reason would be a good idea.

00:18:38 Yeah.

00:18:39 The thing with skipif, though, it's kind of weird in that skipif is only additive. You can only add more conditions where you skip stuff. You wouldn't be able to subtract out stuff there. But I actually don't know that there's a mechanism for removing particular markers, so I don't quite know how that would work. But that's why there's two different APIs.

00:18:57 If I'm playing with this and I've got a hook function and a plugin and I just want to check to see if the item has the marker or not, would I reach for iter_markers or get_closest_marker? Is there a default that you usually look at?

00:19:10 I mean, you could use either of them. In that case, I would probably reach for get_closest_marker, which is the replacement for the old API. The old API was just called get_marker and basically returned a marker or None. The other cool thing about this is you can also access these in individual tests, so you don't even have to write a plugin for this. You can use the request fixture, and that will have access to the test node and subsequently the markers.

00:19:33 Okay, now why would I need that at a test level?

00:19:38 I'm writing a test. Can't I just look at the source code and see whether the markers are there or not?

00:19:42 For sure. Yeah, it’s not so useful at the test level. I think it’s more useful at fixture levels. So you might have a fixture that acts differently based on whether a test is marked in a particular way. I’ve used this in a fixture to change the behavior based on a particular marker.

00:19:56 Okay. But even where it doesn't make much sense, if I was just playing with this mechanism and trying to understand it, maybe at a test level might be the best, because if I add this foo marker, I can see that it shows up in my test through request.

00:20:10 Yeah. And it's definitely, like, the easiest way to start playing with this, because you don't have to understand all of the hook stuff and writing a plugin and all those other things. You can just write it in normal tests.
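
A minimal sketch of that test-to-fixture communication. The ports marker name, the count keyword, and the fixture are all made up for illustration (and the marker would need to be declared like any other custom marker):

```python
import pytest

@pytest.fixture
def open_ports(request):
    # read the (made up) "ports" mark off the requesting test, if any
    marker = request.node.get_closest_marker("ports")
    if marker is None:
        return 1                      # default when the test isn't marked
    return marker.kwargs.get("count", 1)

@pytest.mark.ports(count=4)
def test_needs_four_ports(open_ports):
    assert open_ports == 4
```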

00:20:20 So get_marker is an old thing. Because get_marker seems like the most obvious name for something like that, why was that bad?

00:20:28 So there was a big rewrite of the markers that happened. I want to say it was in 4.x.

00:20:34 I have this open... I'm misremembering. In 3.6, pytest did a big backend refactor for markers, which were particularly tangled. And this also made a lot of the built in markers a lot simpler. They used to be a little bit more ingrained in the actual implementation of pytest, and there was not a great way to represent all of the situations that people wanted to write with markers. And get_marker didn't really support the proper overriding or the stacked markers or that sort of thing. And so we made a breaking change to the implementation of get_marker and decided the best way to communicate that breaking change was to change the name of the function as well.

00:21:20 Okay.

00:21:21 So that’s why it changed.

00:21:22 Okay, so we've got really two that might make sense. Is it iter_markers or get_closest_marker?

00:21:27 Yeah. There's actually a page in the documentation that talks about this particular bit. I'll send that to you and you can put it in the show notes.

00:21:34 That'd be great. I mean, to be fair, most people using this... this is of particular use for people that are writing plugins or doing some fancy stuff.

00:21:44 Yeah. This is more for plugin authors than users.

00:21:47 That said, there's a lot of functionality that people put in conftest files that are essentially plugins, just local ones. So you might use it even if you thought you weren't a plugin author.

00:21:58 Yeah, that’s true.

00:22:00 I guess "you may be a plugin author if..." We need one of those, like, joke books or something.

00:22:07 Right.

00:22:07 But I'm sure it sounds terrible over a podcast, because it'd be better to look at source code, but it isn't as scary as it sounds. It's a pretty cool system. And the system of, I want to mark a whole bunch of tests and be able to run the ones with those marks or not those marks, that's pretty easy to do.

00:22:29 One of the things that I think is a cool use for adding parameters to markers is to be able to, like, maybe the reason thing, maybe that changed or something, but with your own, like, even with a conftest fixture or something. If I had a fixture that does some work, it's one way to pass logic from the test declaration to the fixture, because the fixture can read out the value of that marker. It's one way to pass data from the test to the fixture. And I don't know if there's another way.

00:23:02 I think beyond indirect parametrization, I think this is the only other way. I could be wrong, but I'm pretty sure this is the way you're supposed to communicate with fixtures.

00:23:09 Okay. Indirect parameterization, you can. Yeah, there’s that.

00:23:15 But that's also a mark. So it's not actually all that different now that I think about it.

00:23:20 Yeah, but it's kind of like a neat thing, because often when you push work into a fixture and it works for 90% of the cases, and then you run into that one corner case where you need something special to happen, you're like, darn, I pushed all that code into a fixture. I don't know, I hadn't run across a good reason to pass values from a test to the fixture before. And to be clear, it's not really passing values at runtime, right. It's at, right, like author time, write time, when you're typing the thing, you can say, oh, I need to tell the fixture that for this test we only need four ports open or something like that.

00:23:59 Yeah.

00:24:00 Actually, things like a port address or something like that. That'd be a cool way to pass that, too.

00:24:05 Anyway, just thinking, I've mostly seen this used for the argument stuff. I've mostly seen it as a mechanism for plugins to have special behaviors. For instance, is it pytest-repeatedly, pytest-repeat, pytest-repeats? I don't know, one of those names is the actual plugin name, but you mark a test with pytest.mark.repeat and say, like, n=10, and then it'll generate ten tests for you from that. And so, like, I've seen things similar to that.

00:24:33 That’d be cool.

00:24:34 That build plugins based on it.

00:24:35 We talked about parents a little bit. This is kind of a tangent. So if I have a parametrized test, for instance, it just counts one through ten or something, so it's going to run ten times. That set of tests, that is, the test and all its parametrizations, is that an entity at all, or does that disappear?

00:24:58 Yeah. So it depends on which part of the process you interrogate to ask if it's an item or not. Pre-parametrization, it's not. But post-parametrization, each of those is an individual test function, but with a different parameter set.

00:25:13 Right.

00:25:14 But the collection of all the tests that were parametrized, that resulted from the test and all its parametrizations, that as an entity doesn't exist within the pytest system, does it?

00:25:29 It kind of does, but it also mostly doesn't. It's kind of a gray area between existing and not. You can access the particular function that would have that one parametrize marker on it. But I guess it depends on what you would want to do with that. If you want to change the whole collection of those, and collection is an overloaded word here, the whole group of things that are under that one particular tag.

00:25:52 Well, the use case that, I mean, I don't know whether markers really play into that, other than parametrization. I bring it up because I often want to have a fixture that just runs before and after all of the tests, all of the parametrizations.

00:26:09 And so that seems to be a scope that’s missing.

00:26:12 Yeah. It would be somewhere between function and class.

00:26:15 Yeah.

00:26:16 Because the function scope is going to get set up and torn down for every single parametrization combination.

00:26:21 Yeah. I don’t know that there’s one that exists in that scope. That would be an interesting feature idea.

00:26:26 And the workaround that I came up with, really always when I need that, is a class.

00:26:31 Yeah.

00:26:32 Wrap it in a class.

00:26:33 Wrap it in a class and have a class-level, class-scoped fixture. But it seems like I don't need the class for anything else.

00:26:40 Yeah. Basically just building your namespace. I’ve done the same hack.

00:26:44 I completely understand. Okay.

00:26:47 So the parametrization, since it's a marker thing, it is like a plugin then, sort of.

00:26:53 Yeah.

00:26:54 So core pytest doesn't really know about parametrization. Or does it?

00:26:59 I mean, it does and it doesn't. I think parametrize is the one with the big asterisk next to it, because there's so much special treatment of parametrize inside of pytest that I think of it as a first class thing that's separate from markers. But all the other ones, I think, are less special. The skipping and warnings filtering and xfailing, those are all less special.

00:27:22 Yeah. Okay.

00:27:24 One of the things that came up as I was looking through the documentation to get ready for this: the pytest docs say pytest.mark is a factory object.

00:27:37 What does that mean?

00:27:40 Right.

00:27:41 I think what it's trying to say is that it's a little bit metaprogramming magic, in that it allows you to construct an arbitrary marker object just based on an attribute name.

00:27:53 Okay.

00:27:54 It’s a builder of marker objects, I think.

00:27:57 Okay.

00:27:57 It is kind of weird that it just says that. I don't know, it doesn't really mean anything to me.

00:28:12 Okay. Well, I have just a bullet point left to ask you: are there any cool marker tricks that you've used that you can share, beyond the, like...

00:28:21 Data passing from tests to fixtures and the basic usage with -m. I don't really do anything too tricky or special with them. I think the use cases that we talked about earlier are probably the most interesting ones, which are segmenting off a portion of your test suite based on markers and then selecting them based on the -m argument or an additional argument that you add with a conftest or with a plugin.

00:28:46 Yeah. And then one of the things with, like, segmenting stuff off, essentially the keyword stuff, there's a little bit of a weird designation around it. So depending on when you decide to not run a test, it either shows up as, I can't remember, unselected? Deselected. Deselected.

00:29:04 Yeah.

00:29:05 Or if it was selected, but then later, based on a marker, you decide to skip, it shows up as a skip. Like, for instance, my slow example is implemented as skipping. So if I've got a file with ten tests in it and I mark, like, two of them with slow or something and run just those, are the rest of them marked as skipped then? Or are they deselected?

00:29:32 I believe they're marked as deselected. Let me see... they are marked as, so it says selected two items, one deselected, one selected. So with -m, it'll mark them as selected or not. And that's the same as what it does with -k as well.

00:29:50 Okay.

00:29:52 You could make a fixture that notices a mark and then raises the skip as a side effect. And so then you could change your selection based on another argument that's different than -k, and have a status that would be skipped instead. But out of the box, it's selected and deselected.
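
A sketch of that idea, with made-up names (a flaky marker and a --run-flaky flag): an autouse fixture notices the mark and raises a skip, so the test shows up as skipped instead of deselected:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--run-flaky", action="store_true", default=False,
        help="also run tests marked as flaky",
    )

@pytest.fixture(autouse=True)
def skip_flaky_unless_requested(request):
    # skip (rather than deselect) marked tests unless the extra flag is given
    wants_flaky = request.config.getoption("--run-flaky")
    if request.node.get_closest_marker("flaky") and not wants_flaky:
        pytest.skip("flaky tests only run with --run-flaky")
```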

00:30:10 Okay. Is there a way to, at runtime, modify the selection? Yeah, there is. Like, you have to intercept the collected items or something like that, right?

00:30:20 Yeah, I think it's pytest_collection_modifyitems, if I remember the hook name.

00:30:23 Okay.

00:30:23 Which I've used a couple of times. That one, I find overriding that hook can be really confusing to end users, because it kind of slightly changes what they're testing without really telling them what's going on.

00:30:36 I'm always a little hesitant to override that one, but yes, you can use that and inspect marks at that stage, I believe.

00:30:43 Okay. And actually I think it's a good idea, if you're going to dynamically do something like the example, I'll link to the example about the slow thing, because it's a cool, small example and it's easy for people to put in place. And with that one, if you mark a test with slow and it doesn't run, it'll show up as skipped with a reason why it didn't run.

00:31:04 Since I'm prone to make mistakes when I'm hacking and stuff like that, I think it's a good idea to have that, because if I had the power to just deselect it, you don't get the same warnings, you don't get the reason why it was deselected, I don't think.

00:31:18 Right, yeah, you don’t.

00:31:20 Which, it would actually be annoying if you have a thousand tests and you're running one, and you got, like, 999 reasons why the other ones weren't run.

00:31:30 You'd have to sift through a thousand lines of output just to get the one status you wanted.

00:31:35 Okay, well cool.

00:31:38 There are a couple of concepts, actually a few concepts, that are blended in here. The idea of just marking tests and then running them based on marks, that's one idea. Using a bunch of the built in marks like xfail or parametrize, those are all things that you kind of master either by themselves or sort of in groups, like skip, skipif, and xfail all kind of work the same.

00:32:04 And then it's a little bit more advanced, within your conftest code or your hook functions or a plugin, to be able to read and tell whether or not something is marked, which is cool. And then on top of that, a little bit more advanced, is doing things like passing parameters from a mark to a fixture. But if you want to explore all of the functionality, if you go in that order, they really aren't huge jumps, especially if you're looking at examples.

00:32:36 Yeah, I think it’s pretty easy to go from the basics to the more advanced stuff with markers at least. Yeah.

00:32:40 And I'm glad you brought it up. I'll have to look up which one it is, the repeat plugin, and I think there's actually a few that kind of do the same thing. But I'll throw that in the show notes, because that's also a good one.

00:32:55 I actually never looked at the source code for it, but it seems like that would be a good one to look at to see how they're doing that. If you really wanted to implement a custom version of that, how would you do that?

00:33:05 If I recall correctly, the way it works is it gets the input number and then generates a range of integers and then parametrizes. I think it's just a shortcut for parametrize on a variable that doesn't exist.
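
A very rough sketch of that "parametrize on a variable that doesn't exist" idea, not the actual pytest-repeat source, with a hypothetical repeat marker:

```python
# conftest.py
def pytest_generate_tests(metafunc):
    # a hypothetical @pytest.mark.repeat(10) mark on the test
    marker = metafunc.definition.get_closest_marker("repeat")
    if marker is None:
        return
    count = marker.args[0] if marker.args else marker.kwargs.get("n", 1)
    # parametrize on a fixture name the test never declared, which
    # produces one generated test per repetition
    metafunc.fixturenames.append("_repeat_step")
    metafunc.parametrize("_repeat_step", range(count))
```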

00:33:17 That’s cool though.

00:33:18 I don't remember, though. That's how I seem to remember it works.

00:33:22 Okay, I've got an oddball question that has almost nothing to do with any of this, but you're on the line, so I'll give you a check to see if you know the answer. Sounds great. The identifiers for parametrize, actually, they show up, if you pass in a string it just shows up, if they're unique. And if anything is not unique, it, like, tacks on the 1, 2, 3, 4, whatever, which is cool. The weirdness is if it's an object, it just shows, like, object or something. Yeah, why not try to use either repr or str to list it? That's a good question.

00:34:01 So there's a couple of reasons why I would be hesitant about repr and str. First is they often can raise errors as side effects of calling repr or str. But the other is a lot of the characters that are returned as part of the representation can be different, or can be characters that are not allowed inside of a test identifier. So things like square brackets, parentheses, and colons, which have special meaning in the pytest identifiers. The other thing is you could imagine a faulty implementation of __str__ which ends up leading to the same string value as another thing, which would make it really hard to differentiate those two items in a particular test. Okay, there is actually a patch, a proposal, right now to add special handling for pathlib, which kind of falls into the same question space, like, do we allow other custom objects to build a particular string representation? And my leaning is currently towards no, because I think it makes it harder to understand what's going on if two things end up being the same value, or we end up with characters that are not allowed. One other question that came up in that discussion was automatic translation of backslashes to forward slashes, because backslashes do really poorly inside of parametrize IDs, especially on Windows, where backslashes are much more prevalent in path names and stuff. There's some outstanding discussion on what's going to happen with that, but for now, there isn't any special handling in core for special objects like that. That said, you can write your own ID factories that take in those particular objects and build out automatically generated IDs from those. I don't think there's a plugin point for it, but you can do it on a per-test basis.
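
A sketch of that kind of opt-in ID factory; the Wiring class and the naming are invented for the example:

```python
import pytest

class Wiring:
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports

def wiring_id(value):
    # only build IDs for our own objects; returning None falls back to
    # pytest's default ID generation for everything else
    if isinstance(value, Wiring):
        return f"{value.name}-{value.ports}ports"
    return None

@pytest.mark.parametrize(
    "wiring",
    [Wiring("two_by_two", 4), Wiring("loopback", 1)],
    ids=wiring_id,
)
def test_wiring(wiring):
    assert wiring.ports > 0
```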

00:35:42 Yeah.

00:35:42 The other thing is you can use pytest.param to give a hard coded string for your test ID, which I've been using a lot recently, and I think it makes parametrize a lot more readable, for me at least. Yeah.
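
And the hard-coded version with pytest.param looks something like this (the values and IDs are just illustrative):

```python
import pytest

@pytest.mark.parametrize(
    "value, expected",
    [
        pytest.param(2, 4, id="small"),
        pytest.param(1_000, 1_000_000, id="large"),
    ],
)
def test_square(value, expected):
    assert value * value == expected
```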

00:35:56 I mean, the custom functions are great, but don't they have the same problem, that you don't know what the custom function is going to return?

00:36:02 Yes, you absolutely do. Okay. I think the difference there is between, like, a test writer explicitly opting into that custom behavior versus it automatically happening in core.

00:36:12 Okay, great.

00:36:13 That way the bug's on the test writer.

00:36:18 It might be kind of cool to have, let's say I have a particular class that I'm using to bundle all my data to parametrize, if I also, in that class, like, define an ID function or something like that. That seems like it would be reasonable to call or something.

00:36:36 Oh, that would be cool. Like a dunder pytest ID or something like that, where pytest knows to look for its particular hook on a user space object.

00:36:44 Yeah, I don't know if anybody but me would use it, but it would be cool anyway. But ID functions are great, and they're not too terrible to keep around. But you said, what was it again? What should you not use in your identifiers? Brackets?

00:37:00 Brackets, colons, parentheses, backslashes, non-UTF-8 stuff including bytes, null bytes.

00:37:07 There's a whole bunch of stuff that just does not play nicely with it. Also, depending on the platform, non-ASCII characters are problematic. I'm currently fixing a bug on Windows where...

00:37:21 I guess it's cp1252-encoded stuff that is not allowed, but I'm fixing it so that any UTF-8 will be valid there. But part of the problem is the pytest test identifier gets written into the OS environment, so if it's not encodable in your OS environment, then it's problematic.

00:37:39 Okay, so try it. If it breaks things, don’t use it.

00:37:43 Yeah, pretty much. Pretty much, yeah. pytest has some special handling for a few of the characters.

00:37:48 But how about dashes?

00:37:49 Most of the ones are safe.

00:37:51 Can you use dashes?

00:37:52 Yeah, dashes are fine.

00:37:53 Okay.

00:37:55 Because if I have two parameters, pytest will separate them with a dash.

00:37:59 Yeah. So pytest has some special handling. If an individual parameter ID has a dash, it will add another dash, if I recall correctly.

00:38:06 Okay.

00:38:07 They might also add a number. I forgot which one.

00:38:09 Okay.

00:38:09 But yeah, you could end up in a scenario, there used to be an ambiguity based on the dashes in parametrize. I think we fixed that.

00:38:17 I’ll try that.

00:38:18 There is also a case where empty strings are not handled properly as well.

00:38:22 That one’s been fixed also.

00:38:24 Okay, cool.

00:38:26 Yeah, it’s a surprisingly complex system and there’s a lot of weird edge cases and things that break it in subtle ways, but I think. Well, I mean, it’s software, so it’s always buggy, but I think most of the main bugs have been solved there.

00:38:49 But I mean, the idea of doing a custom identifier is to try to make it simple for you to read anyway. So try to be clean about it anyway, I think.

00:38:49 Yeah, for sure.

00:38:51 Anyway, cool.

00:38:52 Also, the thing is, if you have stuff with spaces in it, it's going to be harder to select it using -k and such. So using identifiers that are easier to match is often useful.

00:39:02 Well, yeah. And also if you're trying to pass those parameters through tools, like if you're using PyCharm or VS Code, it gets a little tricky if your identifier is not that parseable for that system. Anyway, well, cool. We kind of got off on a tangent there at the end, but thanks for the information.

00:39:21 Yeah, no problem.

00:39:21 I kind of like markers, and I hope this is valuable for people.

00:39:24 Definitely.

00:39:25 Anyway, thanks.

00:39:26 Yeah. Thank you for having me on the show.

00:39:32 Thank you, Anthony. Always fun to have you on the show.

00:39:35 Thank you, PyCharm, for sponsoring the show.

00:39:36 Try PyCharm yourself at testandcode.com/pycharm. Thank you, Patreon supporters. Join them at testandcode.com/support. Show notes for this episode are at testandcode.com/143. That's all for now. Go ahead and test something.