pytest-testmon is a pytest plugin which selects and executes only the tests you need to run. In this episode, I talk with testmon creator Tibor Arpas about testmon, about its use, and how it works.


Transcript for episode 98 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 pytest-testmon is a pytest plugin which selects and executes only the tests you need to run, by collecting dependencies between tests and all executed code, and comparing those dependencies against changes. Run a test suite, make a change, then only run the tests that are affected by your change. That’s so cool. In this episode, we talk to testmon creator Tibor Arpas about the use of testmon and how it works.

00:00:25 Thank you to Patreon supporters for your continued support of the show. And thank you to PyCharm for sponsoring this episode. Listen to their spot later in the show.

00:00:47 Welcome to Test and Code podcast about software development, software testing, and Python.

00:00:57 Welcome to Test and Code. I am actually really excited to have Tibor Arpas. Now, have I got it wrong again? I just practiced this: Tibor Arpas. Is that right?

00:01:08 Yeah, it’s right.

00:01:09 Okay.

00:01:10 And we’re going to talk about test coverage and a tool that he’s created called testmon.

00:01:17 But before we get into it, first, welcome to the show. And could you tell me a little bit about you?

00:01:24 Hello, Brian. Thanks for having me.

00:01:26 Yeah. So I’ve been a developer, a programmer, for 20 years. I started to freelance during my studies, and then it gradually grew, and at some point, in about 2007, I had an opportunity to finally pick my technologies. Before that I was developing in Java and Perl and stuff like that.

00:01:54 But in 2007, I could pick the technology. I had the liberty.

00:02:00 I did the research and picked Python because it looked like the best option. It wasn’t so obvious at that point, right? But it turned out quite well.

00:02:14 That’s when I started with Python. And basically what I do now is mostly outsourcing Python developers. There are twelve of us, working for one or two big customers.

00:02:31 So that’s how I spend my time, mostly managing.

00:02:38 But I still can find some time for developing, fortunately.

00:02:43 Okay, cool. So you’re doing the developer and manager at the same time sort of stuff also.

00:02:49 Yeah.

00:02:50 Cool. Well, that’s actually pretty exciting.

00:02:56 And you’re not in the US, right? You’re somewhere else.

00:03:00 Yeah. I’m from Slovakia and I’m in Slovakia. That’s also where the company is. And we work for remote customers.

00:03:08 Fortunately, the world is small.

00:03:11 Well, it is. And also, just a side tangent: one of the reasons why I started this podcast was to be able to expand the group of people I know, and I think of them as people I’ve met through doing this. And it’s pretty cool to meet people from Slovakia and from all over the world.

00:03:31 Sometimes scheduling is a little difficult, but hopefully.

00:03:36 So it’s noon in the middle of the day right now. What time is it there?

00:03:40 It’s 9:00 p.m.

00:03:42 Okay. So we’ll try not to keep you up too late. Luckily, it’s a shorter podcast.

00:03:47 But you said you started Python in around 2007.

00:03:55 We’re going to talk about testmon, which works in conjunction with pytest. Is that correct?

00:04:01 Yes, correct.

00:04:02 So how did you get into it?

00:04:05 When did you start using pytest?

00:04:10 Well, I think when I was writing testmon, actually, the first version, because of the project which motivated me to start writing testmon, and that was in 2015, I think, or 2014.

00:04:25 We used unittest at that point.

00:04:28 But when I started writing testmon, I wasn’t on the project anymore which had the long test suite and which motivated me to think about the problem and try to solve it. So I again did the research and then picked what seemed like the best prospect, or the tool which seemed to have the biggest adoption.

00:04:53 So I started with pytest, actually.

00:04:56 Nice. That was even pre-2017, so you didn’t have my book to help you?

00:05:02 Yeah.

00:05:04 Okay. We’ve been sort of beating around the bush. Testmon.

00:05:07 What is testmon? It runs tests which have been influenced by recent changes.

00:05:12 So it creates like a matrix, a dependency matrix of tests and code, of the functions in the source code, and then watches the changes, or actually stores checksums of the methods. And then whenever you change something, it can select only the tests which could have been influenced by the change.

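A minimal sketch of that idea in Python. This is an illustration of the approach being described, not testmon’s actual implementation or storage format: checksum each function’s source, remember which functions each test executed, and re-select a test when any of those checksums change.

```python
import hashlib
import inspect

def checksum(func):
    # Checksum of the function's current source code.
    return hashlib.sha1(inspect.getsource(func).encode()).hexdigest()

# Hypothetical code under test.
def add(a, b):
    return a + b

def mul(a, b):
    return a * b

# Dependency "matrix" recorded on the last test run:
# test name -> {executed function name: checksum at that time}.
stored = {
    "test_add": {"add": checksum(add)},
    "test_mul": {"mul": checksum(mul)},
}

def affected_tests(stored, current):
    # Re-select a test if any function it executed has a new checksum.
    return [
        test
        for test, deps in stored.items()
        if any(checksum(current[name]) != old for name, old in deps.items())
    ]

# After editing mul(), only test_mul would be selected for re-run.
print(affected_tests(stored, {"add": add, "mul": mul}))  # [] while unchanged
```
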
00:05:38 I run a test suite, everything’s good. And then I change some of my code, and then I can just rerun the tests that could even possibly fail.

00:05:48 Right. Because they’re the only ones that are affected by my code. This seems like magic.

00:05:54 How on Earth do you do this?

00:05:56 Well, I also liked the idea, and I was thinking about how to do it, and the most deterministic way to do it seemed to be to collect coverage, and to store the lines which have been executed.

00:06:21 Right. So coverage.py, that’s the library and the tool which we use underneath testmon. It collects all the lines which were executed.

00:06:42 I simplified the information a little bit, and only actually store, or deal with, the functions and methods which have been executed.

00:06:54 But yeah, that seems like the reliable way to find out if something could have been influenced or not. Of course, it doesn’t take into account any changes in external APIs or in data files or stuff like that. So that’s a limitation, a fundamental limitation of testmon.

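For the curious, this is roughly what collecting executed lines looks like with coverage.py’s public API. The module and call below are hypothetical stand-ins for one test; testmon goes further, mapping the lines back to functions and methods and checksumming those:

```python
import coverage

cov = coverage.Coverage()
cov.start()

import my_module          # hypothetical module under test
my_module.do_something()  # hypothetical call standing in for one test

cov.stop()

# Report which lines of which files were executed.
data = cov.get_data()
for filename in data.measured_files():
    print(filename, sorted(data.lines(filename)))
```
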
00:07:19 But yeah, I think it’s actually a very good practice to also create the fixtures in the code, because they are easier to migrate and refactor and so on than data files, for example.

00:07:32 So yeah, I think it’s a limitation which one can live with pretty easily.

00:07:37 So, if I’m getting this right, the goal is primarily that if I modify my code under test, the tests that could be affected get run.

00:07:53 What if I modify like test helper functions?

00:07:57 Does testmon come into play there, or is that a completely separate problem?

00:08:03 It also works there.

00:08:05 Testmon collects the coverage from the tests themselves too. If you have any function there, or even a fixture, then it takes it as a function which was executed, and then if it’s changed on the next run, it marks the test as unstable, or needing to be executed.

00:08:30 That’s pretty cool because I do have quite a bit of code in my fixtures.

00:08:34 Thank you, PyCharm, for sponsoring this episode. The pytest support alone is enough to drive me to PyCharm. The testing support is in both PyCharm Community and PyCharm Pro, and it’s the best in the industry. But don’t just stop there. The Pro features really do boost your productivity. The database support is amazing. I use it to look at remote Postgres databases, and I’m looking forward to trying the MongoDB support as well. Pro also gives you profiling and coverage integration, and extra help when working with web frameworks. Definitely worth a try. Try PyCharm Pro for four months by going to testandcode.com/pycharm.

00:09:13 What’s the purpose?

00:09:15 I’m guessing if you’ve got fairly long, whatever long means to you, test suites, you can run a shorter subset.

00:09:25 Yes, exactly.

00:09:27 As I mentioned previously, my motivation was a long test suite, and that was six minutes. For me, that’s long, right? It interrupts the workflow, and you usually have to start doing something else while you wait for it. It’s something which we are not used to anymore, waiting six minutes for the computer. Right? Computers in general got so quick in recent years that we are not used to that. So I was thinking about how to solve this.

00:10:02 Well, you can make a change which actually reaches everywhere, right? If you change a setting, a global setting, then it’s something which influences everything, and then you probably have to run the whole test suite anyway, even under testmon.

00:10:24 But sometimes, or very many times, you have a change which is very local, and then it doesn’t matter how big your project is. It really only matters how far the change reaches, how many tests it touches. So that can save you a lot of time if you have a really huge project.

00:10:44 Yeah.

00:10:45 Are you using it yourself? Do you see quite a bit of speed up?

00:10:49 Yeah, I’m using it myself, which is a challenge. I would have liked to use it on testmon itself, but since coverage itself is not reentrant, or how do you call it, it’s not easy to run it twice, that makes a complication for me. But recently we realized a way to do it, so I started to use it also when developing testmon, which is a big help, because I do not have that much time to work on any other projects which would be relevant.

00:11:32 But now, with this change, I became a user of testmon, finally.

00:11:38 Also, yeah, I noticed that you’re extending it and it’s living on. I think we highlighted some of the new changes on one of the Python Bytes episodes. You’ve added some fun stuff, like quickest tests first and things like that, to try to find, basically, if there’s something that’s going to fail, to find that as quickly as possible.

00:12:06 Yeah, exactly.

00:12:08 That’s nice. And then coverage has changed some recently. Coverage added, I think, the ability to find out which test has run which line of code, or something.

00:12:22 Did that influence testmon, or did that change happen after the algorithm for testmon was developed?

00:12:31 Well, it was added only recently, so testmon predates this functionality in coverage, and it actually makes my life a little bit more complicated, because the design goals of the coverage functionality are quite different, I think, even though it sounds like it should do the same. Coverage does it, for example, mainly for reporting, and maybe creating some HTML reports. And also the data format changed. But in testmon, in the recent improvements, I concentrated on keeping the database of testmon consistent all the time. So even if you interrupt the run, it will still work the next time; it will pick up where you left off.

00:13:33 So this is one thing. Another thing is, I wanted it to be very quick, so it takes a couple of hundred milliseconds, even on a very big code base, which I don’t think the storage format of coverage.py would be able to do.

00:13:51 So there is a lot of optimization. The devil is in the details; really, the design goals are a little bit different. So we are not compatible with coverage 5.0 yet, unfortunately. Okay.

00:14:08 But I have an idea how to make it work. And also, for the future, I can imagine that we could read the coverage.py “who tests what” information, or contexts, as they actually call it, and use it, or transform it into a testmon database, or index it in some clever way. But it’s not straightforward. It’s not that I could throw that part out of testmon now, unfortunately.

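For reference, the coverage.py feature being discussed here is “dynamic contexts,” added in coverage 5.0. A small sketch of switching it on through the API (testmon does not use this; it keeps its own database):

```python
import coverage

cov = coverage.Coverage()
# Record which test function executed each line; equivalent to
# putting `dynamic_context = test_function` in .coveragerc.
cov.set_option("run:dynamic_context", "test_function")
cov.start()
# ... run the test suite here ...
cov.stop()

data = cov.get_data()
print(data.measured_contexts())  # context names, roughly one per test function
```
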
00:14:41 Right. Okay. So that’s one caveat then: if somebody’s using testmon on a project, it’s not compatible with the newest coverage.

00:14:51 Yeah.

00:14:52 Okay.

00:14:55 It’s not a deal breaker.

00:14:58 My use of coverage within a project, I guess it varies person to person, of course, but my usage is often that I have it as a tox job, so there wouldn’t be any conflict.

00:15:13 For instance, if I really wanted to, I could use coverage 5 in a different tox target, and use testmon and a previous coverage for my development. I don’t see any conflict in that; it lives side by side.

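That side-by-side setup might look something like this in tox.ini. This is a hypothetical sketch, with illustrative env names and version pins, not a configuration from the episode:

```ini
[tox]
envlist = testmon, coverage5

[testenv:testmon]
# Development env: testmon with a pre-5.0 coverage.
deps =
    pytest
    pytest-testmon
    coverage<5
commands = pytest --testmon

[testenv:coverage5]
# Reporting env: full run under coverage 5.x.
deps =
    pytest
    coverage>=5
commands = coverage run -m pytest
```
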
00:15:30 Yeah. But we will fix it, probably. It’s not going to be that easy to fix for the contexts functionality, but I think people are not using that much yet. If you just don’t care what coverage version you have and upgrade to 5, we will be able to be compatible with it.

00:15:51 Okay, so that’s on the horizon. Are there any other changes, other than that, that you’re looking forward to adding?

00:16:02 When I get feedback from the people using it, I quite often hear that on big projects with long test suites, people stop running the whole test suite locally; they rely on the CI server to do that. So if they are changing something, they manually type the tests or test class which should be executed, to make sure that they didn’t break anything, or to make sure that the code which they are just changing gets executed and they can fix the basic things, but then they commit and wait for the result.

00:16:49 That’s what happens, because it’s so inconvenient to wait for a long time and have the machine processing something, that they rely on this.

00:17:02 So this is a big barrier, because when you never run the whole test suite locally, you never have the database and you can never use it, even if you maybe have a strong need to use it for a while. If your test suite takes 20 minutes and you do not have 20 minutes, then you commit to the CI anyway and wait for the whole thing. So I hope to add a component which would aggregate the information on the server. It would probably be executed on the CI server, so with every run the database would be updated, and whenever you change anything in the code base, testmon automatically downloads the database and then just gives you the subset of tests.

00:18:01 Oh wow, that’d be pretty cool.

00:18:03 Well, another use for this functionality, another piece of feedback which I got, is that if you change branches very often, which might be the case on large projects for many people, then the changes are so significant that you do not get that much of a benefit from testmon; it loses its sense. But if for every commit there is a database up there which you can quickly download, then this problem goes away.

00:18:39 That’s an interesting problem of switching back and forth between branches. You clearly are changing a lot of files, but not really; locally, it just looks like you’ve changed a lot.

00:18:52 Well, it might be that significant that it doesn’t bring that much benefit. Also, what should maybe really be mentioned is that there is some overhead. Coverage itself has overhead: on typical projects which I tried, it’s about 20 to 25%. On computationally intensive projects it might be more, but yeah, this is the typical overhead. So this is like the minimum overhead which testmon has to work with.

00:19:24 So the whole test run is 25% slower than without it. I’m not using it currently.

00:19:36 But I’m excited to use it on some future projects.

00:19:40 Maybe I have a question. I think in your job you’re actually testing C++ code, that’s a lot of what you do?

00:19:51 Yes.

00:19:54 Okay, yes. But I’m thinking, even with test development, being able to use coverage of the test suite itself. So while I’m modifying a fixture or something, I can run the tests that are affected by helper functions or fixtures.

00:20:16 Yeah, that’s a good point, but no, that’s another limitation, of course: it doesn’t take into account coverage of the C++ code, so it wouldn’t pick up the changes in the product under test, unfortunately.

00:20:35 Yeah. In the environment that I’m testing in, I don’t have access to the source code anyway, because the C++ code is running on a different machine, so it’s remote access. Is there anything that we didn’t cover about this problem space, or about testmon, that you’d like to highlight?

00:20:56 Yeah, I would maybe like to mention that I think it’s good to start using testmon on a smaller project too, because it automatically selects the relevant tests, and also randomizes a little bit the order in which it executes the tests, and sometimes runs only one test, and so on. So that allows you to spot when you introduce a dependency in your tests, an unwanted hidden dependency. Like if you maybe didn’t clean up properly in a previous test, or didn’t make a complete fixture correctly and relied on something from a previous test, then this makes you spot it really quickly, because it selects for you the tests which are relevant for recent changes.

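A contrived example of the kind of hidden dependency this surfaces. This is my illustration, not from the episode: the second test only passes if the first one ran before it, so running a subset or a different order exposes the problem immediately:

```python
# Unwanted hidden dependency between tests: test_remove relies on
# state that test_add leaves behind in the shared module-level list.
items = []

def test_add():
    items.append("a")
    assert items == ["a"]

def test_remove():
    items.remove("a")  # fails if test_add did not run first
    assert items == []
```
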
00:22:00 So that’s a very good thing to have. Independent and isolated tests are very important when your project is going to grow; then you would probably like to run the tests in parallel, maybe on different machines, and also keep using testmon. If you don’t have that, then it’s going to be a mess; it’s going to be non-deterministic results. So that’s one reason why I think it’s cool to use testmon from the beginning.

00:22:36 Yeah. And I also see that you have testmon.org, and there’s at least a blog post up there. Is that you?

00:22:45 Yes.

00:22:46 There’s an article on how to integrate this with PyCharm as well.

00:22:52 Is that still worth reading?

00:22:54 Yes, it’s a general article, so it’s not about testmon.

00:23:01 Okay.

00:23:02 But yeah, we had the experience that some people didn’t know how to set it up.

00:23:09 So we wrote a little bit of a description and a guide, as a piece of information. There is actually one other thing in this area, in this problem space, and again, it works independently from testmon so far: we’re working on a little plugin for PyCharm which can display exceptions in the editor itself.

00:23:46 I showed it on Twitter. I think you also reacted. I think you saw the screenshots.

00:23:53 But yeah, this is one thing which I think is a nice tool, and I would like people to use it, and we might integrate it with testmon later, because at this point it’s independent. But I would like the testmon database to be used for storing the exceptions and then displaying them in PyCharm in real time. And it’s actually the same theme, I would say, because I like interactivity in development and in running the tests. Again, this was another thing which was annoying: I was in PyCharm, writing something and executing the tests, and then I either executed the tests in the console and had to parse the stack trace myself, in my head, and try to find the lines, or use the PyCharm test runner.

00:24:53 But again, you would have to click through maybe a lot of tests and a lot of levels, and whenever you run the test again, it disappears.

00:25:10 You expand it, and then you have to expand it again. So I didn’t like the PyCharm test runner that much. But anyway, this is like an addition.

00:25:19 It shows the exception where I think it’s like the most interesting and feels the best.

00:25:26 Okay. And is this plugin something that I can access, or is it still…

00:25:30 Yes, it’s called Runtime Info, and you can look it up in the PyCharm plugin marketplace, the plugin repository for PyCharm.

00:25:43 I’ll try to find a link to it to put in the show notes, then.

00:25:47 Yeah, that would be cool.

00:25:48 Cool. Well, also, I’ve got to thank you, because I like people that see a problem, even just for themselves, and work on making it an open source project so that other people can benefit too.

00:26:06 Both of these projects that you worked on are things you saw a need for and open sourced. And it’s not free.

00:26:17 Like you said, you have limited time to support it.

00:26:20 I’m sure quite a few people are using it.

00:26:26 Thanks.

00:26:28 How much time do you spend maintaining testmon?

00:26:31 Well, it really comes in waves. When I was writing it, I had to learn a lot, because, as I mentioned, I wasn’t familiar with pytest at all. So it was many months, or many weeks if I counted full time, and then for a couple of years it was just maybe a couple of hours a month, or not even that much. And then recently I had another go at it and rewrote significant parts, and that was almost full time.

00:27:13 At this point it’s not like a burden; there aren’t a lot of requests to react to.

00:27:22 So I pretty much work on it at my own pace and so on. It’s not that much effort to maintain it.

00:27:33 Okay. But still not zero. And I appreciate it.

00:27:38 Yeah, it looks like there’s quite a bit of activity. 364 stars. Pretty cool.

00:27:44 Yeah. Well, that was a motivation to do the rewrite and to improve the things which I thought were lacking. In the past I didn’t really know what I was doing, I would say, right? Because I was also new to coverage, using it as a library.

00:28:08 I think the code base right now is much better. It’s much nicer, and I hope people will use it. And even though the quality wasn’t that good, year by year there were stars appearing on GitHub. So at some point I saw that people are probably using it, and that motivates me to improve it.

00:28:38 It made you a better developer in the process as well.

00:28:41 Well, definitely. Of course, I’ve been developing maybe some enterprise software, where you don’t have to think that much about the algorithms and the underlying stuff, the low level, or Python itself. It’s about different things in this enterprise environment. But this was really challenging and made me a much, much better developer.

00:29:11 Well, you jumped into the deep end. Most people don’t start their involvement with pytest by writing a plugin.

00:29:18 Yeah. Kudos.

00:29:23 Anyway, thanks for coming on and talking about this.

00:29:28 And if people want to know more, I mentioned previously that there’s a website, testmon.org. Is that the best place for people to learn, or do you have any other suggestions?

00:29:39 Yes, that’s the best place. There will also be an email address, and maybe a Twitter handle, there, so people can reach out and let me know. I would really like people to let me know what doesn’t work for them, or what would have to happen so that it works for them.

00:29:57 So it would be cool if people reach out.

00:30:00 Yeah. Wonderful. Well, thanks again and we’ll keep up with you and talk to you later.

00:30:05 Yeah. Thank you very much, Brian.

00:30:07 Thank you, Tibor. Very cool plugin and a wonderful interview. Thank you to Patreon supporters. Help support the show yourself with a buck or two at testandcode.com/support.

00:30:20 Thank you to all of the amazing people that hang out in the Python Testing Slack channel. You help people every day, and it’s super awesome. Join the fun at testandcode.com/slack. And thank you to PyCharm for sponsoring this episode. The link for the extended trial is at testandcode.com/pycharm. That link is also in our show notes at testandcode.com/98. Follow the show on Twitter @testandcode, and follow me, @brianokken.

00:30:50 That’s also the best way to reach me for episode suggestions, interview requests and offering to come on the show yourself.

00:30:58 That’s all for now. Now go out and test something.