Interview with Dave Hunt @davehunt82.

We cover:
Selenium: http://www.seleniumhq.org/
pytest: http://docs.pytest.org/
pytest plugins:
pytest-selenium: http://pytest-selenium.readthedocs.io/
pytest-html: https://pypi.python.


Transcript for episode 25 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


This is Test and Code, episode 25. I’m your host Brian Okken, and on this episode we have Dave Hunt. We talk about testing web interfaces with Selenium. We talk about pytest and several plugins he maintains, including pytest-selenium, pytest-html, and pytest-variables. We also talk about tox, xfail philosophy, working at Mozilla, testing practices at both Mozilla and in the Selenium project, and a whole lot more. Today’s podcast is supported by Patreon supporters. Visit pythontesting.net/support to find out how you can help get more shows on the air and help me pay for services like audio editing and transcripts. I also have another supporter that I need to thank, and that’s Michael Kennedy from the podcast Talk Python To Me and his new podcast, Python Bytes. If you haven’t checked it out yet, please do. Michael has been a supporter of this show, both through Patreon and just his moral and technical support, from the very beginning of the show, and he’s helping me to come up with ways to get more podcasts out faster. And for that reason, I would like to point out some of the stuff that he’s got to offer.

I want to highlight the courses that he has provided at training.talkpython.fm.

There are three courses there, and I think that they’re very appropriate for many of you listening to this show. I know there are a lot of you that are great testers and great programmers, but sometimes feel like perhaps you jumped into Python headfirst without taking the time to learn some of the core principles of good Python programming. If you’re in this boat, I think the first two courses are just right. And if you feel pretty comfortable with your Python but want to take it to the next level, the third might be just the sweet spot. The first one is Python Jumpstart by Building 10 Apps, although he does start out at the beginning of Python.

No matter where you’re at, I think you’ll learn something from this. The second is Write Pythonic Code Like a Seasoned Developer. I love this course. I especially like this course for people that feel real comfortable in other languages and want to stop thinking in C or stop thinking in Java or whatever they came from and start thinking like a Python developer — this course is for you. And the last is Python for Entrepreneurs. However, I don’t think you need to be an entrepreneur to benefit from this. It even covers things like Pyramid development and continuous integration. So go take a look at training.talkpython.fm. Click on the courses and look at the list of topics covered. Power up your Python skills at training.talkpython.fm.

Now on with the interview.

Welcome to Test and Code, a podcast about software development and software testing.

For people that don’t know who you are, could you introduce yourself?

Sure. Yeah. So my name is Dave. I work for Mozilla. I’m an automation engineer, and that basically means that I work on the tools that we use to write our automated tests. I mostly work on UI automation, which basically means anything that a user can click on or interact with. So it’s very high level.

Some people would call it end-to-end testing because it sort of tests the full stack. Yes, that’s what I do for Mozilla. I’ve been at Mozilla now for just over six years, and I’m also a contributor to the Selenium open source project, which I’ve been doing since about 2009, so about seven years.

Selenium is a UI testing framework for the web browser.

Okay, so the testing you do for Mozilla, is that all through Selenium, or do some of the tests exercise that code a different way?

Most of what I do at the moment is through Selenium.

Mozilla does have a number of different projects, and for some of those Selenium isn’t appropriate.

For example, we have UI testing for Firefox itself as our flagship product, but those tests historically have used a number of different tools and technologies.

There’s a possibility in the future they’ll be able to use Selenium.

So not everything is using Selenium, but most of what I do at the moment is with Selenium. So Selenium is a suite of tools whose goal, basically, is to be able to simulate a user interacting with a browser.

This started with sort of executing JavaScript within the browser, and there was a Firefox add-on called Selenium IDE, which would let you record and play back interactions. So it would notice when you’re clicking something, it would have a way of locating that thing that you’re clicking on, and record that as a test action. And then once you’ve recorded your test, you’d be able to play it back and you’d see it running.

It’s developed into the ability to run these tests remotely with a Selenium server, and that allows you then to write these tests in many different programming languages. So there are basically client bindings for Java, Python, Ruby; there are a number of them for JavaScript. I believe there are some for PHP as well.

Does it have to be driven through something like Python or Java?

Yeah, it needs to be. Well, I suppose if you use the IDE, which is the Firefox add-on, you don’t need any client bindings, because really that’s just recording your actions and playing them back. The format that it typically records in is a table-based format — a command and then two arguments — which allows you to do things like verifying expected values, but it’s very limiting. You get the full power of Selenium when you use a proper client language, where whatever you’re able to do in that language you’ll be able to apply to your tests. So if that means flow control or object-oriented abstractions, all that sort of thing — the true power really is when you use one of those client bindings.
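
To make that concrete, here is a minimal sketch of driving a browser through the Python client bindings; the URL and element names are placeholders rather than anything from the interview.

    # Minimal sketch: drive a browser with the Selenium Python bindings,
    # using ordinary Python for flow control and assertions.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/login")             # placeholder URL
        driver.find_element_by_name("username").send_keys("user@example.com")
        driver.find_element_by_name("password").send_keys("secret")
        driver.find_element_by_name("submit").click()
        assert "Dashboard" in driver.title                   # placeholder expectation
    finally:
        driver.quit()                                        # always close the browser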

Do you maintain pytest-selenium?

So I’ve been using Python since I joined Mozilla. Before that, I was involved more on the Java side of things. Joining Mozilla meant being introduced to Python and being introduced to pytest.

We’ve built up a number of tools that help us to run our tests. One of those is a pytest plugin called pytest-selenium, which really just allows us to simplify how we launch and maintain a browser instance.
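
As a rough illustration of what the plugin buys you: a test just asks for the selenium fixture that pytest-selenium provides, and the browser is chosen on the command line. The URL here is a placeholder.

    import pytest

    @pytest.mark.nondestructive           # pytest-selenium guards destructive tests
    def test_front_page(selenium):
        selenium.get("https://example.com")    # placeholder URL
        assert "Example" in selenium.title

    # Run with a browser chosen at the command line, for example:
    #   pytest --driver Firefox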

I also contribute to the Selenium project itself, and most recently I’ve been contributing to the Python client bindings, but that’s only been fairly recent.

You don’t have to use pytest, right? You can use unittest or something for controlling it.

Yeah.

So the Python client bindings are agnostic to the test framework that you choose to use.

So, yeah, you could use unittest. We actually used nose for a little while, then we switched to pytest. The main reason we switched to pytest was we wanted to be able to run tests in parallel, and we also wanted to be able to generate an xUnit-style report.

And we had problems doing that in nose.

Basically, the report would be all messed up, and there was an open issue for it that didn’t look like it was actually getting any attention. We found that pytest did that for us, so we switched to pytest. And really, in many ways, we haven’t looked back.
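
For context, the xUnit-style XML report comes straight from pytest; parallel execution is usually added with a separate plugin — pytest-xdist is the common choice, though the episode doesn’t name it — roughly like this:

    pytest --junitxml=results.xml        # xUnit/JUnit-style XML report
    pytest -n 4 --junitxml=results.xml   # parallel run, assuming pytest-xdist is installed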

So most of the testing you do right now, is that in pytest? Yeah, actually, I’m looking at the page right now. There are quite a few goodies inside the — what am I looking at? — the pytest-selenium project. At the bottom of the list, I see that there’s HTML reports. Is that different than the pytest-html plugin?

So a little bit of history, really. Probably about four or five years ago now, I’d been working at Mozilla for a while, familiarizing myself with pytest, and we had a lot of duplicated code between our repositories. So we had a repository for each one of our projects, and quite often we’d have a separate repository which included the UI tests for that project.

And so using pytest, we had a conftest.py, which is essentially a local plugin, which just instantiated the Selenium object for us and did some other stuff that was useful. But we were finding we had the same conftest.py in a lot of our different repositories.
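
A rough sketch of that kind of local conftest.py plugin — not Mozilla’s actual code — might look like this:

    # conftest.py -- local plugin: hand each test a Selenium instance.
    import pytest
    from selenium import webdriver

    @pytest.fixture
    def selenium(request):
        driver = webdriver.Firefox()           # start a browser for the test
        request.addfinalizer(driver.quit)      # and make sure it gets closed afterwards
        return driver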

If we enhanced one of them, the others were not necessarily getting the advantages of whatever the enhancement might be.

And so I created a plugin that did all of the things that we wanted to do. Providing us with a Selenium instance was one of them, generating an HTML report was another, and the other main one was injecting variables — in our case, credentials for the tests to use, because some of our tests would require setting up an account first of all and then running the test with those credentials. And so we had one plugin that did all these things. Because it did so many things — and I’m terrible at naming things anyway — I just called it moz-webqa, the Mozilla Web QA plugin.

Then fast forward four years or so: I’d learned a lot more about pytest, a lot more about plugins. I’d learned a lot about fixtures at this point as well, which I think had been introduced in the interim. And so I decided to take this plugin and break it apart and just have single plugins for the various different features. So one of those was pytest-html for the HTML report.

pytest-variables was what we then used for injecting these credentials into our tests. And pytest-selenium was the main one. So, yeah, they all work together, but they can be used independently.
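
As a sketch of how the credentials piece fits together with pytest-variables: you point the plugin at a file of values and read them from the variables fixture. The file contents and field names below are made up.

    # credentials.json (illustrative contents):
    #   {"username": "test-user", "password": "not-a-real-secret"}

    import pytest

    @pytest.mark.nondestructive
    def test_login(selenium, variables):
        selenium.get("https://example.com/login")      # placeholder URL
        selenium.find_element_by_name("username").send_keys(variables["username"])
        selenium.find_element_by_name("password").send_keys(variables["password"])
        selenium.find_element_by_name("submit").click()

    # Run with:
    #   pytest --driver Firefox --variables credentials.json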

I’m going to have to check out pytest-variables. That sounds neat. Also, you’ve been at Mozilla for about six years, you said.

Yeah, just over. Yeah.

When did you create these plugins and stuff? How old are these?

I joined Mozilla in 2010. I think that moz-webqa plugin was introduced, I want to say, August 2011, perhaps, maybe 2012 even. And then, yeah, everything was just stable, working fine with those. I had an increasingly nagging feeling that we should do more with those, but I changed teams. I was originally working partly on Firefox automation, and that wasn’t necessarily UI, and I was also working partly on the Web automation.

But then Mozilla had a lot of focus on a new project, which was our mobile operating system, Firefox OS.

So for a while I was working on that team and I was doing a similar kind of thing, working in UI automation, but I was using different tools. And so for a long time, those plugins — or that main plugin — were just really in maintenance mode. I would jump in and fix something if it needed to be fixed. And then a couple of years ago, I rejoined the Web automation team, the Web QA team, and I had this thing that I really wanted to just come back and tidy all up. And so at that point, probably approaching two years ago now, I broke those out and created separate plugins.

Okay, well, these are really pretty cool. And I do notice that you can use it to launch not only Firefox, but Chrome and Internet Explorer, and really anything — that’s pretty cool.

It basically just allows you to do whatever you can do with Selenium, but surfaces some of those things on the command line and, for convenience, allows you to configure some of them inside your setup.cfg or wherever your configuration is stored. For people who are just using Selenium in one project, you can just create your own fixtures that instantiate a Selenium object. But for us, we had very similar requirements on several projects, and so just having it centralized was useful. It’s hard to know how many people are using your plugins, but it seems like even when it was moz-webqa, this one big plugin that did many things...

We were even getting people giving feedback and raising issues. So people were using it and finding it valuable. So, yeah, I think that’s the case also with pytest-selenium.

One of the things that you sent in your list was that you’re using tox as well.

We just started using tox, really. I was only really introduced to it this year, at the pytest development sprint this summer. That’s when I spent a little bit of time getting familiar with tox, and I’ve set it up for each of my plugins.

It allows me to really easily test on multiple Python versions. That’s the biggest value I get out of tox. And so I’ve started introducing it to some of the projects at Mozilla, and I’ve introduced it to the Selenium project itself for running their Python tests.

You described this briefly as a shake-up. Was there resistance to your changes, or how has that gone over?

I was referring there mainly to the changes to the Selenium project itself. And it was a shake-up, really, because it was a big change. There was no resistance.

There’s very little resistance at all in this project. There’s a lot of: if you’ve got commit access, really you’re trusted to do what’s right for the project. This came about because I wanted to make some changes and some improvements to the Python client for Selenium.

But at the time, the tests were very difficult to understand — how to run them or how to debug them.

And also what was possibly even worse, they weren’t being run regularly on any kind of CI system.

So it would be very easy to commit something you think would be an enhancement or improvement but actually regressed behavior, and you wouldn’t know about that necessarily until somebody raised an issue.

Even if there was a test that would have failed. So I wanted to first address this before I got too carried away with making enhancements to the Python client.

The tests as they were: they were unit tests being run using pytest, but they were being built as part of the build system.

There was essentially a template file, and the build system was Rake, and it was basically taking this template and creating multiple suites. So there are several different browsers — Chrome, Firefox, Safari, PhantomJS — and there’s also a remote driver. And so for each one of those browsers, a copy of the test suite was being written to disk, and then after that was all done, the tests were executed. So it meant that you couldn’t just say “pytest, run this test”, because that test didn’t exist yet. You needed to always use the build system to generate those tests, and then you could run them.

What I did is I threw away a lot of the Rake template system, made all of the tests pure pytest, and introduced parametrized fixtures for supporting the multiple drivers — the drivers being the different browsers.

Now we have this common suite of tests that you can run directly.

There are no template parts to those files; they’re just pytest. It takes a driver on the command line — one or more drivers, I should say — and then for each one of those drivers it will run those tests.
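
A rough sketch of the pattern described — not the actual Selenium conftest — where a repeatable command line option feeds a parametrized fixture, so the same tests run once per browser:

    # conftest.py (illustrative only)
    import pytest
    from selenium import webdriver

    def pytest_addoption(parser):
        parser.addoption("--driver", action="append", default=[],
                         help="browser to test against (may be given more than once)")

    def pytest_generate_tests(metafunc):
        if "driver_name" in metafunc.fixturenames:
            metafunc.parametrize("driver_name", metafunc.config.getoption("--driver"))

    @pytest.fixture
    def driver(request, driver_name):
        instance = getattr(webdriver, driver_name)()   # e.g. webdriver.Firefox()
        request.addfinalizer(instance.quit)
        return instance

    # pytest --driver Firefox --driver Chrome   runs each test once per browser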

Now are you driving this from some continuous integration server?

Yes. So that was another aspect of what I wanted to do here: I wanted to get the tests much more easily maintainable and runnable locally, but I also wanted to make sure that on every commit, every push or every pull request, we run these tests. So Selenium was already using Travis CI and Jenkins. What I did is I made sure that, once I had this hooked up using pytest and using tox, I modified the Travis configuration to just be a multi-build or multi-job matrix and make sure that it was running the Python tests against each of those browsers — each of the ones that’s possible in Travis. For example, we can’t run the Safari tests or the Internet Explorer and Edge tests on Travis. And so, yeah, the situation we have now: every time a pull request is opened, every time a commit is pushed, the tests will all run.
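
A generic sketch of that kind of Travis configuration — one job per browser, each delegating to tox — might look like the following; it is not the Selenium project’s actual file, and the environment names are illustrative.

    # .travis.yml (illustrative sketch only)
    language: python
    python: "2.7"
    env:
      - TOXENV=py27-firefox
      - TOXENV=py27-chrome
      - TOXENV=py27-phantomjs
    install: pip install tox
    script: tox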

Now one of the things that I’ve noticed with actually all the continuous integration servers is that the plotting and graphing of the results is mostly based on pass/fail. I think maybe some of them do skip and error also, but pytest has xpass and xfail. Do you utilize those in either Selenium or Mozilla testing?

We do, in both. So yeah, I’m a bit opinionated about xfails, and I think it differs from some other people’s opinions. For an xfail, I am of the opinion that that test is always expected to fail. Perhaps you’ve written it before you’ve written the functionality, or there is a bug that is reproducible 100% of the time. You can xfail that test then and put a reason there — a reference to that bug — and then, if that test starts passing, I actually want that to cause the suite to fail, because something’s changed here. However, the other opinion, or another opinion, is that you can use it for marking tests as flaky — they fail sometimes — and when the xfail is not strict, which is unfortunately, in my opinion, the default, it means that those tests could start passing and they would still just be ignored. I mean, you have the coverage back, but you don’t have that reported back to you. So yes, we use it at Mozilla, and I try to encourage that kind of use and behavior.
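
In pytest terms, that philosophy maps onto a strict xfail; the bug number below is a placeholder. (Strictness can also be made the default with the xfail_strict ini option.)

    import pytest

    FEATURE_IMPLEMENTED = False   # stand-in for behaviour that doesn't exist yet

    @pytest.mark.xfail(reason="bug #1234: feature not implemented", strict=True)
    def test_new_feature():
        # Fails today. Once it starts passing, strict=True turns the
        # unexpected pass into a suite failure, so the mark gets removed.
        assert FEATURE_IMPLEMENTED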

Selenium has used it, but has used it slightly differently still, and I think that was a misconception or misunderstanding of the xfail feature. That is, it’s possible to decorate a test and mark it as an expected failure, in which case, if it does encounter a failure, it’s marked as xfailed. But it’s also possible to have an imperative xfail within the test, and if that’s encountered, the test will always exit with that as its outcome — xfail — and it means that test could never xpass. That’s what was used in Selenium. And so I’ve removed all of those and made them all decorated xfails, and it just then allows these tests, potentially at some point in the future, to xpass, and we can then remove the xfail and tighten it up. But when you were saying about the CI integration, yeah, you do just get a pass or a fail for the entire run, and that doesn’t necessarily go into the details of which tests failed or which tests passed, or which tests had these other outcomes. The best I’ve seen that deals with that is CircleCI, which allows you to publish artifacts. In fact, Travis does allow you to publish artifacts as well, but it’s not surfaced so well in the UI. And when you can publish an artifact, you can publish the HTML report that includes all of your different test outcomes.
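The distinction he draws is between the marker and the imperative call; a small sketch:

    import pytest

    @pytest.mark.xfail(reason="bug #1234")   # decorated: can XPASS once the bug is fixed
    def test_decorated():
        assert True                           # currently reported as XPASS

    def test_imperative():
        pytest.xfail("imperative: always ends the test as xfailed")
        assert True                           # never reached, so it can never XPASS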

I was curious about how you did that. So you use pytest-html and publish that as an artifact, so that when you look at the details of a particular test run, you can see those.

Yeah, because the xUnit-style report really doesn’t. I mean, you can have a pass, fail or skip. And I think xfails are marked as skipped. I don’t know whether or not xpasses are marked as skipped, but I think that must just go back to the origins of xUnit-style tests not having these other potential outcomes.

There’s a mapping between what the outcome of a test is in pytest and what you can put into the JUnit XML.

So I’m actually kind of annoyed that we’re still using the XML style from Java to run our Python tests, but that’s the standard right now. You work at Mozilla. Where are you located? You’re in England somewhere?

I am, yeah. I’m in the southeast of England, in Kent.

I work from home, so I’m a remote employee. There are a lot of us at Mozilla. It’s not a requirement that you live near an office.

Really, working for an organization that lives on open source means that if you have an Internet connection, you can be productive and you can work. So I mostly work from home. There is a London office. I occasionally travel into that, but it’s at least an hour-and-a-half journey each way, and I’m quite happy and probably more productive if I’m working from home. But occasionally I have a trip out to California, to Mountain View — that’s where our head office is.

And then twice a year the whole company gets together for an all-hands.

That’s cool. You get to rack up the frequent flyer miles.

I have in the past.

Yeah. More recently — so when I joined Mozilla, I didn’t have any children; I’ve now got two little boys. And so more recently I’ve perhaps preferred not to travel so much, but it’s a nice thing to be able to go and see new places.

And yeah, I love traveling to conferences. In fact, a highlight for me this year was going to Freiburg in Germany for the pytest development sprint.

It was amazing to meet such awesome people working on an open source project, contributing their valuable time.

Yeah, I would have loved to have been there, I know that. I just talked with Raphael Pierzina and he was just talking about how awesome that sprint was.

I also like how you guys did the crowdfunding to try to get plane tickets for people that couldn’t get them themselves.

Yeah, it was a great idea. In fact, I was mentoring somebody during the summer to work on some enhancements for the pytest-html plugin. It just so happened her internship started just before Freiburg and the sprint. And so I suggested that she reach out and see whether or not some of the funding could support her traveling out there — and it did. So it was great. She was able to start her internship working on a pytest plugin, but surrounded by pytest’s core contributors.

You work remotely. What is the main interaction you have with your coworkers — is that through email, or...?

Mainly it would be IRC. But yeah, it’s email, IRC and video conferencing. I’m constantly on IRC. Pretty much the entire organization and many of our contributors are on IRC.

That’s where I live, effectively. I see it as: if I’m logged into IRC, that’s the equivalent of someone seeing that I’m sitting at my desk.

The pytest-html plugin — I assume that does report xfails and xpasses within it.

It does, yeah. So any outcome that you can generate from pytest will be included in the HTML report. By default, there are all of the main outcomes: pass, fail, error, xfail, xpass and skip.

There is support in there for another plugin, which is pytest-rerunfailures, which changes the outcome on a failure to “rerun” and then reruns the test. And so the HTML report will also show reruns, and it can be enhanced for additional outcomes if there happen to be others that get added.

Okay, rerun. So the outcome is a rerun and then you can rerun.

So pytest-rerunfailures was written by a Mozilla contributor, and basically it’s a plugin I try to recommend people never use, but it’s very valuable in those cases where you absolutely have to use it. So if you have a situation where you have tests that are flaky due to environmental issues, or something that would basically be very costly for you to get to the bottom of, and you know that it will pass next time and it’s not really causing you a problem, then you can use pytest-rerunfailures to either individually mark tests as known to be flaky, or you can just say blanket rerun. So on the command line you can say rerun up to five times, for example.

And what it will do is, if it encounters a failure on one of those marked tests — or any test, if it’s done on the command line — then it will automatically rerun that test and register an outcome of rerun. And then on the final attempt it will record whatever the outcome was. So it will still allow a test to ultimately fail, or hopefully pass.
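
A sketch of the two usages of pytest-rerunfailures just described — a per-test marker and a blanket command line option:

    import random
    import pytest

    @pytest.mark.flaky(reruns=2)          # per-test: rerun up to twice on failure
    def test_sometimes_flaky():
        assert random.random() > 0.2      # stand-in for an environment-dependent check

    # Blanket rerun for the whole run (the risky option discussed below):
    #   pytest --reruns 5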

Yeah, okay. I can see the danger there.

We have some environments that were particularly troublesome, and the problem is someone had put a blanket rerun on our Jenkins infrastructure for pretty much all of our tests, and it was fine. But that blanket rerun wasn’t necessarily enough in one instance, so it was bumped up, and then it was bumped up again. So in the end we had five automatic reruns. When you’re talking about unit tests, that’s probably not too much of a problem, although I always prefer to fix the test — fix the intermittency. But the problem with UI tests and Selenium tests is quite often they take quite a long time to run.

We would have instances where a test would time out looking for an element, or waiting for an element to be displayed on the screen. And if that was the case, our timeout might have been 10 seconds, and we would be automatically rerunning that test five times. So we’ve now added almost a minute to our test suite run time.

However, our timeouts often weren’t 10 seconds. They used to be configurable on the command line — I’ve removed that now — but at one point I saw timeouts of five minutes being set. And so you had a test that would time out after five minutes and then be automatically rerun five times.

And if this was a common area of functionality — perhaps login, and 80% of your tests needed to log in — you suddenly have a test suite that takes hours to fail for a genuine reason. And so that was a problem, and we’ve addressed it mostly by not relying on the rerun failures plugin. And I haven’t so much hard-coded the timeouts to 10 seconds, but they’re a lot harder to increase.

Switching gears just slightly, I’m looking at the pytest-html page.

Of course, I’m not always good about this, but I’ll put a link to all the things that we’ve talked about in the show notes. There’s a screenshot where it shows — it looks like there’s a screenshot of the website that you’re testing.

Does that happen automatically, or is that something you can set up, that you should screenshot on failure?

I have to try and remember how we implemented it.

But I’m pretty sure that what we did is implement custom hooks when we’re generating the report, and that allows you then, in your own suite, to have a local plugin using conftest.py which implements this hook and adds additional data to the report. If you’re using pytest-selenium, we already do that. So by default, on failure, we’ll gather additional debug information: we grab the HTML page source, we grab a screenshot if we can, we grab the logs from Selenium as well, and they get output in the report. But yes, simply, it’s extensible. So if you have some other information you’d like to put in there, it’s fairly straightforward to do — there’s an example in the README as well.

Wow.

Yeah. Actually, I’m blown away. I’m just looking at this, and it didn’t occur to me that I can use this for other things. So I know that you’ve just explained it to me, but this hook wrapper thing — I can use this report too. In a different instance, I’m testing a device that has its own log files, and I could potentially then use the hook to say, when something fails, do some extra work — like, for instance, go out and grab the log files from the device and then include those.
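
A rough sketch of that idea, along the lines of the hook pytest-html documents; read_device_log is a hypothetical helper standing in for whatever actually fetches the logs.

    # conftest.py (illustrative sketch)
    import pytest

    def read_device_log():
        # Hypothetical helper: replace with whatever pulls the log files
        # off the device under test.
        return "contents of the device log"

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        pytest_html = item.config.pluginmanager.getplugin("html")
        outcome = yield
        report = outcome.get_result()
        extra = getattr(report, "extra", [])
        if report.when == "call" and report.failed:
            extra.append(pytest_html.extras.text(read_device_log(), name="Device log"))
            report.extra = extra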

That is so awesome. I’m going to have to try to get this into our system. This is great, and I didn’t realize that you can use this in conjunction with continuous integration, so that’s really great — just use it as an extra thing.

You said you were using Jenkins; we use Jenkins as well. There’s a plugin for Jenkins called Publish HTML Reports, and it works really well with this.

It basically gives you a link next to a build for the HTML report, and it will just show it to you in the browser. One limitation is that — I think it must be about a year or so ago now — there was a security issue in Jenkins, which meant that they enforced CSP, content security policy, on all files served via Jenkins. And so what that means is you don’t get the JavaScript executing in the report.

And it also meant for a while the CSS was broken as well, because it wouldn’t allow CSS if it was inline in the page. And so this is what Anna, my intern this summer, was working on: making sure these reports look good when CSP is enabled, but they’ll function better and look better when they’re not restricted by that.

So it does work even if the CSP is on.

Yes.

Okay.

Yes, it works and it looks pretty good, but it does mean that there are a couple of features in there that don’t. For example, at the top there’s the ability to filter by result, so you can, for example, just uncheck the passed results and they will disappear from the report, so you can focus on what you’re more concerned about, which is most likely the failures. There’s also the ability to expand and collapse the test details. But those features don’t work if you’re looking at the page provided by a server that has CSP enabled, such as Jenkins.

Okay, which version of Python are you using for your tests?

Most of Mozilla’s tests are using Python 2.7. For all of my plugins and any package that I’m writing in Python, I am trying to be Python 3 first and make sure they still support Python 2.7, but primarily I target Python 3. And so something that is going to be part of my mission next year is to make sure all of Mozilla’s projects and test suites work in Python 3.

And I’m actually working on getting the Selenium Python tests running in Travis against Python 3 as well. I’m hoping to land that in the next couple of days.

Okay, and have you guys upgraded to pytest 3?

Yes, so we did that as soon as possible, and we’d like to try and stay on the latest stable.

One of the very fun things that us pytest nerds like is the change in how the yield keyword works inside fixtures. Now, some of the fixtures that you’ve put in place for Selenium and HTML predate this, but are you utilizing this in any of your testing at Mozilla or at Selenium?

So the Selenium suite uses the yield keyword in fixtures. I’m a big fan of it. I think it makes the code much more pleasant to read. I’m a big fan of readable code, and maybe that’s why I’m such a fan of Python. But no, those plugins don’t currently use it. I think there might be a yield fixture — as in a yield_fixture — in pytest-selenium now, but the reason is I like to support the latest and previous major versions of pytest for the plugins where that makes sense. So I’m looking forward to when pytest 3.1, or indeed maybe even 4.0, comes out, and I’ll be able to update all of those fixtures to use the yield keyword.

For anybody that didn’t listen to the last episode where I talked with Raphael: the yield keyword is for when you set up a fixture in pytest.

The code in the fixture runs before the test until the yield keyword is hit, and then the code after the yield is held off and runs after the test. Did I get that right?

Yeah. You don’t have to create a Finalizer function and then add that to the request. It’s the same thing, I guess, but it’s a little more pleasant.
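
A side-by-side sketch of the two styles being compared:

    import pytest

    # Finalizer style: register the teardown explicitly on the request.
    @pytest.fixture
    def resource_with_finalizer(request):
        resource = {"connected": True}            # setup
        def teardown():
            resource["connected"] = False         # runs after the test
        request.addfinalizer(teardown)
        return resource

    # Yield style: everything before the yield is setup, everything after is teardown.
    @pytest.fixture
    def resource_with_yield():
        resource = {"connected": True}            # setup
        yield resource                            # the test runs here
        resource["connected"] = False             # teardown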

Have you used unittest or other frameworks before pytest?

Only very briefly.

Okay.

When I joined Mozilla, they were using some Python in their testing, and it was mostly unittest. We very quickly went through and then landed on pytest.

So far we’ve covered Selenium, pytest-selenium, pytest-html and the HTML reports, and tox. So tox — I have only played with tox a little bit. Tox is a way to run the same set of tests against multiple configurations, be that, like, different Python versions. Is that what you’re using it for?

Yeah, primarily. So my environments typically, for one of my plugins, would be making sure that it works on Python 2.7 and then 3.3, 3.4 and 3.5, and then I typically have additional environments for generating the docs (if it’s not just a README), doing flake8 Python linting, and in some instances a coverage report as well. So tox can take care of all of those, and it just makes it nice and easy. You just run tox on the command line, and assuming that it can find all of those versions of Python — if you’ve got them installed — it’ll just run all the tests for you.
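
A sketch of a tox.ini along those lines; the environment names and commands are illustrative, not copied from any of Dave’s plugins.

    [tox]
    envlist = py27, py33, py34, py35, docs, flake8

    [testenv]
    deps = pytest
    commands = pytest {posargs}

    [testenv:docs]
    deps = sphinx
    commands = sphinx-build -b html docs docs/_build

    [testenv:flake8]
    deps = flake8
    commands = flake8 .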

Do you also use tox for launching from Jenkins?

We do run the tests through Jenkins using tox as well. It’s very powerful in Travis because you can specify multiple Python versions, and then there is a package called tox-travis which will match the installed version of Python to an environment in your tox configuration and run that without additional configuration. In Jenkins, what we tend to do is pick one of those environments, so we’re only really running against Python 2.7 in our Jenkins at this point.

You went to that pytest sprint — what did you do there?

Yeah, my intention was to do a few enhancements to some of my plugins and take advantage of the knowledge of the pytest core developers that were present. What I ended up doing — well, I did do a few of those things. I actually moved a lot of the plugins that I developed into the pytest-dev organization, so I’m still the owner and maintainer of those plugins, but it also means that there’s more visibility from the core pytest developers on those, and anybody could do a release if they needed to. But the other thing: I got involved in what was happening at the sprint, so I landed one patch with Oliver Bestwalter, and we dropped the dot. Controversially, maybe?

Yeah.

So that was my big contribution during the Sprint.

My laptop has two stickers, one of them with the dot and one without the dot.

Yes, I think there are some people still supporting the dot, so that was my main contribution.

I worked with Danielle on a change — a fix that was bothering me personally — which was about root directory discovery in pytest. It was getting confused when you happened to have a value for a command line option that looked a little bit like a file path, and it was considering that as a potential test location and throwing off your root directory discovery. So that was fun, to really get into the whole path discovery part of pytest. It’s a bit of a jungle.

Cool. We’re running out of time a little bit, but the last thing I wanted to talk about was a thing that you just posted up, like yesterday or just recently, which was the Help Wanted issues.

This is pretty cool, I think. So this is on GitHub; I’ll put a link to this as well. But you’ve marked a whole bunch of the repositories that you work with with Help Wanted.

Yes.

In GitHub, one of the default labels is Help Wanted. So at Mozilla, we encourage community contribution — we have volunteers that come and help us work on anything that we do at Mozilla. And so this isn’t a particularly new thing, except I was able to go into GitHub and work out the search terms necessary for basically finding issues that I’ve authored with this label that are open and not assigned. And yeah, I think there’s half a dozen in there at the moment, but that link will always show any issues that match. And so these are issues that I have opened and would be happy to mentor somebody to contribute. There are things that I possibly would pick up myself sooner or later, but they aren’t necessarily blocking me.

I think that’s a cool idea. And I think the little extra thing that you said there, which is great, is that these are items that you wouldn’t mind somebody else taking on, and also that you would be willing to mentor if somebody was confused and wanted help with doing it.

I didn’t even know this was part of GitHub, and it’s pretty cool. And then also, one of the great things about having you put this up here is you figured out the magic search string, so that other people can do the same thing on their projects and put up a link to theirs as well. That’s pretty neat.
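
For reference, a GitHub search along those lines uses standard search qualifiers; substitute your own username and label:

    is:issue is:open no:assignee author:YOUR-USERNAME label:"help wanted"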

Absolutely. Okay.

And then I’ve just recently started working with Michael Kennedy of Talk Python To Me fame.

Oh, yes.

We’ve started a podcast together for news items. It’s called Python Bytes.

But he always asks people at the end of his podcast, like, what editor do you use? So I’m going to do a nod to him and ask you: what editor do you use?

Currently, I use Atom.

Okay.

I used to use Sublime and switched over to Atom. It does everything that I need.

Does it do VI bindings?

There are packages that you can install. I would be very surprised if there isn’t a package for vi bindings.

It’s not something I like to torture myself with.

Yeah. Okay. So I’ve been using Vim for so many decades that I can’t use any editor that doesn’t support vi key presses.

I even have trouble using Word anymore — like, just writing anything. But I think we’ve covered quite a bit. I want to thank you so much for coming on the show.

Thanks for inviting me, and we’ll talk to you later.