M. Scott Ford is the founder and chief code whisperer at Corgibytes, a company focused on helping other companies with legacy code. We talk about the company, about testing strategy, getting a handle on technical debt and process debt, iterative development, and a lot more.


Transcript for episode 30 of the Test & Code Podcast

This transcript started as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


Hello and welcome to Test & Code. On today’s episode, I’ve got an interview with M. Scott Ford of Corgibytes. We talk about legacy code, of course, but we also talk about how he got started with this company and quite a few other topics like technical debt, process debt, software testing and readable code, and a lot more. I think it’s a great interview, and one that you’ll get a lot of information out of, so I hope you enjoy it. Thank you to nerdlettering.com for sponsoring the show. Nerd Lettering has cool Python swag that’s way cheaper than traveling to a conference to get some swag. I really like the T-shirts and hoodies that they have up now, as well as the mugs and mouse pads that they’ve always had. I’ve already got a mug and mouse pad, and now I’ve got my eye on this Pythonista hoodie that looks really cool. Go to nerdlettering.com, check out the great products, and show the world how much you love Python. While you’re there, check out the Our Story link to see some hand drawings of how some of these ideas get brought to life. It’s really an interesting read. Don’t forget to use the discount code Test Code. It’ll save you 15%, and it will also let Nerd Lettering know that you got there from here. So thank you to Nerd Lettering. I also need to thank Patreon supporters. Of the 5,000 downloads each episode gets, there are about 35 of you out there that have contributed to the Patreon campaign. That seems like a small number of people, but I really, really appreciate it. If even just a handful more of you contributed a buck a month, it would help out quite a bit. So thank you to Patreon supporters. And if you want to join these awesome supporters, go to patreon.com/testpodcast, or go to testandcode.com and click on the Donate button. Now on to legacy code.

Welcome to Test and Code, a podcast about software development and software testing.

This is Test & Code, and I’ve got Scott from Corgibytes on, and I am excited to have you on.

Cool.

Thanks.

So since I’m not going to be able to do an introduction for you, who are you and what do you do?

I’m the co-founder and Chief Code Whisperer at Corgibytes.

We are a software development company that focuses exclusively on working with legacy code.

We’ve been accumulating a team of people who absolutely love it, and that’s our passion. That’s what we like to do. We like to make other people’s applications better.

That’s fascinating to me. Before you got into Corgibytes, how did this happen?

How did you get to start doing a consulting company around legacy?

It’s quite a long journey.

I job hopped a lot over my career.

My first job doing any kind of, like, significant software development was on the X-31 experimental fighter, and I was working on the testing tools to help determine whether or not the aircraft was safe to fly.

And the project that we had to do on that was to try to see if we could build a drop-in replacement for the testing tool, because it would only run on, like, 286 computers.

And the Navy was having a lot of trouble keeping 286 computers around at the time. This was around 2001, 2002, and it was quite a challenge when you could walk out to Best Buy and get a really decent computer that made an old 286 pale in comparison. So ours was kind of a spike project, just an experiment to see if it was even possible to build a drop-in replacement without disrupting any of the rest of the team. So we had to reverse engineer how that application worked and then make sure that we built a replacement that could run the same test scripts.

Okay.

And go through all the challenges of figuring that out. I kind of describe it as, like, Cucumber, but for an aircraft. Very much functional testing in terms of, like, what’s supposed to happen in different scenarios. And those tests were authored by someone with an engineering background and not a software background.

Yeah.

So I think things got started from there, where I had a lot of fun on that project. And after that project hit its natural life cycle, I kept trying to recreate that. I kept trying to find other places where I could have that much fun. And I noticed that I would have a lot of fun whenever I started somewhere new, and then after about six to eight months, the honeymoon was kind of over and I wasn’t having fun anymore. And so after about ten-ish years, I decided, well, maybe the secret is to start my own company, that perhaps I’m unemployable after trying a whole bunch of other stuff.

But then with the first couple of clients, we had the same thing happen. So my business partner, Andrea, she really sat down with me and helped me do some analysis: every single project that I liked working on, what was it that they all had in common? And out of that popped this idea that what I enjoyed doing was fixing bugs and cleaning up technical debt on every single one of those projects. And it was during those first six months that I was allowed to do that without being told not to, because it was very much in the, like, oh, go clean stuff up while you’re learning.

Go fix bugs and clean up technical debt as a way, as an exercise to learn the system.

But then I was told to stop doing that while it was still fun.

Yeah. So now you get to just hop around to new companies as a consultant.

Yes.

How many people are working on Corgibytes projects?

Our staff is right now at about twelve.

Okay.

Pretty much seven developers and the support staff.

And do you still get to get your hands dirty with code, or are you busy being a manager?

I still get my hands dirty with code. There is more management and kind of helping run the business that creeps in, but I try to keep, like, about half my time deep in the code.

We start off our projects doing the technical discovery, and I’m the one who spearheads the technical discovery projects.

What is the technical discovery?

We’ve drawn a lot of inspiration from remodeling buildings, so we call our approach to software development, software remodeling.

Okay.

And so our technical discovery is what we call a code inspection, which might be similar to a house inspection if you’re buying a new house. It’s just a way to kind of assess what’s there without any judgment, and kind of establish a baseline for if we make any improvements.

This is what we’ll be measuring ourselves against.

Does that involve talking with the engineering team that’s going to end up maintaining it as well?

Yeah.

If we have access to the team that built it, which sometimes we don’t, then interviews are great.

We try to keep them relatively short, about an hour or so, so we don’t eat up anybody’s time a whole lot, and we kind of try to be respectful of the timelines that people are under.

Okay.

One of the main things that we’re looking at is how well documented the system is and how maintainable the system is. So as a legacy project, if it had to change hands abruptly from one team to another, or from one developer to another, how detrimental would that be? That’s one of the things we look at, so not having a whole lot of interaction, I think, helps that. We do ask for where people think the trouble spots are, where they think we should be looking, and we also kind of get a heads up about whether there’s a particular area of the code base that’s on the roadmap to just get completely cut, and then we won’t spend a whole lot of time focusing on that piece.

Okay.

After you guys leave, are you the people maintaining this code, or do you leave it in a state that it could be maintained by somebody in the company?

We’ve done both, where we’ve been kind of maintaining these apps long term. So we’ll do an inspection, and then from that inspection we identify some things that we think would be of benefit in terms of paying down technical debt, ways the application could be improved, whether it’s making the test suite better or improving practices around deployment.

There are a lot of different things we look at that go into making the development team more efficient.

That’s the metric that we’re looking at.

I have this attitude that it’s technical debt that slows your team down.

That’s kind of where you observe the issue first: your team is not as productive as they once were, and that’s an indication that there’s some kind of debt, whether it’s process debt or technical debt.

So we take the time to really look at those and hone in and help improve those. And sometimes we’re the ones who take that on entirely, because the previous team isn’t around anymore. So we’ll take the project on all by ourselves and start doing the things that we suggest need to be done, and then that will get handed back over to another team. We have one client right now where it was an application that was built by an offshore team, and the relationship between the business and the team that built the software broke down. So we took over development, and we’ve been working with them for about a year, and now we’ve gotten the application stable enough that it actually makes more sense for them to start hiring an internal development team to work on it. So we’ve addressed the issues that they had coming out of the offshore team, in terms of stability and the features that they wanted.

So we’ve been able to turn that around and improve a lot of the metrics, and now they’re getting ready to hire their own staff, and so we’ll be transitioning back over to them.

Okay. And what languages are you primarily working with, or is it across the board?

Yeah. So we take the approach of any language, any platform, any framework. We have this idea that the challenges of legacy code are there regardless of the language and the frameworks you’re using.

Expertise in the language is always important, but we don’t necessarily need someone who’s only ever worked in that stack. So we’ve got people on our staff who have a lot of Ruby experience, a lot of Python experience, a lot of Java experience, a lot of .NET experience.

And our projects are across the board as well. Probably, I would say, the first maybe five or six clients we had were more Ruby on Rails than anything else.

And then since then, it’s been diversifying quite a bit. We’ve got more .NET clients.

We actually have a Delphi 7 application that we’re working on for Windows desktop.

We’re migrating that to the latest version of Delphi, and then we’ve got a Ruby on Rails project.

Interesting. I don’t usually think of Ruby as legacy. I think of legacy as being, like, working in Fortran or Lisp or something.

Yeah.

You can quickly get an application into a legacy state.

And what we consider to be legacy is that there are no communication artifacts documenting the ideas for why you’ve built what you’ve built.

Probably one of the best communication artifacts, we think, is a test.

So, like, an automated test that describes how what you’ve built should work. But there are other artifacts of thought, too, like decent commit messages in your source code repository. We’ve taken on projects that haven’t had source code repositories, and so we haven’t had any kind of history for why certain decisions were made or how the application has grown. One of the things we look at as part of our inspection is the history in the source control repository, to try to help us determine which files are changing more than others and which ones might need more attention than others.
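As a rough sketch of that kind of churn analysis, counting how often each file appears in the git history surfaces the hot spots. This assumes it runs inside a git repository; the top-ten cutoff is arbitrary.

```python
import subprocess
from collections import Counter

# An empty --format suppresses commit headers, leaving only file paths.
log = subprocess.run(
    ["git", "log", "--name-only", "--format="],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in log.splitlines() if line)
for path, count in churn.most_common(10):
    print(f"{count:5d}  {path}")  # the most-changed files likely need the most attention
```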

I like that you focused on the question of why, because I’ve found that why something is the way it is, that’s often the thing that you can’t get by just looking at the code and running it and figuring it out. You can’t figure out why.

Right. The code will always show you what. The code is the perfect documentation of what.

If you find yourself needing better documentation of what, then you need to clean up the code. Right?

It’s that simple. But the why, the answers to the whys, those are always harder.

You brought up testing a couple of times. I’m thinking that testing is a big part of these projects, and that you also want to get this done as fast as possible.

I mean, maybe not as fast as possible, but not wasting time. So do you have a pragmatic approach to testing? What is your view on levels of testing, and how do you write those?

Yeah. So we like to kind of use tests for safety, to just try to make sure that we’re not breaking anything that we’re touching, and that the things that we have built work the way we intend them to. That doesn’t mean that we need a 100% exhaustive test suite from the start.

A lot of times it may just mean that we want one smoke test to help give us a safety net for the area of the application we’re touching. We also don’t advocate to our clients that if they don’t have a test suite, they stop all feature development and build one from scratch. I’d consider that to be a poor investment.

Instead, if you have an app that doesn’t have tests, basically make a decision that you’re going to start adding them with everything you check in.

And so kind of start taking the attitude of anything that you change is going to get covered by tests.

And if there are parts of the application that never change, then those may never be covered, and that’s okay.
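A single smoke test in pytest might look something like this sketch. The URL, port, and page content are hypothetical placeholders for whatever area of the application is being touched.

```python
import requests

def test_invoices_page_smoke():
    # Not exhaustive; just a safety net for the area we're about to change.
    response = requests.get("http://localhost:8000/invoices", timeout=5)
    assert response.status_code == 200
    assert "Invoices" in response.text
```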

And then are these tests individual component tests or functional system tests?

What levels of test are you applying?

I have a blog article that I wrote called, like, the Pyramid of Testing. We kind of like to take the approach that you should have a lot more unit tests than integration tests, and more integration tests than acceptance tests. But you do want some of all three kinds. You don’t want to have a system that’s only integration tests; that’s going to be a relatively slow test suite, and it might be incredibly thorough.

But you’re testing a lot of functionality that you don’t have control over, and you don’t necessarily need to be testing.

So focus at the unit level for the majority of your testing, and make sure that you’re testing the interconnections between the components you’re building, and that you also have a few tests that are full stack. But you don’t need, like, an exhaustive full-stack test suite, for example.

Okay, interesting.

So as you go up the pyramid from the unit tests in terms of complexity, you should have fewer and fewer.
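To make the three layers concrete, here is a toy sketch in pytest terms. The cart code and the behaviors tested are invented for illustration; the point is the shape: many unit tests, fewer integration tests, fewest full-stack tests.

```python
# Toy application code, standing in for something real.
def calculate_total(prices):
    return sum(prices)

class Cart:
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        return calculate_total(self.prices)

# Unit test: many of these; one function, no collaborators, fast.
def test_calculate_total():
    assert calculate_total([1.50, 2.25]) == 3.75

# Integration test: fewer; components working together.
def test_cart_uses_calculator():
    cart = Cart()
    cart.add(1.50)
    cart.add(2.25)
    assert cart.total() == 3.75

# Acceptance test: fewest; the whole flow as a user would exercise it
# (in a real app this would drive the UI or HTTP layer instead).
def test_shopper_sees_correct_total():
    cart = Cart()
    for price in (1.50, 2.25, 0.25):
        cart.add(price)
    assert cart.total() == 4.00
```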

And I think there are some metrics that you can use for evaluating that, too. I would think that for code coverage from the unit tests, getting that to 100% is a valuable driver.

But 100% code coverage from the acceptance tests, that’s a waste of money.
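One hedged way to encode that unit-coverage driver, assuming pytest with the pytest-cov plugin: run the unit suite under coverage and fail the build below a floor. The package name myapp and the 100% floor here are illustrative.

```ini
# setup.cfg (sketch; adjust the package name and floor to taste)
[tool:pytest]
addopts = --cov=myapp --cov-fail-under=100
```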

Now, I’m just not going to let it be, because I disagree with this.

Okay.

But I’m not working on the projects you’re working on.

One of the things that I found was that a lot of the unit tests tend to be change detectors.

They’re things that you always have to change if you’re going to change the code. At any level I’m testing at, I want to make sure that it’s a behavior test and not a design test.

I think of writing software as similar to writing prose.

It’s going to be edited, and I’m going to get it wrong the first time.

But I do want to understand what the API is and what the expected behavior for a chunk of code is. The individual implementation of how I accomplish that, though, I might change my mind on several times, and then I have to change both the test and the code every time. There’s a risk of me just changing the test so that it passes, instead of changing it because I decided to change the behavior.
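A small sketch of that distinction, with invented code: the first test pins down behavior through the public API and survives a redesign; the second reaches into internals and breaks on any restructuring, even when behavior is unchanged.

```python
class Order:
    def __init__(self, prices, shipping):
        self.prices = prices        # internal detail; could become a dict tomorrow
        self.shipping = shipping

    def total(self):
        return sum(self.prices) + self.shipping

# Behavior test: asserts *what* the API promises.
def test_total_includes_shipping():
    assert Order(prices=[20.0], shipping=5.0).total() == 25.0

# Change detector: asserts *how* the object is built, so it fails on a
# redesign even if total() still returns 25.0.
def test_prices_stored_as_list():
    order = Order(prices=[20.0], shipping=5.0)
    assert order.prices == [20.0]
```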

And that’s my fear of a unit-test-heavy suite. The other fear I have with promoting the test pyramid is that, since the top of the pyramid is so small, I think people will mostly just say, I’ll just write unit tests and forget about all system-level tests.

Yes, that would be a mistake. Like, saying forget about all system-level tests would be a mistake. But at the same time, I’ve seen applications that only have acceptance tests, for example, and it’s like a four-hour test suite.

Now you have that influencing developer behavior, where developers aren’t writing tests and they’re not running the test suite because it’s so painful. So you’re really losing the value at the team level from having the test suite to begin with.

So it’s kind of like fighting that psychology issue as well. I can certainly see some of the points you’re making. Those are very valid.

But you haven’t found that an extensive unit test suite ever gets in the way of changing a design?

Oh, it certainly does.

I guess I see the purpose of the unit test, inasmuch as it’s testing design, as testing the correctness of the design as it’s intended.

It’s more the behavior tests that are making sure that you haven’t altered behavior if you change the design. But yeah, I would expect that unit tests would have to be changed drastically if you change the underlying code drastically. So if you go to an entirely new design, you’re probably going to have to chuck a whole bunch of unit tests. And I don’t try to stress about keeping unit tests around that aren’t really serving me anymore. I see them as just kind of like any other blocks of code.

If it’s not the design that I want anymore, then it’s going to get chucked. And the tests for the old design don’t make sense anymore.

And then for higher-level tests, do you guys use, like, Cucumber-like tests, the English descriptions?

Yeah, we do this from time to time. We like those the most for scenarios where the client is involved in helping edit and write them.

That’s where I see the biggest benefit: when you’re actually using that as a communication tool with someone who doesn’t have as much technical experience.

I think if the only people who are looking at the test suite are ones with a very high amount of technical experience, then the value of having the English test suite starts to fade away. I think you get more of that value when you’re using it as a tool to communicate with the people who want the system to work a certain way, to help them be confident that it’s working that way.
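For the curious, a minimal sketch of that style using the pytest-bdd plugin, assuming it is installed; the feature text and discount logic are invented. The English feature file is the artifact a non-technical client can read and edit.

```python
# features/discount.feature (the plain-English artifact):
#   Feature: Discount codes
#     Scenario: Applying a percentage discount
#       Given a cart totaling 100 dollars
#       When the code "SAVE15" is applied
#       Then the total is 85 dollars

from pytest_bdd import scenarios, given, when, then

scenarios("features/discount.feature")

@given("a cart totaling 100 dollars", target_fixture="cart")
def cart():
    return {"total": 100}

@when('the code "SAVE15" is applied')
def apply_code(cart):
    cart["total"] -= cart["total"] * 15 // 100  # stand-in for real discount logic

@then("the total is 85 dollars")
def check_total(cart):
    assert cart["total"] == 85
```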

Okay. So if you had, like, a programmer audience in the first place, you would probably just use some other test framework.

Yeah.

Okay. Now you also mentioned that sometimes processes need to change.

What do you mean by that?

Yeah. So like, one would be using source control.

That’s a process change, right?

Sure.

And there are projects out there that we’ve taken on that haven’t had source control, or they’ve been using source control but all of their commits are along the lines of foo-bar-style messages, and there’s not really a lot of information as to why things changed, or you have a lot of things changing at the same time and it’s unclear why. So we kind of encourage the culture of starting to document why in your commit messages, in addition to what. Having an issue tracking system where you’re keeping track of what your issues are, that’s a form of documentation, right? That’s a form of why your systems need to change: issues were identified, there was a discussion on them.

The issue was resolved somehow, whether it was decided not to fix it, or it was a glitch, or it was something that was wrong with the system itself and needed a source code change to address. That’s a form of documentation for keeping track of why your systems need to change. We’ve worked with teams that haven’t had those, that haven’t been using those, either.
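A hypothetical example of a commit message that records the why alongside the what; every detail here is invented for illustration.

```text
Fix rounding drift in invoice totals

Totals were rounded per line item and then summed, which accumulated
half-cent errors on large invoices. Sum first, then round once.

Why: customers compare our totals against their own accounting
exports, and the drift was being reported as a data-integrity bug.
```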

And then another thing that we really encourage is having an easy and repeatable deployment process.

We also find that for a lot of teams, a lot of the behavior that they adopt is based on how hard it is to deploy their app.

And there are teams that we’ve worked with for whom deployment is a very manual process that takes several hours, and backing that out if something goes wrong or a problem is detected takes just as long.

So there’s a lot of stress leading up to the deployment event and making sure that things are exactly right.

So one of the things we really encourage is to try to get the deployment process to be as fast and as repeatable as possible so that if you do have to back it out, you can back it out very quickly, and then you can start kind of gaining some confidence by deploying more often.
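One common shape for that, sketched here with hypothetical paths, is a symlink-based release layout where both deploying and backing out are a single atomic symlink swap. This is in the spirit of Capistrano-style deploys, not any specific tool.

```python
import subprocess
from datetime import datetime
from pathlib import Path

RELEASES = Path("/srv/app/releases")  # hypothetical layout
CURRENT = Path("/srv/app/current")    # symlink the web server serves from

def deploy(artifact: Path):
    """Unpack a new release, then atomically point 'current' at it."""
    release = RELEASES / datetime.now().strftime("%Y%m%d%H%M%S")
    release.mkdir(parents=True)
    subprocess.run(["tar", "-xf", str(artifact), "-C", str(release)], check=True)
    previous = CURRENT.resolve() if CURRENT.exists() else None
    _switch(release)
    return previous  # keep this around in case we need to roll back

def rollback(previous: Path):
    """Backing out is the same cheap symlink swap."""
    _switch(previous)

def _switch(target: Path):
    tmp = CURRENT.with_name("current.tmp")
    tmp.unlink(missing_ok=True)
    tmp.symlink_to(target)
    tmp.replace(CURRENT)  # the go-live moment is one atomic rename
```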

Are you frequently or always working with web applications?

Frequently. I would say most are web applications. We do have one Windows desktop app that we’re working with, and there we kind of see deployment as building the installer package.

That’s the act of deployment, for the purposes of the development team anyway.

That’s their deployable asset.

Okay.

And that’s the thing that someone could use to then install.

Do you recommend, like, a staging server then, or...?

Yeah.

So we like to see some kind of staging environment that’s used to communicate with people who are doing manual testing.

There’s certainly value in having humans do manual testing, to the extent that humans are really good at thinking of ways to break things.

So having a staging environment that creates that space to do some exploratory testing is always really valuable.

Now, who are they doing the manual testing for? You guys, or is it somebody within the company, for the code you’re working on?

Most often it’s someone who has experience with how it’s supposed to work after the change that’s been made.

That can be someone on our side, but we’re more comfortable if it’s someone who’s inside the company and has really intimate knowledge of what acceptance looks like, or who would know to try things that we haven’t thought to try. Because the things we’ve thought of, we’ve likely written automated tests for. It’s the things we haven’t thought of where it’s really valuable to have somebody go in and do a little manual testing.

Since you work with definitely more companies than I work with, since I’m only working at one company: what is the percentage of people that utilize a dedicated QA team versus people that just have their engineering staff do the testing?

I don’t know that the companies that we’re looking at are necessarily representative of the industry as a whole.

Most of the companies that we’ve seen don’t have a dedicated QA staff or dedicated QA resources. That falls to the team that’s responsible for the application itself, which is almost always a mixture of software developers and people with business expertise.

That collective team ends up being responsible for the correctness of the system.

Well, I’m seeing that more and more, too.

The reason why I bring it up is that in the documentation for a lot of Agile processes like Scrum and XP, which are still being used in places, there is an assumption that there’s a QA team that deals with it after the fact.

I guess, since I brought up Agile and Scrum and stuff, do you utilize any iterative processes, and do you recommend any?

Yeah, we advocate really strongly for iterative processes. In fact, we like to focus on looking at all the cycles that you have in whatever process you’re using and trying to shorten those, to get as much benefit as you can from when a decision is made to when you can observe an effect from it, so that you can then make a correction if needed. And then you continue to loop.

Even the waterfall processes have that built in, where there’s a decision made and then you’re able to see an outcome from that decision.

I feel like the waterfall processes have a much longer time from initial decision to concrete outcome.

If the concrete outcome is determined to not be what’s desired, then you do go through the process again.

So even that is a cycle of sorts.

One of the things with waterfall and other, what we think of as, traditional development is the notion that you’re coming up with, I guess, requirements and functional specifications before starting software. On the projects you’re working on, do you have, like, preset requirements before you get there, or do you have to discover these on your own?

Yeah, it’s almost all discovering as we go.

Okay.

And it’s discovering by learning what’s desired and then working up a potential response, whether it’s to a bug or a feature. I kind of classify them all together.

Regardless of whether it’s a bug or a feature, the system that’s in place is doing something, and it is not doing what’s needed, right? In the case of a feature, the system needs to do something new that it hasn’t done before. In the case of a bug, it needs to stop doing something that it is doing, like it’s doing something incorrectly.

Okay.

So those are both examples of it not working the way that the user needs it to. It’s learning what that is at a very concrete level for that one issue, and then fixing it. And that’s a different approach than you would take building an app from scratch, right from the ground up.

Because again, we focus on systems that already exist and we’re adding additions, we’re fixing bugs, we’re cleaning up technical debt, we’re helping those systems kind of pivot towards a future state.

Are you taking out features?

Sometimes, yeah.

Okay.

Yeah. I think that’s something that, as an industry, we don’t do enough of. I think deleting features is something we could do a lot better job at. I think also having system and tooling support to help us detect when features are not used anymore.

There’s a lot of teams that spend a lot of time maintaining features, and they have no idea if those features are even used.

And if they’re not being used, or they’re being severely underutilized, that could really help inform the business as a whole as to whether or not it’s worth investing the engineering resources into something that so few people are using.
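A minimal sketch of that kind of instrumentation: count feature invocations so the business can see what is actually exercised. The decorator, logger name, and feature names are all invented.

```python
import functools
import logging
from collections import Counter

usage = Counter()
log = logging.getLogger("feature_usage")

def track_feature(name):
    """Record every call to a feature entry point."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            usage[name] += 1
            log.info("feature_used name=%s", name)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@track_feature("csv_export")
def export_csv(rows):
    ...  # if this counter stays at zero for months, the feature is a deletion candidate
```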

Now, I cut you off when you were talking about iterative processes and stuff. What sort of processes do you utilize? Do you use Scrum, or what?

If the team that we’re working with has an established process, we work within that. And if the team we’re working with doesn’t have one, or there isn’t a team, then we kind of take over and establish our own. We tend to lean towards a very Kanban style of development.

I think that’s a really good fit for existing projects, because what you’re working on with existing projects, especially legacy projects that are kind of in this state where they need help, looks a lot more similar to a support cycle than it does an engineering cycle, if that makes any sense.

So you have individual, specific critiques that you’re reacting and responding to. Moving those critiques through a process and addressing them one at a time then becomes the goal, and kind of prioritizing them one at a time: which one of these are we going to work on first?

I guess I’m thinking about IT support staff. They can’t plan their week ahead of time or do sprints as easily if it’s mostly dealing with tickets from users.

That’s kind of how we look at it. It could be a ticket from the business owner. It could be a ticket from the product owner. Right. But it’s something that’s come up relatively recently, and you’re prioritizing that and kind of triaging that against other things that also need to be addressed, and then fixing those in priority order. So it looks almost like medical triage in that sense.

You could have something that comes in today that is deemed to be way more important than anything else that’s in progress.

And what might make the most business sense is to wrap up the things you have open as cleanly as possible so that you can start on this thing that’s really critical.

Now, with the Kanban style, you have items that you’re going to work on and prioritize, but then they go through, what, like a holding-bucket thing, and then in process, and then done?

Yeah. You almost always have those three basic steps, where it’s ready to be worked on, it’s being worked on, and it’s completed. And you can have different phases of that. Like, you can have different aspects of what needs to be done.

So you can have a really complex workflow that a task moves through, including, like, it’s ready to be manually tested, and then it’s being manually tested, and it was manually tested, as an example. Or it’s being investigated.

It’s like ready for investigation. It’s being investigated.

We verified that it’s an issue, so now we’re investigating how to fix it, and it’s kind of moving through a cycle that way. But you’re doing it at an issue-by-issue level. You’re not moving the whole software system through those gates one at a time.

So do different issues go through different stages then, I guess?

Yeah.

So different issues could. And again, it’s entirely up to the team to decide what kind of workflow they want to build for whatever they’re working on. And then one of the things you usually focus on is trying to minimize the number of things that are in progress at any one time.

Okay.

Because it’s kind of based on the idea of an assembly line. So if you have, like, an assembly line with, say, 1,000 steps, and if you take the assumption that once something goes into the assembly line it has no value until it’s done, or it has little value until it’s done, it can’t really be used until it finishes the assembly line.

Then if you have 1,000 things that are in that assembly line, in that state, and you learn that you need to stop, the entire contents of the assembly line is now waste.

Okay.

Right.

So it’s kind of like trying to minimize how much waste there is in the system in terms of work that’s half done.

Yeah. And so if you find yourself having too much work in progress, how do you deal with that? Do you double people up or shift people around?

Yeah. So the idea is that you adjust staffing to help you move tasks through your pipeline faster, or you don’t start anything new. And so, like, there are some teams that maintain really strict limits.

There are other teams that use whether or not they’re exceeding thresholds as a health status for the team.

If they find that they have too much work in progress, that might be an indication that they need to hire, they need to staff up.

Or it could mean that there are other issues with the system that might need to be looked at more systemically.
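A toy model of a Kanban column with a WIP limit, just to make the “limit work in progress” idea concrete. The column names and the limit of three are illustrative, not any real tool’s API.

```python
class Column:
    def __init__(self, name, wip_limit=None):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        # Exceeding the limit is a health signal: finish something first,
        # staff up, or look at the system more systemically.
        if self.wip_limit is not None and len(self.cards) >= self.wip_limit:
            raise RuntimeError(f"{self.name} is at its WIP limit of {self.wip_limit}")
        self.cards.append(card)

ready = Column("Ready")
doing = Column("In Progress", wip_limit=3)
done = Column("Done")
```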

And I don’t really want to get into, like, competing tools, but do you have a particular favorite tool you use for tracking this?

Almost all of our projects use GitHub for source control.

Okay.

And there’s a tool called waffle.io that sits on top of GitHub issues and lets you build Kanban boards to move tasks through.

We really like using that. One of the things we like about it is that, because it’s on top of GitHub issues, if you prefer the UI of looking at just a list of issues, then you can still certainly interact that way. And any update that’s made from that UI is also reflected in the other, kind of board-style UI. The GitHub team has also added GitHub Projects, which has similar functionality, where you can create cards that you move through a process.

Okay.

The kind of differentiator there between GitHub and waffle.io...

I know you said you didn’t want to get into tools.

No, that’s fine.

Everything on the board in waffle.io is an issue in GitHub, whereas with GitHub Projects, that one-to-one relationship isn’t guaranteed.

You can create things on the board that aren’t in the issues, and I can see that being desirable in either direction. Right. But I think for our purposes, our preference, really, is making sure there’s a one-to-one correspondence.

Okay.

I think you’ve got an interesting company there, and a very pragmatic approach to how you deal with all of the process around it. Anything else that we should talk about, with legacy code or with testing or with process?

So something that we really focus on that I think hasn’t been touched on is that whoever has done the work has done their best.

And that’s always our assumption going into any project.

Okay.

Whoever’s worked on it before we got there, they did the best they could.

They did the best they could with the constraints that they had. Some of those constraints might have been knowledge, some of them might have been experience, some of them might have been time, some of them might have been stress. Right.

But those were the constraints, and those constraints yielded that result. And so, even when we’re doing a code inspection, we’re never like, who are these idiots? That’s not something we would ever say.

I think that’s important to say, because we’re arrogant geeks, and we look at some code and go, what kind of moron would write this? This is horrible code. Right? But it doesn’t take long on the job to realize that most of the time when you say that, it’s about your own code that you wrote a month ago or two months ago.

Right. And sometimes it can be a constraint in terms of the system that you’re working with that yields it needing to be written a certain way, which is less than optimal in other characteristics. So highly performant code doesn’t usually read well, right?

Yeah.

If high performance is more important than readability, then that’s how your app is going to skew.

I guess one of the things that I’ve seen is premature optimization. Is that an issue that you guys deal with as well?

We haven’t been asked to help out with performance specifically yet, but the approach that I like to take with performance is very much a measure, experiment, and learn approach: let’s collect measurements on what’s there, and then if we think something different is going to be better, let’s make that change, kind of run an experiment, and see, is it actually better?

Nice.

And then if it’s not, take it out. And I think the same thing applies for whether or not making something more readable would make it worse. Right? If there’s a concern that making a chunk of code more maintainable or more readable or easier to understand might make it slower, that’s something I would want to do a similar experiment on.

And you let that kind of determine which direction to go.
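A minimal measure-and-compare sketch with timeit, in that experiment-and-see spirit. The two implementations are hypothetical stand-ins; the readable one wins unless the numbers say otherwise.

```python
import timeit

def current_impl(data):
    result = []
    for x in data:
        if x % 2 == 0:
            result.append(x * x)
    return result

def candidate_impl(data):
    return [x * x for x in data if x % 2 == 0]

data = list(range(10_000))
for fn in (current_impl, candidate_impl):
    seconds = timeit.timeit(lambda: fn(data), number=200)
    print(f"{fn.__name__}: {seconds:.3f}s")
```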

So readability is very important to you.

Yeah, readability is very important to me, because I think it should be across the industry. We spend way more time reading code than we do writing code.

And code is read by humans just as much, if not more so, than it’s read by machines. And it being easy to read and understand by humans, I think is very important.

Does that apply to test code as well?

Yeah, I do apply that to test code as well. And I’ve seen some test code get pretty ugly. For me, that’s a cleanup area, right? You can have a test that’s difficult to maintain, a test that’s difficult to understand, as a result.

Are you taking on new clients if there’s somebody that has some legacy code they need you to fix?

Yes, we are actively looking for new clients, and as demand grows, we’ve been growing our team.

And also, if there are people out there who would love to focus on that kind of work exclusively, hit us up and pass a resume along.

Your website is corgibytes.com, is that correct?

Yes, corgibytes.com.

I will throw that in the show notes. And thanks so much for talking to me today.

Yeah, thanks.

Thanks again to Patreon supporters and Nerd Lettering for sponsoring this episode.

Show notes are at testandcode.com. Follow me on Twitter @brianokken, or follow the show at @testpodcast or @testandcode. Thanks a lot.