Financial services have their own unique testing and development challenges. But they also have lots of the same challenges as many other software projects. Eric Bergemann from Paragon discusses testing, DevOps, CI, and lots of other great testing and development topics.


Transcript for episode 109 of the Test & Code Podcast

This transcript starts as an auto-generated transcript.
PRs welcome if you want to help fix any errors.


00:00:00 Financial services have their own unique testing and development challenges, but they also have lots of the same challenges as many other software development projects. Eric Bergemann from Paragon joins me in discussing things like testing, DevOps, continuous integration, and lots of other great testing and development topics.

00:00:32 Welcome to Test and Code, a podcast about software development, software testing, and Python. Today on Test and Code, I am thrilled to have Eric Bergemann. And you are from Paragon, right? Paragon Application Systems.

00:00:51 Yes.

00:00:52 And we’re going to talk about all sorts of stuff, but mostly focused around testing and financial services. Does that sound good?

00:00:58 Sure.

00:00:58 Cool. Well, before we get into it too much, tell me who you are.

00:01:01 Yeah. So I’m Eric Bergemann. I’m the director of product development at Paragon. We have four development teams right now. I just kind of help to make sure we’re going down the right path, and I lead the teams in improving how we do things every day, making sure we develop with the right practices and put out some awesome software for our customers. At Paragon specifically, we develop software for the financial industry. So we test pretty much every endpoint in a transaction flow. You could take the point of sale device, through the acquirer, to the network, to the issuer. All endpoints in a financial transaction need to be tested, and they need to be able to test that they can communicate with the other endpoints in the transaction. We can simulate any of those endpoints, so they can use us to be the other guy to talk to. And it’s important for your final testing, acceptance testing, to be able to talk to that other endpoint and see that your live app is working the way you intended it to. We also have testing in other areas, ATMs and things like that. But that’s what I do at Paragon. Okay.

00:02:03 A lot of people, I guess it depends on what space you’re in, but when you say endpoint, like a web endpoint is often going to be a REST API or something. I’m guessing financial transactions are not REST APIs. Or are they?

00:02:17 Well, some are, but most are not. Most use a standard called ISO 8583.

00:02:22 Okay.

00:02:23 It’s an ISO standard, but they keep making changes to those standards. And they all have their own flavor of them, too.

00:02:30 Our tools adhere to that standard and send according to whatever spec version they’ve got.

00:02:35 All of the networks have different flavors of that spec, but there are also RESTful APIs. With some of the online transactions, you’ve got this 3-D Secure standard, where you have that pop-up you’ll see sometimes where you have to log in to use your Visa card or Mastercard or Amex card during an online transaction. That’s actually going all the way to the issuer, and they’re the ones that are populating that form for you to log in and verify online. Those are using RESTful APIs, and there’s a lot of Web traffic there. But you also have, obviously, the standard ISO 8583 transactions as well. So there are some new technologies entering the space, but by and large, a lot of the hosts are set up to use that older standard.
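
To make that concrete for readers who haven’t seen ISO 8583: a message is an MTI (message type indicator), a bitmap declaring which data elements are present, then the packed fields. Here is a toy sketch in Python; the field table, lengths, and plain ASCII encoding are simplified illustrations, not any network’s actual spec or "flavor".

```python
# Toy ISO 8583 packer: MTI + primary bitmap + packed data elements.
# Real specs add secondary bitmaps, binary/EBCDIC encodings, and far
# more fields; this only shows the overall message shape.
FIELDS = {
    2:  ("llvar", None),  # Primary account number (PAN)
    3:  ("fixed", 6),     # Processing code
    4:  ("fixed", 12),    # Transaction amount
    11: ("fixed", 6),     # Systems trace audit number
}

def pack(mti, data):
    bitmap = 0
    body = ""
    for field in sorted(data):
        kind, length = FIELDS[field]
        value = str(data[field])
        bitmap |= 1 << (64 - field)  # bit 1 is the most significant bit
        if kind == "llvar":
            body += f"{len(value):02d}{value}"  # 2-digit length prefix
        else:
            body += value.zfill(length)         # zero-padded fixed field
    return mti + f"{bitmap:016X}" + body

# A 0200 financial request: PAN, processing code, amount, trace number.
print(pack("0200", {2: "4111111111111111", 3: "000000",
                    4: "000000001000", 11: "000001"}))
```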

00:03:21 Okay, well, so one of the things we were going to touch on is kind of the state of testing, especially in financial services. I don’t know how long you’ve been in this space, but I imagine things have changed quite a bit in the last ten years even. Or have they not?

00:03:41 In some ways you could say they have, and in some ways you could say they haven’t. Ten years ago, smart cards were still a big thing. Smart cards were coming along all over the rest of the world outside of the United States, really, which eventually caught up a little bit later.

00:03:59 But that was some of the new technology, the smart cards. The testing practices, those ISO 8583 messages, they’ve essentially just incorporated smart card technologies, newer encryption standards, new features here and there. But by and large, a lot of those networks are still sending with those same standards. Like I mentioned, there’s 3-D Secure and some of those others, but most of that core transaction flow is close to the same. And when it comes to the testing, there have been efforts to go to a more agile, test-driven development sort of methodology in their testing, but it’s hard to get those processes to change. A lot of these institutions still have those over-the-wall QA teams where they do their updates. A lot of these hosts come with applications installed that drive most of it, and so they’re really configuring those applications and putting in code here and there to get them to run the way they want them to. It’s hard to get TDD going in that development because you’re working with an existing application.

00:05:05 But then it’s also hard to change the processes, that over-the-wall step where they’re doing these manual tests while the development is happening.

00:05:14 Okay, so is Paragon one of those over-the-wall places, or are you doing the QA for other companies, or are you providing tools for an in-house team to be able to validate their systems?

00:05:27 We’re providing tools for the in-house teams to be able to test those systems, and then some.

00:05:35 I mentioned the development practices. Right. But a lot of institutions, even if they have their own internal testing practices, they still need to certify with a network. So if they’re trying to certify, there’s a whole testing process that happens there. If they need to go through one of the big card networks, they need to be certified to go through it and have transactions flow. For that certification process, they might use a tool like ours to do a pre-certification with. That’s very common. And so they will test with our app. Instead of going to a real system, they will log into our app and test with ours. Once they’ve gone through the full pre-cert flow, then they might transition to a certification step where they still would actually have people manually checking everything, looking at spreadsheets, comparing fields and whatnot, just to verify that everything lines up right. But you would still have our app involved not just in the development flow, but in a certification flow as well.

00:06:33 Okay.

00:06:37 Thank you, ConfigCat, for sponsoring this episode. ConfigCat is a feature flag service. It has a central dashboard where you can toggle your feature flags visually. You can hide or expose features on your application without redeploying. You can set targeting rules to allow you to control who has access to new features easily. Use flags in your code with configuration libraries for Python and nine other platforms. Get builds out faster, test in production, do easy rollbacks, release new features with less risk, and release more often. With ConfigCat’s simple API and clear documentation, you’ll have your initial proof of concept up and running in minutes, train new team members in minutes also, and you don’t have to pay extra for team size. With the simple UI, even product managers can use it effectively. Whether you are an individual or a team, you can try it out with their forever free plan, or get 35% off any paid plan with the special code TESTANDCODE, all one word. Release features faster with less risk with ConfigCat. Check them out today at configcat.com. You said there’s a lot of momentum behind the legacy systems and the way people do things. So would you like to see a change?

00:07:50 Absolutely.

00:07:52 We’ve been working for a while on getting a continuous integration system going and getting customers to adopt that.

00:08:00 We have an API open, and it’s been getting used. Sometimes it’s used in more of a poke-and-prod sort of manner, where they find tests they want to be able to tweak. They have a tool that integrates with the API, and then they poke and prod using the test scripts that they’ve already written, doing some minor tweaks here and there to see how the system responds. That’s not really where we want to go long term. What we ideally have is those tests as living, breathing tests that are running on every change, continuously, so that the author writes the test once and then they’re done with it. They don’t have to poke and prod the system. And so we’ve had some adopt that way. We really have kind of three different kinds of users that are going with our product: we’ve got the certification model, we’ve got the poke-and-prod model, and then we’ve got the automated API testing model. And we have to really be able to facilitate all three, depending upon the type of user and the direction that the company is going.
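
As a rough sketch of that automated API testing model: a CI job might run tests like this against a simulated endpoint on every change. The URL, payload shape, and response fields below are invented for illustration; the episode doesn’t describe Paragon’s actual API.

```python
import requests

BASE = "https://simulator.example.com/api"  # hypothetical simulator URL

def test_purchase_is_approved():
    # Send a purchase request to the simulated host endpoint.
    resp = requests.post(f"{BASE}/transactions", json={
        "mti": "0200",
        "pan": "4111111111111111",
        "amount": "000000001000",
    }, timeout=10)
    resp.raise_for_status()
    # "00" is the conventional ISO 8583 response code for "approved".
    assert resp.json()["responseCode"] == "00"
```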

00:08:56 How about in-house yourselves, since you guys develop this stuff for yourselves, right? Yeah.

00:09:01 So back in probably 2006, we started writing code on this. We had a desktop app that we would install, and it was written in COM, like Windows COM.

00:09:15 We started writing a web app that uses some of those components, and we got it all test driven and eventually pulled it out into its own web server and everything. We were doing TDD from the get-go with that. We started Agile probably a little bit after that; in 2007, we had gone with Agile and continuous integration. We started doing CruiseControl way back early, with ThoughtWorks and all that stuff, and we’ve progressed. Now we’ve got hundreds of thousands of tests that run every commit, decent coverage depending on who you talk to. But it’s worked for us. There’s always this sort of opposition by some to do test-driven development because they think it’s going to take longer, and it’s because they don’t see on the back end how, when you’ve designed an application to be testable, it actually speeds you up eventually. It’s kind of like a train: you get the train going, it keeps moving, it’s hard to stop it once it gets going. You’ve got all the test coverage. It makes it so that your application is flexible, malleable, and changeable, and you can do it confidently. There were a lot of minds to change on it back a long time ago, but now it’s pretty well adopted and pretty well understood through most of the software industry to take that test-driven approach to software.

00:10:33 Yeah. One of the things you mentioned is starting it out in 2006, and for a lot of people, that seems like ancient history, so long ago. But there are still a lot of applications in use right now that predate that, and teams that want to go towards a more Scrum-like, TDD style or something, incorporating tests from the beginning.

00:11:01 There’s not a lot, to be honest. I don’t think there’s a lot of really good information on how to adopt that midstream, like how to maintain and keep going with a legacy system and gradually incorporate that stuff.

00:11:15 I’ve had questions like that, even from some of our customers, like, how do we do this? Some had this perspective that we just need to write tests, we just need to get good coverage, just go whole hog, just cover the whole thing with tests. And that can work. The most important thing, priority number one for me, is that all new development is covered. If you’ve got that practice of making sure all new development is covered, you’re going to be good over the long term. And some want to be adventurous and cover more code.

00:11:45 I would always tell people, well, that’s fine. Probably the better approach would be to expand out: if you’re changing one region of code, try to expand out to the regions using it and just build from there. But if you just work only on coverage, you’re not delivering customer value. Right.

00:12:04 You’re paying back debt, you could say, right. You’re paying back your own debt on the testability of your application, but you’re not delivering customer value. So the ideal way to do it is to deliver customer value while making sure every change you make is proven sound and tested and covered.

00:12:23 Well, yeah. The big payback I see in automated tests for development is to be able to use it during development.

00:12:30 Sure.

00:12:31 And so adding tests to legacy parts of the system, there are different reasons to do that at different levels, but you’re not going to get the same payback. It’s mostly a cost and expense.

00:12:43 I like the idea of targeting new development for coverage. There are also different approaches, too. Like, for instance, with development, one of the gauges for a good test is whether or not you can change the implementation, shooting for agility. So I want to be able to change my mind on implementation sometimes. And if my tests don’t allow changes, then they’re mostly just scaffolding trying to make sure that I don’t change anything.

00:13:11 Sure.

00:13:11 And those are hard to measure. If you’re actually putting tests around code that hasn’t changed in years, it’ll be painful when you go try to change that code and like, 97 tests fail or something like that.

00:13:23 Yeah. You can get into some smells. Even with testing, you figure out the designs of tests that really limit you versus enable you.
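
A small Python sketch of that distinction, with invented names: the first test is pinned to how the code works and breaks on harmless refactors; the second is pinned to what the code does and enables change.

```python
from unittest import mock

def _add(a, b):  # private helper, an implementation detail
    return a + b

def total_price(items):
    total = 0
    for item in items:
        total = _add(total, item["price"])
    return total

# Brittle: asserts *how* the code works. Renaming or inlining _add
# breaks this test even though the observable behavior is unchanged.
def test_total_price_calls_helper():
    with mock.patch(__name__ + "._add", wraps=_add) as spy:
        total_price([{"price": 3}])
        spy.assert_called_once()

# Robust: asserts *what* the code does; it survives refactoring.
def test_total_price_behavior():
    assert total_price([{"price": 3}, {"price": 4}]) == 7
```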

00:13:31 Yeah. Also, I’m chuckling. I mean, I’m not saying that’s a bad thing, that you have, like, a lot of tests. A lot of tests is a good thing. However, I’ve never heard anybody brag about the number of lines of code in their implementation.

00:13:43 Sure.

00:13:44 But they’ll sometimes mention how many tests they have. Sort of an amusing dichotomy.

00:13:50 A large number of our tests are actually on the data that we deliver. It’s interesting that we deliver test data. Right. So we have a testing platform where they can author their own tests, but in a lot of cases we deliver those tests to our customers. And so we’re delivering a lot of data. Right. So we have teams developing software, but we also have teams developing test cases for our software. So we have tests around those test cases. A lot of the tests are actually on those test cases, to make sure they perform the way that we want them to. So if someone changes some of the core logic somewhere, it could break the test cases, and we want to know when it does.
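
As a hedged sketch of what tests around delivered test-case data can look like, here’s a pytest example; the file layout and field names are invented for illustration, not Paragon’s actual format.

```python
import json
import pathlib
import pytest

CASE_DIR = pathlib.Path("test_cases")  # delivered test-case definitions

@pytest.mark.parametrize("case_file", sorted(CASE_DIR.glob("*.json")),
                         ids=lambda p: p.stem)
def test_case_definition_is_well_formed(case_file):
    case = json.loads(case_file.read_text())
    # Every delivered case must say what it sends and what it expects,
    # so a core-logic change that breaks a case is caught here.
    assert case["request"]["mti"], "missing message type indicator"
    assert case["expected"]["responseCode"], "missing expected response"
```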

00:14:32 Oh, yeah. Okay. Definitely.

00:14:34 With some of these questions, we’re talking about data and testing the data. It reminds me, some of the discussions around testing in data science probably have a lot of similarities to the financial area.

00:14:49 Sure. Except for with finances.

00:14:51 You definitely know when you got it wrong, or at least there’s a right and wrong.

00:14:54 Yeah.

00:14:55 Hopefully we’re not doing a lot of machine learning on ATM transactions.

00:15:00 Probably not.

00:15:01 There’s machine learning in the fraud space.

00:15:03 Oh, yeah.

00:15:04 So they have systems that they plug into their hosts that can learn, oh, this is probably a fraudulent transaction and they trigger it and go from there. And there’s machine learning more on the marketing side.

00:15:16 One of the things I’ve been learning more about: I’ve never really quite understood the term DevOps. And in the email that got sent over about things that you know about, it included some discussion of DevOps. And I’m just really kind of learning about it more. How does DevOps play into what you’re doing? Is it more for your product, or is it something that you help customers with?

00:15:42 It’s kind of both. Right.

00:15:46 Continuous integration, continuous testing, that’s part of DevOps. And it’s part of the agile testing and development process, too. And so you can say it’s all part of the same; it’s for both. But a simple way to put DevOps is development for the operations of the company. Right. And usually it’s for a software company. There’s a really good book, if you want to read it, called The Phoenix Project. That’s the one I’m thinking of. It’s a pretty good book on seeing your company as this factory. Right. You’ve got all these steps that happen in the sales flow and the whole process of the company and how it operates, and optimizing that. Right. Doing whatever you can to optimize those steps to make it run smoothly and better. And for us at Paragon, we’ve been doing, well, I should say a lot of companies, honestly, software companies, have been doing things to optimize even before there was this term DevOps. Whether it’s optimizing the sales pipeline, the way that you get the customer information and feed that into, let’s say, your software-as-a-service system, and they log into the site and they create an account and they want a new site and you spin up some container for them.

00:17:00 Right.

00:17:00 And they have their own workspace. That is all development-driven operations, really. The sales, everything, is all development-driven. And so you could say that was deployed through technological means. And you take that and compare it to some of the old-fashioned companies that might have done a lot of that stuff by hand, taking more of a hands-on approach, with people being involved, to optimize their processes. So DevOps is really taking all of those processes that your company does and trying to optimize them. That’s how many see it.

00:17:32 It’s interesting you bring that up. I just finished listening to The Phoenix Project, because I saw The Unicorn Project come out and I hadn’t read or listened to The Phoenix Project yet. So I just finished that. I mean, I was listening to it because I do the podcast, because I work in different spaces and I want to understand what other people are having to deal with. And I also never really got what DevOps was or what that word meant. But yeah, The Phoenix Project was amazing. And I realized that I think a lot of what I’ve been doing for the last 20 years is trying to incorporate a lot of that same information into software. I never really associated it with DevOps. My introduction to it was through Lean and Six Sigma and things like that. And these are other ways that people have taken manufacturing processes and the improvements there and tried to apply them to software. I don’t know, two thirds of the way through the book, I’m realizing, oh yeah, we have a whole bunch of stuff that we could do better if we think about a lot of these same optimizations. Not only optimizations, but also just parts of the process that are manual, where mistakes can happen, or where we have unnecessary queues of people waiting on things that we could just automate so people don’t have to wait. Things like that.

00:18:53 And sometimes it’s hard to just diagnose it. Right. Sometimes you need to put in operations and updates just to figure out where your bottlenecks are. Sometimes you don’t know, and you need ways to analyze it. You know you’re trying to make a decision to improve how you’re doing things, but sometimes you can’t measure it yet.

00:19:10 Yeah. One of the interesting bits, I think, is that, well, maybe DevOps was a little late to the game, but when Lean came through, it was mostly big corporations trying to push this sort of stuff. I remember early Six Sigma training talking about thinking about the dollar value.

00:19:35 It’s not worth doing an optimization if it’s not worth a million dollars, or you have to figure that out.

00:19:41 Well, that’s where Lean comes in. You’ve got to be able to measure it, right?

00:19:44 Yeah, there’s the measurement part. But to me, I was like, man, I could apply this at the small scale and even at the team scale, to things I’m doing. Actually, there are quite a few manual procedures in software, even in producing software and publishing and all sorts of stuff, and I’m starting to pay attention to how many manual procedures really exist.

00:20:13 There are still a lot of manual procedures, even just with publishing a piece of open source code, because you’ve got to update the version, you’ve got to test everything, you’ve got to publish it to certain places and make sure your docs are up to date. And if you’re doing it right, there’s just a whole bunch of stuff. A lot of it is automated, but you still manually kick off these automated procedures. And coordinating all that stuff, it’s fascinating just to sort of have that mindset of always thinking about it, always thinking about the processes. Because I don’t think anybody’s ever going to get done. We’re never going to get to the point where we’re like, oh, wait, it’s so smooth, we just have to think and type stuff, and then it happens.
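
For instance, a minimal sketch of chaining those manual publishing steps behind one command, assuming the common Python packaging tools (pytest, build, twine); any real project’s steps will differ.

```python
# release.py: chain the manual steps, test, build, check, publish.
import subprocess

def run(cmd: str) -> None:
    print("->", cmd)
    subprocess.run(cmd, shell=True, check=True)  # abort on first failure

if __name__ == "__main__":
    run("pytest")               # run the full test suite
    run("python -m build")      # build sdist and wheel into dist/
    run("twine check dist/*")   # verify the package metadata renders
    run("twine upload dist/*")  # publish to PyPI
```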

00:20:57 That’s interesting.

00:20:58 Yeah. I guess that would be machine learning at that point. Right. AI.

00:21:01 And what are the big challenges that you see around trying to encourage some of your customers to take on a more iterative model?

00:21:14 Is it a big change? Is the roadblock the separation of QA and development, or is it something bigger?

00:21:22 It’s hard. I mean, with some of the technologies that they use, it’s hard to get the test-driven development going. If they’re writing code in COBOL or TAL, they wouldn’t necessarily be using our app to build the unit tests or anything. But sometimes it’s hard to build that framework around what they’re working with, because they are working with existing systems that they’re still developing in. Or you get some of the hosts now developed in C++ or Java, and by and large there are frameworks to do TDD on their new development there. Then we have automated testing going with the live transactions. Yeah, it’s just hard. You have so many stakeholders, and there’s still going to be a manual step at the end of it. Right. I mean, even on our certification side, which has a lot of tools at play, right, like I mentioned, we automate the host endpoint for a lot of the certification and pre-certification tests, they still do the manual certification at the end of it. It’s just giving people that confidence that you can do it, that you can actually automate those things. I know they’re doing it in rocket science.

00:22:36 All of those tests that they do before a rocket launch, those are all automated. I guess they’re forced to, right? Because it’s physically impossible to go in there and manually test every single device on that rocket before it launches. You have to automate it. Well, you should really see it the same way: it’s impossible to fully test your host manually. You have to automate it. You have to.

00:22:59 I’m surprised there are any manual procedures left. That’s just like a spot check at best.

00:23:08 It doesn’t give me a lot of confidence in the financial system, actually, if I think that they’re all relying on manual procedures.

00:23:15 Yeah. If things go bad, they have a whole clearing and settlement step there, too, to resolve things, to handle financial disputes and correct any issues there.

00:23:26 We have software to test that, too. So that could go wrong, too. Do they manually test that?

00:23:33 Yeah. It reminds me of a Calvin and Hobbes joke. Did you ever read Calvin and Hobbes?

00:23:39 Some. Not much.

00:23:58 The kid asks his dad how they get the weight limit on bridges, and the dad says, well, somebody builds the bridge, and they drive heavier and heavier trucks over the bridge, and then when the bridge collapses, they weigh the truck and rebuild the bridge.

00:23:58 Unfortunately, sometimes that’s how we test software.

00:24:01 Well, essentially, we can kind of do that with software, right? We could break the app, at least with performance testing, right. You can break it and then not put that load on it, right?

00:24:10 Yeah, exactly. Do the load testing. That’s actually part of the software testing.

00:24:16 It’s hard enough to get everybody to get the functional bit down, to have good behavior coverage on all of the endpoints, making sure that you’re getting all the error cases tested. And then you also need to think about security.

00:24:31 And you’ve got security testing, you’ve got load testing and timing considerations, just to make sure. One thing I don’t see very often is people saying, okay, well, this part of the system still works, but it happens to have slowed down by 20%. We should probably catch that. Maybe that’s not okay. Things like that.

00:24:53 Interesting.
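
A minimal sketch of catching that kind of silent slowdown, assuming an invented function and a stored baseline; dedicated tools like pytest-benchmark do this more robustly.

```python
import json
import pathlib
import time

BASELINE_FILE = pathlib.Path("perf_baseline.json")
ALLOWED_REGRESSION = 1.20  # fail if more than 20% slower than baseline

def authorize_transaction():
    time.sleep(0.01)  # stand-in for the real work under test

def test_authorize_speed():
    start = time.perf_counter()
    for _ in range(50):
        authorize_transaction()
    elapsed = time.perf_counter() - start

    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())["elapsed"]
        assert elapsed <= baseline * ALLOWED_REGRESSION, (
            f"slowed down: {elapsed:.3f}s vs baseline {baseline:.3f}s")
    else:
        # First run records the baseline instead of asserting.
        BASELINE_FILE.write_text(json.dumps({"elapsed": elapsed}))
```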

00:24:53 Having people that have had enough experience to help make sure we’re not going down the wrong path is important, but it’s also very important to have people that have developed the right skill set in development. So you need to make sure you have all of it. Right. Some of the employees at Paragon that have probably the most industry knowledge didn’t come from an organization that was already in the financial industry. They just started here as developers and they learned it, and some came from other companies. It’s really on an individual basis. I try not to have a fixed mindset when it comes to people’s capabilities and what they can do. If they come in, they’re smart, they’re capable. I want them to see themselves as having no limits, and I really want them to be multifaceted in how they do things. And so I usually get offended when people put limitations on developers and say that they can’t test. Part of me just wants to be really defensive about that, just because some of the best testers I know are developers, and it really comes down to their quality as a developer. Right.

00:25:56 If you’re a good developer, I’m sorry. You’re a good tester, too.

00:26:01 If you aren’t a good tester as a developer, you’re not a good developer. And there are a lot of things to learn as a developer, the subject matter for your industry and all those things. You’ve got to train them up, and hopefully you’ve hired a capable employee who can learn all that.

00:26:17 Yeah, it’s amusing to me when I hear people saying that a software developer isn’t trained in testing. Well, I don’t know where somebody would get that training, unless it’s developing software with tests yourself, because it’s not like people go off to college to learn how to be a software tester. I don’t know if there are programs around for that; I haven’t seen any.

00:26:46 Some of it has to do with the black box perspective. Right. So if you build it yourself, then you’re not going to test it as well, because you’re not critical of your own things. Right. Whereas this person who didn’t build it is going to think outside the box that you’ve confined yourself to, and they’re going to test with things that you haven’t thought of. But I would say that a developer has access to a lot of things to test with. And so if I wanted to hack a system and break it, it certainly would help me to know the code. You know what I mean?

00:27:18 It’s a good example, right? A lot of companies hire a third party to help with security penetration testing, right?

00:27:26 Yeah.

00:27:26 And they will have you send them your code, because it helps them do penetration testing. They want to find those vulnerabilities. They want to look at your code because it helps them break it. That being said, it doesn’t mean that you have to be a developer to be a good tester.

00:27:42 There are a lot of facets to testing. Right.

00:27:44 I guess. But I don’t know how you develop an automated test without writing software.

00:27:48 Well, have you seen the behavior driven development sort of concepts?

00:27:52 Yeah, of course. But I still think of those people as writing software.

00:28:00 You can say that; they wouldn’t call them developers.

00:28:00 Yeah. They’re making software. They’re making the system test itself. Right.

00:28:04 Yeah.

00:28:05 Anyway, okay. So do you use Cucumber and stuff?

00:28:09 Yeah, we use something called SpecFlow that drives Gherkin-type syntax that tests our test cases. That’s a large majority of our tests, and then we have some that hit our service. There’s a back-end API that it will hit, and we even have some Gherkin expressions that hit the view and drive Selenium. So instead of coding out the Selenium or doing record stuff, we just type it out as Gherkin and Cucumber-type expressions and it tests the front-end view as well. Okay.
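
For listeners who want the Python analogue of that setup: SpecFlow is the C#/.NET flavor of Cucumber, and pytest-bdd is one Python equivalent. Here’s a hedged sketch with an invented feature and steps, not the actual tests discussed in the episode.

```python
# Hypothetical withdraw.feature (Gherkin):
#   Feature: ATM withdrawal
#     Scenario: Successful withdrawal
#       Given an account with a balance of 100
#       When I withdraw 40
#       Then the balance should be 60

from pytest_bdd import scenarios, given, when, then, parsers

scenarios("withdraw.feature")  # bind every scenario in the feature file

@given(parsers.parse("an account with a balance of {balance:d}"),
       target_fixture="account")
def account(balance):
    return {"balance": balance}

@when(parsers.parse("I withdraw {amount:d}"))
def withdraw(account, amount):
    account["balance"] -= amount

@then(parsers.parse("the balance should be {expected:d}"))
def check_balance(account, expected):
    assert account["balance"] == expected
```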

00:28:42 That’s neat. I originally was surprised that people were getting a lot of use out of that, but there are a lot of different use models, so that’s good. Anyway, neat. Well, this was a lot of fun. Thanks for coming on and taking the time to talk with me today.

00:28:57 No problem.

00:29:01 Thank you, Eric, for talking with us today. Also, thank you to our Patreon supporters for sponsoring the show. Join them by going to testandcode.com/support. And thank you, ConfigCat, for sponsoring this episode. The link to their cool feature flag service, as well as the discount code, is in our show notes at testandcode.com/109. That’s all for now. Now go out and test something.