Dev Environments with Bryan Finster

Adam & Bryan Finster discuss the development workflows required for delivering to production daily with fast feedback and high confidence. Don't miss this one.

[00:00:00] Hello and welcome. I'm your host, Adam Hawkins. In each episode I present a small batch of theory and practices behind building a high-velocity software organization. Topics include DevOps, lean, software architecture, continuous delivery, and conversations with industry leaders. Now let's begin today's episode.

[00:00:26] Hello again, everybody. Welcome back to Small Batches. Today I am speaking with Bryan Finster about development environments. It seems like there's always more that I can say about development environments and the general approach to the daily work of creating software. I think that the decisions around how code is written,

[00:00:52] the boundaries between teams, the reliance on integrated versus isolated environments, thinking in terms of owning a service, working on a distributed system: all of this is so important and relevant to the daily work of anybody building a system with more than one service, or more than one team or pod, or however you want to describe your organization design. This is a hobby horse that I keep coming back to because I think it's so important to get right, because everything that you do in software development flows through how you actually build and test code. So today I'm speaking with Bryan Finster about this topic. Bryan is a multitalented guy. He's been working as a software engineer since 1996, mostly in the supply chain space. When I recorded this interview, he was leading a DevOps dojo for a Fortune 50 enterprise, where he partnered with teams to learn the teamwork, techniques, and discipline required to deliver to production daily.

[00:02:16] I really like his focus on delivering daily, and it comes up again and again in the conversation: small batches, frequent releases, continuous integration, and really trying to do what we today call DevOps. So I thought he would be a good guy to talk to about development environments and get his perspective on scaling the practice of creating isolated development environments across a much larger organization, a much larger team.

[00:02:48] Scaling out the number of services and all of that. He brought up some technology, some tools I had never heard of before. All in all, it was a great conversation. If you want to learn more about Bryan, you can find links to his pages on smallbatches.fm, and also go find his talks on YouTube.

[00:03:09] He's spoken at the DevOps Enterprise Summit, and I know that he's also involved with the Sooner Safer Happier group on LinkedIn, around Jonathan Smart's book. All around an interesting guy, and a smart guy too. So with that, I give you my conversation with Bryan Finster.

[00:03:32] Bryan, welcome to Small Batches. How are you doing today?

[00:03:36] I'm pretty good. Thanks for having me.

[00:03:38] Well, it's my pleasure. I invited you on the show due to your work leading a DevOps dojo. It would be good to add some context for the listener on the types of exercises you do in these dojos and the types of teams that you work with.

[00:03:53] I mean, what do we do? Broadly, all dojos that are done correctly are an immersive learning environment where we work with teams in the context of their work to help them learn how to work better. You know, help them solve problems. It could be that they need to build pipelines.

[00:04:12] It could be they need to know how to test better. In our context, we focus on: how do we get you to continuous delivery? You know, why can't we go to production today? Let's find out what those problems are and help solve them together. So it's not a situation where we're

[00:04:29] directing teams or, you know, beating up bad teams. Teams come to us for help. We pair with those teams, we join those teams and help them out directly. Also, more broadly, we help restructure the organization when that's the problem that teams are having.

[00:04:44] Yeah, usually those two things are related, right? So what kind of technical problems do you see on a recurring basis, where teams come to you and say, hey, we don't know how to solve this, or this is blocking our ability to work quickly? What are some of the common things there?

[00:05:05] I was talking to Scott Muscillo at Nike a while back.

[00:05:08] And he said that we should have gone to psychology school, because we're running the biggest undocumented psychology practice. Really, the problem generally is that we as engineers have never been taught how to work effectively as teams. We see a few very common problems across teams. They don't know how to test very well, generally because they've been brought up in a situation where testing is for testers.

[00:05:24] You can't be trusted to test, you're a developer. They don't know how to break down work very well, because no one's really focused much on continuous integration, which also impacts testing, because you can't test well if work's not broken down well. They don't know how to work well as teams because they've been incentivized to, you know, hey, we've got 20 stories this sprint.

[00:05:49] There's five people on the team, so here's your four stories, go deliver those. And that's not how it's supposed to work. You don't get good teamwork, you don't get good quality, you don't get happy customers that way. But that's how everyone's taught, because it's good for HR. And so these are just really common problems.

[00:06:05] Testing is core to what we do. You have to live and breathe testing every single moment. It's the top topic I talk about all the time.

[00:06:13] Yeah. Well, I always like to talk to people who harp on testing, because I think testing is the P0 of all the work that we do. If you don't have high-quality automated tests, you won't be able to achieve anything else that's predicated on them, like continuous delivery or continuous deployment, let alone the higher-level business objectives. You'll be stuck in workflows that are centered around manual testing and gates and approvals, and everything will start to slow down.

[00:06:46] Okay, I'll take it to the next level: manual testing is an oxymoron, because a test is repeatable, right? If it fails, you should be able to repeat the exact same thing and find out why.

[00:06:59] Well, you can't do that manually. It's impossible. People can't exactly repeat themselves.

[00:07:03] Yeah, that's true.

[00:07:04] That's interesting. So what I wanted to talk to you about today was a challenge that I've seen many teams encounter. I think it also leads to a sort of pushback or frustration with microservices, which is: how do teams and individuals work with a microservice architecture in development?

[00:07:26] So I'm coming at this from the perspective of, let's say you're working in a small team. It could be a team that is the whole organization, or a single team in a larger organization, right? That team starts with a monolith, it grows, and at some point they decide to split off a new thing.

[00:07:45] They make a new service. And they were used to, say, running the monolith on their machine as a development environment. Maybe they run their tests there, maybe the application runs in a browser and they hit refresh and interact with it. They get used to this workflow of everything running on their machine.

[00:08:01] But now it depends on service B, and the question is, okay, do I run service B on my machine? What happens if service B is maintained outside of my team? Where's it coming from? Am I responsible for it? In my experience, it's really an untenable assumption or approach from the beginning.

[00:08:23] As you expand out to service B and service C and service D, even if it was theoretically possible, for compute requirements or whatever, to run them all on your machine, are you even responsible for that? Do you even know enough about all those different components to create this fully integrated environment to develop your system before you get to production? So I'm curious, have you encountered this problem in your dojos? When people come to you with this problem, what kind of solutions do you advise?

[00:08:57] Yeah, it's something I run into all the time: people feel like they have to have an entire system put together, complete, to test any part of the system.

[00:09:06] And, you know, I started challenging them and saying, very rarely do I run into systems where there are hard edges. Not to go into details, but if you look at some of the systems I deal with all the time, they'll say, well, we have to test the entire system, so we have to have A, B, and C put together. It's like, well, what about the upstream systems in an entirely different area of the company that you depend on? Why aren't you including those? And what about the downstream systems that are receiving that data so they can do something with it? You're not talking to them.

[00:09:38] So you're not really doing it. You're just arbitrarily saying we need this giant thing with a giant failure surface area to all come together so we can try to test it. So it's really slow and fragile, and then you don't trust the tests, so it doesn't matter anyway. Yeah, I run into it all the time.

[00:09:53] And what I find, I think you hit the nail on the head: they started building a monolith and said, well, now we need another thing. But they didn't do intentional design. They didn't design for what they were trying to do. I mean, architecture is a thing, and it's not a thing that happens in ivory towers. It happens every day on our desktops.

[00:10:14] Without intentional architecture, you wind up with an untestable mess. This is why you do test-driven development, because it drives better architecture.

[00:10:24] I'm right there with you. I like your point that it happens daily on the desktop. In order for any team to have success in their software delivery process, they have to adopt boundaries at different layers throughout the whole process, throughout the different systems.

[00:10:41] You have to create a boundary in your system so that you can test on one side and say, if the other side of the boundary does X, I do Y, and be able to say that with confidence. You have to design that into your system. If you're starting with a monolith and it just kind of expands, like a fungus, out into diffuse areas without any specific decision of, hey, we're going to cut this here, put an API here, those types of things,

[00:11:09] then it becomes almost impossible to create the boundary after the fact. Something you mentioned in the pre-show was the concept of virtual services. I think I know what you're talking about here, so let me give you how I think about this. Let's say that you have service A, which depends on service B.

[00:11:29] And you want to do some development against service A. You might create a fake or some sort of development version of service B that you can point service A to, so you don't actually need a full running instance of service B. But that's only possible if you have some sort of boundary or integration point where you can easily swap out what happens behind

[00:11:52] the boundary. Is that kind of what you're getting at with virtual services?

[00:11:55] Yeah, and actually I've put together a training internally where I talk about the rainbow, or I guess the gradient, of fakes that you're going to use to do this. But, you know, I'll get to that.

[00:12:10] I want to back up just a second. I think there's something we skipped over, which is: if we're going to take a monolith and break it up into services, we really need to map out the business capabilities, and then we need to focus on the interfaces, because that's where most failures happen, because of miscommunication.

[00:12:31] I think people go to conferences and hear about microservices and say, oh, it's a small cohesive thing that does one thing, we're going to take our monolith and break it up into that. You can barely operate your monolith, and you want to break it into more pieces? Knowing how to effectively test interfaces is core.

[00:12:51] The more interfaces you have, the more failure surface you have if you don't understand how to effectively test interfaces. The thing that I tell teams is, we should be doing contract-driven development. We should be testing the interface before we develop anything else. Let's make sure we have a solid contract between those two things.

[00:13:13] And so that comes back to virtual services. Starting out, I'll tell teams, look, all you need is a static mock of the schema of the other thing. Just test against that, right? And if you're providing a service, then you need to provide the contract. If it's REST, that's an OpenAPI document, and you've tested and validated that your service matches that OpenAPI contract.

[00:13:37] From there it gets really easy. If you provide an OpenAPI contract, there are tools out there where I can take that OpenAPI document and instantly spin up a virtual service. And what a virtual service is, is it's more functional than a static mock. It's a thing that runs. It's got an HTTP address I can hit, and it acts like the real service. It returns data to me when I send requests, but it's recorded and repeatable.

[00:14:07] You know, I know where the boundary layers lie. I don't have to worry about the test leaking out into a database or leaking out into another service. I'm able to actually limit the scope of my test. A test is a scientific experiment. We have to control our variables.

[00:14:24] We have to know what the boundaries are and control them so they're the same every single time. And so, yeah, virtual services are key to that. If you're trying to break up into microservices and you don't understand them, first learn domain-driven design and virtual services, because you're going to need them.
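To make the idea concrete, here is a minimal, hand-rolled sketch of a virtual service in TypeScript. The inventory endpoint, port, and payloads are invented for illustration; in practice a tool would generate this from a recording or an OpenAPI contract, but the shape is the same: a running HTTP process that returns canned, repeatable responses.

```typescript
// virtual-inventory.ts -- a minimal, hand-rolled virtual service (sketch).
// The endpoint and payloads are hypothetical; a real contract would come
// from the provider's OpenAPI document, not from this file.
import { createServer } from "node:http";

// Canned, repeatable responses keyed by "METHOD path".
const canned: Record<string, { status: number; body: unknown }> = {
  "GET /items/42": { status: 200, body: { id: 42, sku: "ABC-123", qty: 7 } },
  "GET /items/99": { status: 404, body: { error: "not found" } },
};

const server = createServer((req, res) => {
  const match = canned[`${req.method} ${req.url}`] ?? {
    status: 501,
    body: { error: "no stub for this request" },
  };
  res.writeHead(match.status, { "Content-Type": "application/json" });
  res.end(JSON.stringify(match.body));
});

// Point the service under test at http://localhost:4545 instead of the
// real dependency; every run sees exactly the same data.
server.listen(4545, () => console.log("virtual inventory service on :4545"));
```

Because the responses are fixed, the test's variables stay controlled, which is exactly the scientific-experiment property described above.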

[00:14:42] Yeah, one thing I really like about this virtual service approach, and I like your term, Bryan, virtual services.

[00:14:49] I think it's something a little bit different than just fakes or mocks, right? It conveys something more. One thing I like about virtual services is they're able to simulate more failure modes and different states than you would get in a running version of the real system, right?

[00:15:10] You can do things like, hey, what if there's no data? What if the service is down? What if it's latent? What if it's thrashing? There are all kinds of states that you can't easily represent if you're using the real service, but you can create this sort of virtual or emulated world.

[00:15:26] You know, we use Mountebank quite a bit. Mountebank is an open-source tool that lets you record and replay virtual services.

[00:15:33] I mean, you can point it at another service, have it act as a proxy, record the responses, and then have a set of responses you can go in and edit, so that if I send you this, you're going to send me this error back, right? So you can have that controlled failure mode and verify how you're responding to failure.
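As a sketch of what that controlled failure mode can look like: Mountebank imposters are defined as JSON and posted to its admin API, which listens on port 2525 by default. The imposter below stubs a happy path and a 503 for a hypothetical items endpoint; the paths and payloads are assumptions for illustration.

```typescript
// create-imposter.ts -- define a Mountebank imposter with a controlled failure mode.
// Assumes mb is already running locally (admin API on its default port 2525).
// The /items paths and payloads are hypothetical.
const imposter = {
  port: 4545,
  protocol: "http",
  stubs: [
    {
      // Happy path: a canned success response.
      predicates: [{ equals: { method: "GET", path: "/items/42" } }],
      responses: [{
        is: {
          statusCode: 200,
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ id: 42, qty: 7 }),
        },
      }],
    },
    {
      // Controlled failure: the dependency is "down" for this one request.
      predicates: [{ equals: { method: "GET", path: "/items/99" } }],
      responses: [{ is: { statusCode: 503, body: "service unavailable" } }],
    },
  ],
};

async function main(): Promise<void> {
  await fetch("http://localhost:2525/imposters", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(imposter),
  });
  console.log("virtual service with failure injection listening on :4545");
}

main().catch(console.error);
```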

[00:15:51] And I'm probably jumping all over the place, but there's something else that you hit on earlier, when you were talking about what if you don't even own that service. I test myself. That's what I test, right? I don't test everybody else's stuff. I'm going to use a virtual copy of theirs, so

[00:16:08] I don't have to worry about their breakage. And if their stuff's broken in production, I'll just open a ticket on them. Your stuff's broken, I can't fix it, so why would I spend time testing it?

[00:16:17] Yeah. Well, that also comes down to the ownership and boundaries of each individual thing. In a monolith, I think it's an easier argument to say, hey, in this big blob of code, maybe I can fix it.

[00:16:30] And you kind of get used to thinking like that. But if you depend on a service that's owned by another team or organization, that confidence goes away. You can still extrapolate out: if you're integrating with Stripe, you don't own Stripe's API.

[00:16:46] You don't try to test it. You just assume that it's behaving correctly, and you test your behavior with regard to whatever their contract is. But for some reason, and I don't know why this is, maybe you have some insight here, people inside engineering teams don't think like that with regard to things that are maintained by other members of their own organization.

[00:17:05] You know what I'm saying?

[00:17:06] I think it has something to do with solving the wrong problems. I'm trying to solve the problem of: how do I have each service independently deployable, in any sequence, to production at any time? How can I make it so that I get really rapid feedback from tests?

[00:17:24] So my tests are incredibly fast, efficient, and effective. All of those are engineering problems to solve. You know, I've seen teams where they pull down their own piece and make up the rest of the system with virtual services, so they have full control over it. Well, imagine if instead you had to pull down all the databases for all those services, because they're independent databases, because they're microservices, right?

[00:17:47] And then to run a test, you have to, of course, spin up those databases from scratch so you have a pristine set of data. Oh my God, just the time it takes. Can you imagine a CI cycle like that?

[00:17:59] Yeah. Unfortunately, I've been there.

[00:18:02] Well, I would never build something like that, because I want to know in seconds that I'm broken, not minutes.

[00:18:09] Well, sure. But that's one of the other levers to pull on in this thinking. Let's say you're a developer. You could think, hey, maybe I'll run this whole system on my machine, and it would work. But then, how fast do you want your feedback? Do you want it in

[00:18:26] minutes, seconds, hours, days? Choose an order of magnitude of speed and make the trade-offs that work for that particular objective.

[00:18:34] Well, then the test fails because of this service over here that we're not even messing with right now. If I didn't spin it up correctly on my machine, it was broken at some point.

[00:18:45] Now I have to context switch away from my work to go verify, to go play with that thing I'm not even messing with.

[00:18:51] Well, and are you even running the up-to-date version of that thing? So many times it has happened where nobody has pulled down the latest version of this thing, and they did all this work and they tested it.

[00:19:02] And it turns out, oh, I haven't updated it on my machine for two months, and now it's out of date. It doesn't even work in the first place. There are so many reasons why the premise is just totally screwed from the beginning. It doesn't lead you down the right path at all. So...

[00:19:19] This is why solving the problem with CD is the thing we work on. Continuous delivery is not the goal. Continuous delivery is the tool to make everything better. Because when you start going, okay, look, you're deploying every week now, let's do it daily. Oh, I can't do that. Why not? Right, let's go solve the problem. Okay, for CI, you need feedback in five minutes or less that everything is good, preferably much less, right?

[00:19:47] That's the entire build, everything, on the CI server. So your tests have got to run in seconds, because there's other stuff going on. Oh, we can't? Why? Let's engineer this problem, because these are critical quality steps in the flow that must happen.

[00:20:02] Yeah, I like that. You just start asking why enough times, and, oh yeah.

[00:20:08] There's a status quo, but you can break it. You don't have to just keep with it, but you have to do things differently. So I want to get your advice. Let's say a team has their monolith. Let's stipulate that they've done DDD and they're ready to start splitting stuff off, and they're thinking, okay, what do I need to have? How do I get my house in order for when I create the first greenfield microservice? If I'm thinking about virtual services, what's your advice for teams in that situation?

[00:20:44] So let's think about the whole problem.

[00:20:47] You said a monolith. It depends on the size of the monolith, whether we're talking about a monolith that's being maintained by multiple teams or by one team.

[00:20:55] Let's just say, for the sake of discussion, it's one team.

[00:20:58] Okay, let's use your problem. So what you do is you start with the business capabilities you've mapped out and your domain diagram, and you find something that's relatively low risk as a capability.

[00:21:12] Figure out what the interface boundaries are going to be for that, and establish that contract. Establish how to peel out that portion of the database, because it needs its own database. And then pull it out. I would recommend that you've got a way inside the code to flip between the old behavior that's still there and the new service,

[00:21:33] so you can go back and forth. Let's just assume the worst case: you've got an untested monolith. In that case, you should test the portion of the monolith you're going to pull out and verify the behavior, and then go write a test for the microservice to verify it does the same thing.

[00:21:55] Establish that contract. Once you're happy that it works and you're comfortable and you've stabilized it, delete the old code, and then think about what the next one is going to be. Don't do a big bang, we're-going-to-rewrite-the-entire-thing-in-microservices, because here's what's going to happen:

[00:22:11] you don't understand how to operate them. It's not the same. There's a lot of complexity. Go read up on the twelve-factor app, go read up on instrumentation for performance, instrumentation for how we log better. I mean, how do we do everything better? Because you've just added a crap ton of complexity to operations to remove complexity from development, because each service is smaller and easier to think about.

[00:22:41] But overall, it's a more complex operation.
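A minimal sketch of that flip-back-and-forth seam, with hypothetical names throughout: one function routes a capability either to the old in-process code or to the extracted service, so you can stabilize the new service and still retreat to the monolith's behavior.

```typescript
// pricing.ts -- a seam for strangler-style extraction (sketch).
// calculatePriceLocally stands in for the old monolith code path; the
// service URL, env var names, and response shape are all hypothetical.
import { calculatePriceLocally } from "./legacy/pricing";

const USE_PRICING_SERVICE = process.env.USE_PRICING_SERVICE === "true";

export async function calculatePrice(sku: string): Promise<number> {
  if (!USE_PRICING_SERVICE) {
    // Old behavior, still living in the monolith: the safe fallback.
    return calculatePriceLocally(sku);
  }
  // New behavior: call the extracted microservice through its contract.
  const res = await fetch(`${process.env.PRICING_URL}/prices/${sku}`);
  if (!res.ok) throw new Error(`pricing service returned ${res.status}`);
  const { amount } = (await res.json()) as { amount: number };
  return amount;
}
```

Once the contract test described earlier proves both paths behave the same, the flag and the legacy branch can be deleted.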

[00:22:46] Yeah. So let's continue this hypothetical exercise. Let's say we have a well-tested, well-built service, and now another team is going to come in and expand this system out to another service, and one service now depends on the other.

[00:23:04] They've already done all of this stuff to have their boundaries and do the testing, and they're thinking, okay, it's clear that my service depends on this thing that's maintained by another team. I don't want to couple myself to that service in development. I've heard about stuff like fakes, mocks, contract testing, virtual services, whatever.

[00:23:25] And they say, okay, yeah, I understand why we need this, but how do we do that? So what does that look like in practice, in your experience?

[00:23:32] In practice or what it should be?

[00:23:35] Well, maybe let's take the ideal and then see how it falls apart in practice.

[00:23:40] You know, Paul Hammant has a blog post about that. He's got several, actually, on the subject of technical compatibility tests. What would the ideal be? If you are depending on my service, then along with my service, I will also write a virtual service that matches the contract. I will test that my virtual service is correct, and I will put that virtual service someplace where you can pull it down and use it to verify, while you're building, that you match the contract. So the ideal is: I'm such a great provider to my customers, I care about you so deeply that I want us to talk really, really well, and you don't have to do all the heavy lifting of trying to fake me. I will fake myself, and then you can go for it.

[00:24:30] Yeah. We're going to go a little bit far back in time here, but one of the reasons I got really excited about Docker was that it allowed teams who are building services to just say, here's a Docker image, you can just run my thing. It can be real, it can be virtual, it can be whatever.

[00:24:45] It's easy to hand these off to all these consumers, irrespective of the technical stack of the individual application. We're in a much better position now to achieve that ideal than we were before.

[00:24:56] Yeah, but it also incentivizes people to go to the anti-pattern we were talking about, because of course it's so easy for me to pull down your live service.

[00:25:05] And now I'm going to spin up a database, now I'm going to test against the live thing. You do need to do live integration at some point in the pipeline, but you don't want to be doing it on your desktop unless you've got really stable services that are completely deterministic. I mean, that's fine, but when you're dealing with statefulness, you don't want to be dealing with that while you're coding. It's just too much complexity.

[00:25:32] Yeah. So we've discussed the ideal here. What are some of the technical tools that people can use to create these contracts or virtual services?

[00:25:42] WireMock is a good one. WireMock is Java only, though, and for what we do, we deal with so many teams that we really look for language-agnostic solutions. Mountebank is a Node service, but it is just a service. You can use it with any language, and it's got hooks for lots of languages. We use that. I recently came across Prism, which is really cool. That's one where I can just take Prism, point it at an OpenAPI document, and it starts to serve.

[00:26:11] And that's wicked cool. Another one that I really like is Dredd. With Dredd, I can point it at an OpenAPI doc that's mine and verify my service against that API doc with no asserts. Well, except for the negatives. I have to assert the failure modes, but for the 200 codes, I don't even have to write an assert.

[00:26:34] So it'll tell me if I'm broken or not. Anything I can do to stop me from having to type, because I'm super lazy, and still get the confidence I need to ship at 5:00 PM on Friday while I'm on the support on-call rotation and taking my wife out to a movie. That's what I want. I want tools like that.
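For a sense of how little typing that is: Prism can stand up a mock with `prism mock openapi.yml`, and Dredd can check a running implementation with `dredd openapi.yml http://localhost:3000`. A consumer-side test against such a mock might look like this sketch; the endpoint, port, and fields are assumptions, not part of any real contract.

```typescript
// items-consumer.test.ts -- exercise a spec-generated mock (sketch).
// Assumes a mock server (e.g. Prism) is serving the provider's OpenAPI
// document on :4010; the /items/42 path and fields are hypothetical.
const MOCK_URL = process.env.MOCK_URL ?? "http://localhost:4010";

test("consumer can parse the provider's documented response", async () => {
  const res = await fetch(`${MOCK_URL}/items/42`);
  expect(res.status).toBe(200);

  // The mock returns an example conforming to the contract, so this
  // verifies our client-side parsing against the documented schema.
  const body = (await res.json()) as { id: number; qty: number };
  expect(typeof body.id).toBe("number");
  expect(typeof body.qty).toBe("number");
});
```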

[00:26:54] Do any of those tools work with GraphQL?

[00:26:56] I don't do a lot of GraphQL, so I don't know.

[00:27:01] Yeah. GraphQL is kind of a new thing for me. I've not really used it, but it seems to be starting to eat more of the world.

[00:27:09] I'm trying to remember, because another area was using GraphQL and we were looking at some of those things, but I don't remember where we landed. Somebody does it, though. There's a tool out there for it, right? This is too common a problem. Along with OpenAPI, you've even got tooling around what's called AsyncAPI, so now you can do pub/sub with defined contracts and tooling around that as well.

[00:27:35] Oh, that's interesting. So, you also mentioned what should happen in the CI process.

[00:27:41] You'll create some virtual service, whatever, and then you'll run some tests against it. Are there tools that can do that part for you? Because I'm imagining that there are two ends of this. One is, hey, I have a spec document, start something for me.

[00:27:59] And the other end is, you have something running, validate it against the spec.

[00:28:04] Yeah, I mean, we do it in our build. I'm able to just kick off my build, and the tests will kick off the virtual service and tear it back down. I just do that in the test code. Like I said, it's got hooks for Mocha or Jest or whatever, hooks to start and stop the Mountebank service. It takes half a second.
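As a sketch of those hooks, assuming Jest: Mountebank imposters are created with a POST to its admin API and removed with a DELETE, so setup and teardown are one HTTP call each. The endpoint and fields are invented for illustration.

```typescript
// items-client.test.ts -- start and stop a virtual service around a test run.
// Assumes Jest, Node 18+ (global fetch), and Mountebank's admin API on :2525.
// The items endpoint and payload are hypothetical.
const MB = "http://localhost:2525";

beforeAll(async () => {
  await fetch(`${MB}/imposters`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      port: 4545,
      protocol: "http",
      stubs: [{
        predicates: [{ equals: { method: "GET", path: "/items/42" } }],
        responses: [{ is: { statusCode: 200, body: JSON.stringify({ id: 42, qty: 7 }) } }],
      }],
    }),
  });
});

afterAll(async () => {
  // Tear the imposter down so the next run starts from a known state.
  await fetch(`${MB}/imposters/4545`, { method: "DELETE" });
});

test("client reads quantity from the stubbed inventory contract", async () => {
  const res = await fetch("http://localhost:4545/items/42");
  const body = (await res.json()) as { id: number; qty: number };
  expect(body.qty).toBe(7);
});
```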

[00:28:32] Mm, I see. Okay. So now we're coming back to the exercise. You have service A and service B, and the producers of the services are giving out contracts and virtual services. And now everything is fine, almost, because if you're the consumer, you can use these things to develop.

[00:28:52] Almost. Almost.

[00:28:54] So where's the remaining percentage?

[00:28:57] How do you know the mock is correct? How do you know the virtual service is correct? You don't know, right? So you have to have a test for the test. Let's go back to the goals. We have continuous delivery. With continuous delivery, if you're doing it well, you can release

[00:29:13] the latest changes on demand, after you push code to master, with no human touches, none. Okay. You also want all the tests in the pipeline to be as deterministic as possible, so you want to remove as much state from the pipeline as you can. For most things, you don't need state in the pipeline.

[00:29:29] You can verify all of your behavior in a stateless way, as long as you have a way to verify the tests that are using the fakes. And so what you do is you'll have an interface contract. This is maddening for me: there's no domain language for testing, right? If you ask five different people what an integration test is, you're going to get five different answers about what an integration test

[00:29:54] is. Internally, we've created a glossary, so I'm going to use our definitions. Our definition of an integration test for interfaces would be: I am doing an integration test against the virtual service. I'm verifying the communication paths, but not behavior. That's the pattern we use for integration tests.

[00:30:13] A contract test would be: I need to go verify that that mock is correct by testing against the live service. In that case, you're dealing with a stateful test, which I don't want in my pipeline, but I can still run it on a schedule. Depending on the volatility of your contracts, you can run it daily or run it weekly, and test against those contracts, and you have to deal with the data setup and all of that stuff.

[00:30:39] But what you're really testing for is not the overall behavior of the system. You're testing: is this API still valid, right? Did anything break? Did my provider go and change their contract in a breaking way without versioning it? Those are the problems you go test for. You have to have a test for the test, but it does allow you to deliver daily, very, very fast. I mean, with CI we're talking about, I need to get onto master several times a day as a developer, and I can't spend all my time twiddling my thumbs waiting.

[00:31:12] So how does this work? Let's say you have some cron build in CI or whatever, at midnight or whatever that time is, and it's going to run some job that hits the service, pulls down, I dunno, something that serves up the contract, and then you load it into your test suite, or...

[00:31:30] Yeah, well, the way we actually run them is I'll use the same code, the same test code, to run both tests. Because of course, if you have two tests testing the same thing, one of them is wrong, right? So I'll just set the configuration to use either the virtual service URL or the live URL. And then, yeah, we just kick it off on the schedule and it runs, and you come back, and when it fails, you go and ask, why did it fail?

[00:31:57] It doesn't tell you that. You just have to go and discover it. Did it fail because the service was down, or did it fail because the contract is broken? You have to triage, right? And when you get to the failure reason, you fix that and run it again.
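The same-test-code-two-targets idea might look like this sketch: the suite reads its base URL from configuration, so the CI run points at the virtual service and the scheduled contract run points at the live provider. The environment variable name, URLs, and fields are assumptions for illustration.

```typescript
// contract.test.ts -- one test body, two targets (sketch).
// CI run:        ITEMS_BASE_URL=http://localhost:4545            (virtual service)
// Scheduled run: ITEMS_BASE_URL=https://items.internal.example   (live provider)
const BASE_URL = process.env.ITEMS_BASE_URL ?? "http://localhost:4545";

test("GET /items/{id} still matches the agreed contract shape", async () => {
  const res = await fetch(`${BASE_URL}/items/42`);
  expect(res.status).toBe(200);
  const body = (await res.json()) as Record<string, unknown>;
  // Assert the schema, not specific data, so the same assertions hold
  // against both the virtual service and the live system.
  expect(typeof body.id).toBe("number");
  expect(typeof body.qty).toBe("number");
});
```

Because both runs execute identical assertions, a scheduled failure means either the provider's contract drifted or the service was unreachable, which is exactly the triage described above.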

[00:32:13] So in this scenario, where you actually run the test against a live version of the service, you're naturally going to have to set some boundaries on what you can do in the test, because you don't want to be manipulating data or calling certain functionality of the service in question.

[00:32:29] Right. So where do you define the boundaries? What are you specifically testing in those tests against the live service?

[00:32:36] Well, it depends. But really, you're trying to do everything you can to get the information that you want about the behavior of that interface without broadening the scope so much that you start cascading and expanding the size of the test. That's an engineering problem, but what you're trying to do is focus on: is this interface correct? Do I understand the schema? Am I getting responses? But you don't ever want to test for specific data, like, I sent you a name and you responded with "Fred," so I'm going to test that I got "Fred" back, right?

[00:33:15] Yeah, that makes sense. To me this approach is intuitive, because first of all, I love testing. It's test-driven first by default for me. Anything else, I can't even wrap my head around what that's like at this point. By adopting this kind of workflow, it naturally fits TDD, because you can focus only on the thing that you're actually responsible for, the code that you're writing.

[00:33:44] You assume that the rest of the world is sane, and you write your tests against the specified behavior of whatever your integration points are. And then, as you said, make sure that you test your tests, that you have the correct definition of what the boundary is. Get that feedback loop as fast as possible, and then just put the pedal to the metal and go.

[00:34:06] But for other people, it just seems almost like heresy or an anti-pattern. I'm not sure how to convey the importance of this to people who don't see it.

[00:34:20] You know, full disclosure: I don't love testing. I love riding my motorcycle. I don't code for a hobby. I code to solve business problems.

[00:34:27] Right. But if I'm going to do it, I want to do it well. I'm also too lazy not to test, because I get tired of coming back and having to re-solve the same thing over and over again. I really like new problems. And every time I don't write tests concurrently with the code, I end up with terrible code that I have to go redo and undo.

[00:34:47] And it takes me like three times as long because I'm not testing. So I'm just too lazy not to test. Right. And also...

[00:34:54] You like new problems, you're too lazy not to test.

[00:34:58] I mean, I like new problems. I like code that's testable. I like code that's readable. Testing makes the code we write readable and testable. It's just a better way of working.

[00:35:10] Oh, and the other thing is that my goal is to be able to compete in the marketplace with my competitors, which means I need to be able to learn faster than they do, which means I need to go to production faster than they can, which means I have to have a really solid pipeline to get that done, which means I have to test my damn code.

[00:35:32] And also, I'm a professional software developer. I deliver working solutions. Any script kiddie can pump out code.

[00:35:41] Yeah, so true. You just mirrored my own internal thinking, right? The phrase that you used there, and I've actually said that exact same phrase in other conversations, is that you consider yourself a professional software developer.

[00:35:59] That implies a certain set of requirements, a certain way of thinking about the quality of your work and why you're doing it, and there's a whole chain of thought that leads up to: this is why I do all the things that I do, and I'm not going to deviate from that ideal, because then I'm not doing what I'm supposed to be doing.

[00:36:20] Yeah. A movie I think every software developer should watch is Jiro Dreams of Sushi. Master your craft, right? Have pride in your work. If you don't have pride in this work, just go get another job. But if you're going to do this, let's do it the best we possibly can.

[00:36:40] Yeah, well, that's why I think these ideas are so important: keeping teams independent, giving them autonomy and ownership over their things, such that they can deploy quickly to production with confidence.

[00:36:55] It's selfish in the sense that if you work in that environment, you as an individual developer get that happy feedback that the work you're doing is delivering value quickly. But it also empowers all of the other people and teams in your organization to do the same thing. It has a natural laddering up into the business results, which we all care about, but which seem to get lost along the way in some cases.

[00:37:22] Well, the other thing that I've experienced, and that the teams we've helped experience, is that if you're actually doing this properly, if you're focusing on CD, focusing on flow to shrink out the waste, and focusing on testing to improve your feedback, then you've got fewer support issues, you sleep better at night, you have higher morale.

[00:37:49] You get to find new ways to get things done. You're not always churning and churning and churning, and you have a happier, higher-morale team. And that takes a lot of teamwork. Real continuous integration takes some dense teamwork, which means that we work better together because we have to, and honestly, we like being together.

[00:38:04] And so all of these skills are required to live a better life and have a more humane working environment.

[00:38:13] Yeah, it's funny. When I started, a long time ago, on this journey into DevOps and all this stuff that we're talking about now, it was not necessarily with the intent of creating a better life or a more humane working environment.

[00:38:29] I had this frustration, there was this problem, and, oh, hey, if I do automated testing, that will solve this thing, and then this will get me this other thing. But now, looking back, the thing that's most important to me is just having a happy and productive work environment, and everything else can come from that. It's kind of like what I got from The Unicorn Project and the five ideals.

[00:38:49] One of them being psychological safety. This all feeds back into: you have to be in the right mindset to even approach this work in the first place, because of how much it demands from your mind and how much you have to focus on it, you know?

[00:39:04] Yeah. I mean, it's focus, flow, and joy, right?

[00:39:07] I mean, it's a more joyful way to work. And happy developers, I talk to VPs about this all the time. You may not care about happy developers. You might think we just need to be beaten over the head to go faster, until you wind up on the front of the New York Times or some other national newspaper

[00:39:26] with a data breach, because we were afraid to talk to you about it, or we were too busy heads-down pushing out features to keep our heads up and look out for problems. Happy development teams deliver more secure and better business solutions.

[00:39:42] So, bringing the conversation back a little bit, let's say you have a team, and we're telling them, hey, you should work like this: virtual services, contracts, boundaries, all this type of stuff.

[00:39:55] And there's this initial skepticism, like, hey, this sounds like it's actually going to be more complicated, this is going to slow me down, I don't really care about all this stuff. What do you advise in that scenario?

[00:40:11] I don't. I never come in with solutions. I come in with: what's the problem? The problem is, why can't we get to master? Why can't every developer on the team get to master, right? Let's sit down and solve that problem. And then we say, oh, well, you've got this interface testing thing you're doing over here. How can we fix that? And then we say, well, here are some options,

[00:40:36] and here's what the options do. They pick one, and then it's their solution, right? And so if a team doesn't want to live a better life, fine. I'll go work with another team that wants to live a better life. If they want to live a better life, then let's solve the problems together, and the solutions become obvious.

[00:40:55] Yeah, that's so true. In my experience, when you work in these kinds of facilitating roles, these collaboration and support roles, why beat a dead horse? If somebody doesn't want to solve the problem, then going to work with somebody else is just a better use of the time. Everybody will be happier.

[00:41:15] I'm way too valuable to argue with a developer who doesn't want a better life.

[00:41:23] I think that's something I'll have to take to heart. Just ask, hey, do you want to have a better life, yes or no? If you can't say yes right away, then I don't know about that.

[00:41:32] Sounds like it's too much effort, but...

[00:41:34] And if you think what you're doing is fine and you're not going to test your code, perhaps I can give you a flyer for a job at our competitor, because if you're going to drag somebody down, drag them down.

[00:41:47] Yeah, that's the last resort, right? Like, perhaps you'd be healthier and happier somewhere else. I mean, I have no power.

[00:41:54] Right. But still, it really does speak to culture, too, and expectations on the team, right? If you are in a team that's doing TDD and some people are saying, I don't think we should do that, or I'm not going to do that,

[00:42:06] it's like, well, maybe you should go somewhere else then, you know.

[00:42:09] Like, that team over there doesn't care about quality either. You should go over there. On this team, we care about quality.

[00:42:15] You can pick on one end of these things, like, hey, maybe you want to go faster, or improve this level of quality, but it always comes back to some aspect of engineering culture, which I think is the hardest thing to actually change.

[00:42:32] You know, I wrote a blog post a while back. My second favorite subject after testing is metrics, because we have to measure things correctly to know how we're doing and know what to improve. One of the things about metrics is that you're always supposed to measure them in groups, because you can game any one of them, or you can hurt things

[00:42:52] by only focusing on one. But I had this blog post about the only metric that matters, and it told the story of a trip I took out to Nellis Air Force Base, as a guest of a retired Lieutenant Colonel, to take photographs of the Thunderbirds. It wasn't an air show; I watched the Thunderbirds practice their air show.

[00:43:10] The ground crew did the air show thing they always do to launch the airplanes, right? And Thunderbird One broke. They took about 10 minutes to do triage on it, and then launched Thunderbird Seven, the standby plane. Just business as usual, just doing our job, right? High morale.

[00:43:30] And I summed all that up with: the metric that matters is pride, right? If you have a team that owns the problem, owns the solution, and owns the outcomes of how they solve the problem, good or bad, the pager goes off and you answer it, then they care about their end user and they have pride in what they do.

[00:43:54] I'm going to have to think about that, because I really like that. When we talk about this separation of monoliths and microservices, it comes down to ownership, and ownership is one of those things that plays into an individual keeping pride in the work that they do.

[00:44:10] Because it's hard to be proud of something where you're just a small piece spread across this whole thing. But if you can say, hey, this is a thing that I did, this is what it does, this is who it serves, you can really have high pride there.

[00:44:25] Yeah, we built this. It's important for this reason.

[00:44:28] We believe in the mission we're serving with our code.

[00:44:31] Yeah. So that's what we've got to do. We've got to connect that down to the work that we do on a day-to-day basis. A hundred percent. Well, Bryan, thank you so much for coming on the show. It was a pleasure to talk to you about much more than I expected we would talk about, but that's always fun.

[00:44:45] I love those kinds of conversations. Well, is there any advice or anything you'd like to leave listeners with before we go?

[00:44:55] You know, I'd encourage everybody to read The Unicorn Project. I think it really speaks to the pain we have as developers when we're living in an environment that should be better, and it gives some guidance on how to fix it.

[00:45:06] I've got blog posts on Medium where I have rants that I've hopefully turned into positive outcomes. Look for bdfinst on Medium and the Five Minute DevOps series. I've got some really good rants out there, especially the one on peer review. And anybody can reach me on LinkedIn.

[00:45:27] I love talking about these things, and I'm happy to have an argument with somebody if they want to say I'm wrong, because I've got data to show them, and I'm happy to present the data. You're not going to hurt my feelings.

[00:45:37] Yeah. Well, all of that will be linked at smallbatches.fm. And I've also done an episode of Small Batches on The Unicorn Project.

[00:45:46] And The Phoenix Project, it's kind of a combo episode. So, listeners, check that out if you're interested, and I also recommend that you read those two books. One last thing I want to ask: I know that you've spoken a lot at conferences. I think you've spoken at the DevOps Enterprise Summit before.

[00:46:02] Are there any specific talks you have online that you think the listeners should check out?

[00:46:06] Well, nothing coming up. I should be at All Day DevOps next year, and as soon as the CFP opens up for the DevOps Enterprise Summit, I'll be submitting.

[00:46:17] I was thinking more about ones that you've already done that listeners could watch.

[00:46:20] Oh, I'm sorry. Yeah, there are several on YouTube if you search. I've done talks in 2017, 2018, and 2019. At All Day DevOps I did a talk about why teams can't do CD. You should also check out my wife's talks. She's got some good ones out there as well, and some we've done together.

[00:46:40] Oh, yeah, power couple. Wow. I know what that's like.

[00:46:49] It is fun. It's fun because we play off each other well when we're on stage together.

[00:46:54] Oh, that's cool. I'll have to check that out. That'd be cool to watch. All right, Bryan, thank you so much for coming on the show, and hopefully we can talk again some time.

[00:47:02] Yeah, I really appreciate it. It was fun.

[00:47:04] All right, everybody, see you in the next episode. You've just finished another episode of the Small Batches podcast on building a high-performance software delivery organization. For more information, and to subscribe to this podcast, go to smallbatches.fm. I hope to have you back again for the next episode.

[00:47:21] So until then, happy shipping.

[00:47:27] Like the sound of Small Batches? This episode was produced by Podsworth Media. That's podsworth.com.
