Succession plans, unused features, and testing living systems
Hey. I'm Michael Dyrynda.
Jake:And I'm Jake Bennett, enjoying a wonderful Arnold Palmer lite half and half iced tea lemonade.
Michael:An actual Arnold Palmer. Arnold. And welcome to episode one seventy one of the North Meets South web podcast.
Jake:There it is. We're back. We're back.
Michael:Return. I have a I have a follow-up. I have follow-up.
Jake:Alright. Let's hear let's hear your follow-up to your most recent rant.
Michael:So last episode, I said this coach is no good. He needs to go. We need to dismiss him. What the club has actually done in the meantime is formally announced his succession plan. So this year will be his twelfth, thirteenth, and final year at the club, and then he will ride off into the sunset.
Jake:Say.
Michael:What's No. No. They've already signed the next coach. They have signed the next coach. It's already They've given him a three year contract.
Michael:So
Jake:I'll believe it when I see it. I just have a feeling somehow he's gonna find his way back into coaching again. Don't You know?
Michael:Considering how things have gone over the last twelve years, it would not surprise me if something changes his mind. Because there was lots of fluff that came out of the club, lots of blowing hot air, and, you know, he's the right man, and he's still passionate for it, and he's excited. And we're like the thing the thing that ires me most, and I've I've been enjoying the word ire the last couple of months.
Jake:I like that. That's a good one.
Michael:Ire. That raises my ire above all else is that after all of these, you know, failure to capitalize on the best list we've ever had and the best players we ever had and all of that, we will be deprived of the coach being sacked. He's just gonna be allowed to walk off into the sunset. So
Jake:You wanted there to be some shame associated
Michael:with that. Like shame. Yeah. He's just such a good guy, and everyone loves him and all of the no. Look.
Michael:Hopefully, we had two of our key defenders get injured, so they'll be a little bit undercooked if they make the start of the season anyway, which is in, like, four weeks from now. So Mhmm. Hopefully, we'll just lose badly in the early
Jake:stages. And then everybody can just be really mad at him, and he can get the firing he deserves.
Michael:That's right. Just usher him out. So, anyway, we'll see what happens.
Jake:Oh, man. Well, I wish the worst for you then, my friend. I hope it goes terribly, and I hope he gets just shamed into oblivion and that there's, you know, no honor for him leaving on a good season. Just
Michael:We'll certainly boo him every chance we get.
Jake:Make him suffer. I'm just kidding. But, well, you know. Yeah. You know, I wish I had a rant to follow with.
Jake:I think I could let me see. Can I cook something up? Has has there been anything bothering me recently? I think that one of the things that I could have a rant about would be when you build things for people and then they don't use them, and then they come back and they say, hey. Could you build this feature?
Jake:And you mean the one I built for you, like, six months ago? That one? Is that the one you're talking about? Mhmm. But sometimes even worse, and I'm I'm just I'm ranting about this.
Jake:Like, this is a thing. It is a thing. It's happened before, but it hasn't happened recently. But what has happened recently is when you build a feature and you're like, okay, we're gonna release it for feedback. Like, I feel like we're far enough along.
Jake:This has some value to it, but it's not, like, in its final form, but I wanna get some feedback on it. Right? And so you've released it to the world, and you say, okay, guys, go play with it. Then a month goes by, and you come back, and nobody's used it yet, but by golly, they have changes for you. There's literally not one record in the database showing they've used it. But they're like, you know, I think the reason why we didn't use it is because it would be really nice if it actually had this feature.
Jake:It's like, use the tool. Use the tool, and then tell me what else you want. Like, you haven't even tried it yet. Oh, man. That just makes your blood boil.
Jake:I don't know. Maybe I'm in a small you know, Michael, I'm trying to evaluate the landscape here. Are the vast majority of people listening to this (nobody's listening to this), are they, you know, people who work on a specific product for, like, a long time?
Jake:Is it like, do you have internal users? Do you have customers? Mhmm. Or, you know, or is it like agency sort of work? Because in the case that it's agency work, like who gives a crap?
Jake:Like if they haven't used it, like you're getting paid to do the thing. Yeah. It doesn't matter. I mean, in some sense, like I could say the same thing about myself. Like, I'm getting paid to do it.
Jake:Who cares? Why are you complaining about it? But it's like, I feel like for me, like, I'm serving our internal users for a lot of what we're doing. And so, it does it does like irk you when you, you know, you sort of go out there and and build this feature that maybe you didn't have time for in the first place. And then it's like, hey, we want more changes to that thing.
Jake:It's like, no, you don't. No, you don't. You just need to use it.
Michael:Use the thing. And then we'll figure it out from there. Yeah.
Jake:Yeah. Yeah. Yeah. Yeah. That's my only rant.
Jake:That's all I got. That's as that's as as, you know, what's the word?
Michael:It's as ruffled as your feathers get.
Jake:Yeah. Like in The Lego Movie 2, this is the thing. Well, she's brooding. That's what it is. Brooding?
Michael:She's
Jake:brooding. They're gonna yeah. She's like, you gotta get dark and moody and, like, brood. A brood session. A good brood session.
Jake:So that's as brooding as I get. That's that's about as far as it goes. So anyway, you had some stuff that you messaged about. You know, not often do we get on this and have, like, a bunch of stuff that we've got queued up to talk about. Usually, it's just kind of whatever bubbles to the surface, but you came prepared today.
Jake:So I'm gonna let you take the floor. Let's see what you got.
Michael:Cool. So I wanna know what you think about this if you've been in this situation before, which I think maybe you might have, given the kind of work that you do in your day job, and maybe if any listeners have been in a similar situation. Essentially, we have this quoting platform where we have a panel of, you know, dozens of lenders that we integrate with, that are on the panel, that we can send our brokers' customers to for loans to buy things: cars, property, tools, whatever. Not all of the lenders that are on our panel provide an API integration to quote for loans.
Michael:Not all of the lenders that provide APIs provide endpoints to make quotes. Sometimes they just provide us endpoints to send a loan application, and it goes into their system, and then we use their system to do whatever we need to do. So the tool the platform that we built essentially provides an abstraction over all of these different lenders, and we create business rules based on those lenders' requirements in order to basically provide a normalized view of, you know, what the monthly repayments would be, what the fees would be, what the broker fees would be, what the establishment fees would be, like, all of this stuff. And so for each lender, a lot of this is very manual. Once a week or once a month, they will send us an email.
Michael:Who that email goes to, I found out this morning, could vary in the business. It could go from a lender's BDM, business development manager, to one of our business development managers. Sometimes they send it to our head of HR for some reason. Sometimes it goes to our head of operations. We're trying to standardize that so everything kind of goes into one place, which we then create a ticket in ClickUp, which will then get processed.
Michael:But the format of all of these naturally is different. Like, sometimes it's just one person emailing. Here's our updated rates. Sometimes it's our updated rate is this, and here is a PDF with, like, all of the additional details. Sometimes it's like a mailing list that just goes gets blasted out and and passed around.
Michael:So it is then the job of someone in our team to go through that PDF or that email and figure out what's changed. We've got, like, this master spreadsheet, which has all the formulas in it, all of the values and things like that, which we then put into the platform and code up. And then we make sure that, given all of the same inputs, we get all the same outputs for all of the different scenarios. So there's, like, 20 or 30 scenarios. Now you would write an automated test for all of these scenarios. Great.
Michael:That's fine. It passes today. But then in a week, when we get the new rates
Jake:Real quick. When you say okay. So you're saying you've automated tests for all these scenarios. Meaning, like, give me three of them. What are the scenarios?
Michael:Well, say it's like a an individual person that is purchasing a car. Right?
Jake:Mhmm.
Michael:Or sometimes it's a joint application, so a husband and wife purchasing something. Or it's what we call, like, a sole trader someone who works for themselves, effectively as a contractor, and they need to buy a new computer. So they would put in a loan for that. Like, so there's all these different situations.
Jake:And so you'd say, like, given that these are the circumstances under which they're asking for a loan, they should get rates and things of this type from this particular broker.
Michael:Yeah. So from each of these lenders sorry, lenders A, B, C, and D, let's say. Mhmm. So A and C are like, yep.
Michael:We can provide you a quote. B gets knocked out. You know, you don't meet some eligibility criteria. And D is, like, we don't offer that kind of finance. So great. This is fine.
Michael:But the problem with this is that these rates change. Right? Week to week or month to month or whatever whatever the frequency is for different lenders, the rules might change for different lenders. So we're always in there. We're making changes to the code when the rules change.
Michael:But the problem with automated testing is that the inputs change. So in a general test, you're saying, given this is the world that I build, these are the values that come out the other side. So you can make assertions and expectations around that kind of stuff. But the problem with these tests is that the expectations will change, depending on, like, the business logic changing for a specific lender periodically, or the values will change. You know, how much they'll lend, or what the repayment fee will be, will change based on data that comes in.
Michael:Sometimes, some parts of this system are user managed by our internal staff, who go in and say, okay, we need to go and update the base rates. You know, it changed from 9% to 8%, or it went from eight and a half percent to 9.25%. These things happen in real time without our intervention, and, like, these are just rate changes. They're not changes in behavior.
Michael:So in order for us to be able to determine that the system is working, we need real time data to say, like, yes, the current state of the world is accurate based on these inputs. So I guess where this comes into play and, ultimately, my question is: how would you build an automated suite of tests for a system that requires real time values that come from outside the system and can be changed by anyone at any time? So we don't know that it's correct. And I think a lot of the base rate calculations are simple enough, but the logic to determine whether or not someone is eligible for a particular lender may change.
Michael:And so
Jake:Yeah. That, for me, seems like the most challenging part. And I think the idea too is that, like, you know, the way that our brains are stuck thinking about it is typically in this, like, unit test world, where we're used to running it sort of in isolation. And I think you're sort of outside of that a bit. You know, the situation that you're talking about is a bit outside of that specific scenario.
Jake:And I don't I mean, just thinking off the top of my head, I don't know how you account for changes in logic without changing your code. Unless you can account for that change in logic by having a checkbox that, you know, is the flag that changes it to be truthy or falsy or something like that. Mhmm. You know? So, like, if the person that's actually inputting the data is checking the box to say, yes.
Jake:It is, you know, we do allow this particular type of loan, or no, we don't allow this particular type of loan. Or there's some threshold that says if it's above this dollar amount for this particular line of loan, then we do not allow it. Or if it's below, you know what I mean? Something like that. So it's like, you can only be as specific in your tests as the UI that you're allowing your users to update can be.
Jake:So that makes it complicated because it's boy, I don't know. Yeah. I'm trying to think of how it's more like an end to end test than it is a unit test. You know?
Michael:And I think I think there's really two sorts of tests that we can write.
Jake:The yeah. The other thing I was gonna say too is, like, there's that idea of, like, fuzz testing too. Have you ever heard of that? I think, like, Spatie talks about that. Okay.
Jake:So it's essentially, like, you have, like, happy path tests that you create, and then you have, like, fuzz tests that are like, try to break crap. Like, just literally try and break it. Like, you should, you know, just take every path you could possibly take, down every avenue, down every road, and make sure I don't get, like, some exceptions.
Michael:I feel like exploratory testing. Yeah. QA usually does that.
Jake:I think they call it fuzz testing. Like, it's it's a thing. Like, you can automate fuzz testing, though. Mhmm.
Michael:You're on mutation testing is kind of similar,
Jake:but maybe different. That's the idea.
Michael:Yeah. To go and, like, change things and see what happens. So there's there's, like, two kinds of tests that we can write, and I think the easier of the two is to test those business rules. Like, when the business rules change, you're changing codes, you're probably changing tests. And and, theoretically, you're changing the test first to say, okay.
Michael:This previously said it was okay. And and, realistically, you're just checking for Boolean switches. Like, if you're expecting that given this set of known inputs that the the applicant would either be able to get that loan or not. That's easy enough to test. Yeah.
Michael:I think the trickier bit is, like, testing the actual calculations, and the calculated output. That given some known inputs, the output will be this. Because the variables change in a way that is separate to, like, the system itself. So if the interest rate yeah. If the repayment amount sorry, if the loan amount is, say, $40,000 and the interest rate today is 10%, then the repayment amount would be, let's say, $500 a month. Okay?
Michael:But if the interest rate changes to 9%, you know, the calculation is the same, but you can't really make an assertion against the values because that changes in a variable way.
Jake:Yeah. Right. Man, that is a really tricky one.
Michael:It's what you're actually testing. But we need to make sure that, like, when we apply all these business rules, the things that come out the other side are correct at the time that we make those changes. So, you know, maybe you kind of just ignore the fact that the variables in the system exist and just say, like, given what we know of the world today with these inputs and these variables, this is the expectation. And so long as those tests keep passing, the expectation and the reality of the system are correct.
Michael:In production, in the day to day usage of the system, when the variables change, of course, the values will drift from whatever is in the test, but you know that the calculations are correct.
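Michael's approach of pinning every variable input can be sketched in Python (illustrative only: the function name and figures are hypothetical, and a standard annuity formula stands in for whatever the platform actually computes):

```python
def monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly repayment for a principal borrowed at a fixed
    annual interest rate over `months` payments (standard annuity formula)."""
    r = annual_rate / 12  # monthly rate
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

# "Given what we know of the world today": the rate and term are pinned in
# the test, so the assertion verifies the calculation, not the live rates.
payment = monthly_repayment(40_000, 0.10, 60)
assert abs(payment - 849.88) < 0.05, payment
```

When the live base rate drifts from the pinned 10%, this test still passes: it proves the formula is right, while checking today's actual outputs against the master spreadsheet stays a manual review step.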
Jake:Yeah. I think you're correct in that, like, what you actually have to test is that the business rules are being followed. So you would have to test the threshold to say, like so the example I gave earlier, which is like, okay. Given that this is this amount, and this is for this particular type of loan, and this is the repayment, and this is the interest. If it's above this, or if it's below this, then we do or we do not.
Jake:That's the business rule. That's how we test it. And we're like, we're sort of saying, here's the edge. Yeah. This is the edge.
Jake:And like, I'm gonna test one on this side of the edge, and I'm gonna test one on the other side of the edge. And as long as those two pass, I can sort of infer that everything in between there should also pass. Mhmm. And then, I think you actually could do something like fuzz testing, where you don't necessarily even have to care about what the particular inputs are. Fuzz testing basically allows you to automatically test it with invalid or random inputs to find bugs or errors or vulnerabilities or things like that.
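Jake's test-each-side-of-the-edge idea might look like this, with a made-up eligibility rule for a hypothetical lender that caps loans at $100,000 (the names and the threshold are invented for illustration):

```python
MAX_LOAN = 100_000  # hypothetical cap for this lender

def eligible(loan_amount: float) -> bool:
    """Business rule under test: this lender knocks out loans over its cap."""
    return 0 < loan_amount <= MAX_LOAN

# One case on each side of the edge, plus the edge itself. If these pass,
# everything in between can reasonably be inferred to pass too.
assert eligible(100_000) is True    # on the edge: still eligible
assert eligible(100_001) is False   # just over: knocked out
assert eligible(1) is True          # bottom of the range
assert eligible(0) is False         # degenerate input: knocked out
```

When a lender emails through a new cap, the rule and these edge assertions change together, which is exactly the test-first flow Michael describes.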
Jake:So, you know, if you you know, that might be something interesting to look into because that could essentially account for any type of value that your people would possibly put in, you know. So you could say in any instance, any instance, doesn't matter. I should never get a negative repayment value. It should never happen. Mhmm.
Jake:You know what I mean? And then just, like, okay, go to town. Try it. See if there's anything you can do that would possibly make that happen. Or, you know, you could put, like, sort of those guardrails on it and say Yeah.
Jake:These tests here, these are like worst case scenarios. Like, it should always return a positive value. And then just start throwing random stuff at it. Mhmm. And let it go to town, and see if it can break it. If it can, then you know. Like, I don't have to wait for that magic instance when they put in something stupid and it breaks. I've tested that in advance.
Jake:I've tested every possible value, plus every possible value they might not think of, in advance, to see if it'll break. Yeah.
Michael:And by break, we mean like throws an exception because it's gone out of bounds or something. Yeah. Not not like it should always just return true or false. It should never throw an exception or, you know Yeah. Not not know what to do.
Michael:It should always you know? If the method is to return true or false, you should no matter what the inputs are, always return either true or false. It should never throw an exception. You know, unless the method is specifically guarding for that and throwing an exception in specific circumstances. So, yeah, I think I mean, that will be the approach that I think I will take when we start putting these tests in.
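A crude version of that fuzzing, using the stdlib `random` module rather than a dedicated fuzzing tool (a property-based library like Hypothesis would explore inputs more systematically); the `quote` function here is a stand-in, not the platform's real code:

```python
import random

def quote(loan_amount, annual_rate, months):
    """Stand-in quoting function. The property under test: it never raises,
    and it never returns a negative repayment; only a number or None."""
    if not isinstance(loan_amount, (int, float)) or isinstance(loan_amount, bool):
        return None  # guard: non-numeric junk is rejected, not exploded on
    if loan_amount <= 0 or annual_rate < 0 or months <= 0:
        return None  # guard: out-of-range inputs mean "no quote"
    r = annual_rate / 12
    if r == 0:
        return loan_amount / months
    return loan_amount * r / (1 - (1 + r) ** -months)

random.seed(42)
for _ in range(10_000):
    # Throw random junk at it: out-of-range numbers and wrong types included.
    amount = random.choice([random.uniform(-1e9, 1e9), "not a number", None, 0])
    rate = random.uniform(-1.0, 5.0)
    months = random.randint(-12, 600)
    result = quote(amount, rate, months)  # must never throw
    assert result is None or result >= 0, (amount, rate, months, result)
```

If a combination slips past the guards and raises, the assertion message captures the exact inputs, which becomes the regression test Michael describes writing after the fact.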
Michael:So this this all lives in, like, an external system that we kind of API out to. Like, we manage the system. It's just that this is the last remaining piece of a legacy system that never got pulled into, like, the new platform. And so the work that I'm doing at the moment is to to merge those two galaxies together and see what happens. So
Jake:Yeah. I think maybe the idea too is, like, and I keep saying fuzz testing, I promise it'll be the last time. But if you do that, what you might end up finding too is that you don't have strong enough typing on the inputs as well. So, like, it might be that that is the hero of the day.
Jake:It's like, nope, this must be a positive integer that is, you know, between this value and this value. Like, those are the only acceptable values, you know? Yeah. And it might be that the fuzzing actually tells you, well, you don't actually have really good protections on this. There's no typing that's checking for this type of thing.
Michael:Right.
Jake:You're allowing strings in here that are not numeric at all. Mhmm. It's gonna break at some point in the future if you don't fix that. And so maybe that's what it does is it basically forces you to have strict typing up front so that you can guarantee that your processes downstream are gonna adhere to your business rules because Mhmm. You are doing the type checks up front.
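The strict-typing-up-front idea can be sketched as a small request object that rejects bad values at the boundary, so no downstream business rule ever sees them (the field names and ranges here are invented, not the platform's real schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QuoteRequest:
    loan_amount: float
    annual_rate: float
    term_months: int

    def __post_init__(self):
        # Validate at the edge of the system, before any business rule runs.
        if isinstance(self.loan_amount, bool) or not isinstance(self.loan_amount, (int, float)):
            raise TypeError("loan_amount must be numeric")
        if not 0 < self.loan_amount <= 10_000_000:
            raise ValueError("loan_amount out of range")
        if not 0 <= self.annual_rate <= 1:
            raise ValueError("annual_rate must be a fraction between 0 and 1")
        if not isinstance(self.term_months, int) or self.term_months <= 0:
            raise ValueError("term_months must be a positive whole number")

accepted = QuoteRequest(40_000, 0.10, 60)   # well-formed request passes through
try:
    QuoteRequest("40k", 0.10, 60)           # non-numeric string: rejected up front
except TypeError:
    rejected = True
```

With guarantees like these at the door, the downstream calculations only need tests for values that can actually reach them.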
Michael:Yeah. And I think getting the tests in place on a per lender scenario for those business rules will be the way to start. And then we just assume that the rest behaves itself, you know, and put tests in when we start hitting those boundary cases in production where, you know, given these inputs, some unexpected state was reached, and then go write the test specifically to account for that with those inputs. Mhmm.
Michael:And, yeah, I think you're right. Just kind of avoiding testing the actual values is gonna be the only way to there's gonna have to be some manual step where we send it out to a review environment and get someone in the business, or even just one of our developers, to go through when they're making those changes, hit the API, and just be like, okay. Given I have all of these inputs, make sure that the outputs match what's in the manual spreadsheet. And then that's just gonna have to be the degree of testing we do at that time. Yeah. Because we have seeders and things like that.
Michael:But the problem with seeders is it's still a snapshot in time, where you've still got users of the system going in and updating. Like, we've got a UI to go and update these base rates and these things, these variables that get put into the system, so that we don't have to make code changes to support those things. So
Jake:Yeah. It is hard when, like, the test is specifically tied to the logic that you have written inside the code. So it's sort of like a validation test, you know what I mean? Where Yeah. You're not trying to test the framework, but at the same time, it's like, how do I check to make sure that these rules haven't changed?
Jake:Well, I have to sort of reflect in my test the code that I've written. So Mhmm. In my form request, I have these five validation rules that are running there. And so in my test, I say, check that these rules are still present. So it's a spell check, you know what I mean?
Jake:Yeah. But it's like, it's still there. If you end up having to change validation rules, you're gonna have to go change the test. I mean, that's sort of what you're saying with, like, you know, there are manual things per lender that must be checked, just based on, like, hey, they sent you an email saying, hey, by the way, we're no longer doing this type of loan. Mhmm.
Jake:Okay. Well, there's no automated way to do that unless you have a configurable value for every single lender, you know what I mean? You'd have to basically collapse all of that Mhmm. To say it's generic. It's genericized across the lender base. And the thing is, that's actually gonna end up being more work than it is to just write the business logic rule and test it.
Jake:You know? Yeah. And so you just gotta, like, deal with some pragmatism there too. Like, we're gonna solve the problem in the simplest way we can without, you know, turning our brains to mush making this stuff all the exact same across every single vendor. Yeah.
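Jake's spell-check idea, reflecting the configured rules back in a test so that any change forces a deliberate test update, might look like this with a hypothetical per-lender rule map (names and numbers are invented):

```python
# Hypothetical per-lender configuration, as it might live in the application.
LENDER_RULES = {
    "lender_a": {"min_amount": 5_000, "max_amount": 150_000, "sole_trader": True},
    "lender_b": {"min_amount": 10_000, "max_amount": 80_000, "sole_trader": False},
}

# The "spell check": a snapshot of the rules the business last signed off on.
# Editing LENDER_RULES without updating this snapshot fails the test, so a
# rule change (e.g. a lender dropping a loan type) is always a visible diff.
EXPECTED_RULES = {
    "lender_a": {"min_amount": 5_000, "max_amount": 150_000, "sole_trader": True},
    "lender_b": {"min_amount": 10_000, "max_amount": 80_000, "sole_trader": False},
}

assert LENDER_RULES == EXPECTED_RULES
```

It is deliberately dumb duplication: the cost Jake accepts as more pragmatic than genericizing every lender into one configurable schema.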
Michael:Alright. Well, thank you for being a sounding board. I think that'll be the the approach that at least we start with. Because, like, this is this is an older like I said, it's just kind of been sitting off to the side because it works, and we haven't we've had more important things to kind of focus on. But now it's like, okay.
Michael:That thing is kind of raising some flags in certain, you know, pen tests and audits and things like that. It's how do we deal with that. Okay. We pick up all of the code that we need to make that specific functionality work, adapt the system that's using it to to use this new stuff. And then at least once it's part of the main app, then it's gonna be tested.
Michael:And we can start saying, okay. Well, when you make changes here, you now need to write tests for it, because there's no more Wild West over there.
Jake:Yep. Yep. Yep. Audits. Are you guys doing SOC 2?
Michael:I think it's on the cards. I think we were
Jake:Are you guys doing something else?
Michael:Yeah. We do ISO 27001. Like, we're
Jake:Oh, boy. That's a thing even harder than SOC 2, I think.
Michael:SOC 2. No. No. ISO 27001 is, like, here's the list of stuff that we say we're doing, whereas SOC 2 is here's the list of stuff we say we're doing, and here is the proof that we're actually doing it.
Michael:Like, they can ask you at any time. The auditors can come in and say, like, okay, show me that you're doing this thing. And you have to be able to, like, show them that you're doing this thing. Whereas with 27001, you can kind of say, oh, yeah. That's like a known gap.
Michael:Like, we can just say, you know, this is a known gap, and we'll
Jake:It's an acceptable risk. Sure. Yeah. Okay. Interesting.
Jake:I think we've typically called that, like, SOC 2 type one and SOC 2 type two. So type one is like, you know, yeah, we're gonna put all those things in place, but we may or may not be doing them, you know. And the SOC 2 type two is after you've been audited on the things that you've put in place. And so Yeah. But boy, is it freaking expensive. It feels like Mhmm.
Jake:I feel like we need to change auditors at this point, because every year they're bumping it up. It's like, guys, it's literally getting easier and easier for you every year. We're going through the same exact checks, the same exact things. We've done it in half the time it took last year, you know? We're doing the same stuff.
Jake:And so but every year it's more expensive. So I'm like, yeah, maybe we just
Michael:change it up. Unfortunately, yeah, you either need to find another vendor to do the audits. But in order to keep your certification current and, I would assume, keep with contractual obligations, or keep certain third parties that you work with working with you, you've gotta keep your certifications up to date.
Jake:Yeah. You do. But I mean, like, you basically own your controls. You know, with SOC 2 you say, like, here are the controls that we have in place, and then the auditor is basically just reading through your controls and checking. Saying, like, hey, here's the thing that you have in place to control this area, and then, like, are you doing it?
Jake:Mhmm. Yeah, you know, GitHub has made it really nice. I mean, you know, you have the full history of everything, you know, and then you just have to put rules in place for all your branches to be sure that only certain people can manage, you know, merge stuff and then, you know, all the automated tests and everything have been super nice. Like Mhmm. You know, we're doing static analysis.
Jake:We're doing unit tests. We're doing feature tests. And I can prove it. Okay. Here's every pull request we've done for the last six months.
Jake:You can see that every single one of them has been approved by a developer other than the person who wrote the code, you know, and all of that's able to be managed easily through GitHub branches and protection rules and things like that. So it's good stuff. There's tools out there now for people, like, if you're in a startup situation. I know that it feels like some of these startups have just sort of punted on this. Like Fathom Analytics: nope, we don't do SOC 2. Sorry.
Jake:Tuple: nope, we don't do SOC 2. It's not something we're interested in. It's like, okay. And the funny thing is, it seems like they've been able to just skate by with it. They have these huge companies who are like, hey, we wanna use Tuple, and they're like, great.
Jake:And they say, do you guys have SOC 2? And they say, nope. And they say, okay. We're gonna use you anyway. It's like, great.
Jake:Like, it's just hilarious to me how many of them have been able to just say, like, yeah, we don't do anything like this.
Michael:You've gotta have this certification. Okay. Well, I guess we're not gonna be able to take your business. Yep.
Michael:Sorry. And Yep.
Jake:And so then
Michael:And like
Jake:What they do have, though, is, like, a security page. Like, Tuple and yeah, they have a security page where they say, like, here are the controls that we have in place to make sure, you know, that we are being careful with the stuff, and here's the entire process of how our architecture works and all that. And it's like, I think then people look at it and say, oh, that's actually easier to read than a SOC 2 report would be. Sure. Okay.
Jake:Let's do that. We're good. You know? Yep. Yeah.
Michael:Yep. Yeah. And it's good because, like, in the case of Fathom Analytics, their plans start at, what, $10 a month, $20 a month? Like, for $20 a month, I am not going to go through your arduous, you know, vendor onboarding process.
Jake:And you get what you get.
Michael:Yeah. The procurement stuff. Either you pay $20 a month and you get what you get or, like, go somewhere else. That's fine.
Michael:Right. So yes. Unlike Laravel Cloud who is doing all of that process. Yeah. And, you know, you can't really skirt that one, can you?
Michael:Right.
Jake:Right. So what I was gonna say, though, is there are companies, one called Sprinto. Mhmm. Which is actually pretty cool. It's basically an express process to do SOC 2.
Jake:Right. So they will help you get set up. So, like, that SOC 2 type one I was talking about. Mhmm. They will say, like, hook up your, you know who's hosting your infrastructure?
Jake:Yeah. AWS. Great. Okay. Here's what you need to do.
Jake:Give us auditing permission, and we're gonna go audit all the rules and everything. Oh, yep. Looks like you need to have CloudWatch turned on for this. You need to make sure this bucket's encrypted. You need to do this and that.
Jake:And you just go resolve the things. And then it says, okay, great. Do you have a policy for this? No. Do you wanna use ours?
Jake:Yes. Okay. Here's the policy. You know what I mean? Like, I mean, they make it easy. So, you know, who's your version control provider?
Jake:GitLab, GitHub, which one? GitHub. Mhmm. Okay, great. Give us permission.
Jake:Oh, looks like you don't have branch rules turned on for this. Go ahead and turn branch rules on. Looks like you don't have tests automated, you know, automated tests running for this. You need to fix that. You need to turn on 2FA for all the people who have access to this.
Jake:Here's this user who doesn't. Like, it just does all of that for you. And so Yeah. What used to be, you know, a $50,000 investment to get some company to come in and tell you what you need to do, and not even have them really do all that stuff. They would basically say to you, you need to run ScoutSuite on your AWS stuff.
Jake:Now, we're not gonna tell you how to do that. You just need to figure that out. No. Yeah. This company literally does it for you, and we're talking about maybe a $6,000 to $7,000 investment as opposed to $50,000. Yeah.
Jake:Now, that only gets you the type one. They help you get set up, but then you have somebody come in and do the auditing of it. And, you know, they still partner with people who will actually do it for a lot cheaper than the organizations who have been doing it the old school way forever. Because literally, you get a portal where they go in and say, yep, check, check, check, check. All the green boxes are good. You're done.
Jake:Like, because you can prove it. They've done all the testing for you. And so, it's really good. Sprinto. Check that out if you're gonna do that SOC 2 stuff.
Jake:And if your architecture sorry, not your architecture if your compute stuff is with one of the major providers, it just makes it really easy to check those boxes.
Michael:Sweet.
Jake:Yep. Indeed. We're at thirty minutes, my friend.
Michael:I think that's I think that'll do. I think that
Jake:will do.
Michael:Good one.
Jake:Absolutely. Alright, folks. This was episode one seventy two, I believe. Find show notes at northmeetssouth.audio/172. Hit us up on Bluesky or Twitter at jacobbennett, at michaeldyrynda, or at northsouthaudio.
Jake:And of course, if you like the show, please feel free to rate it up in your podcatcher of choice. Five stars would be absolutely incredible. Folks, we are both going to Laracon this year. So if you have not yet bought your ticket, you should definitely do so. Save us a seat at the restaurant you go to.
Jake:Shoot us a text. Hit us up, and we'll be sure to come join you. Love to see you there. Thank you.
Michael:Great. See you all in two weeks. Also, this is episode one seventy one.
Jake:One seventy one. Sorry, folks. Alright, everybody. Well, sounds good. See you later, everybody.
Jake:Bye bye. Bye.