The Secure Developer

Training Security Champions With Brendan Dibbell

Episode Summary

In episode 79 of The Secure Developer, Guy Podjarny is joined by Brendan Dibbell, Application Security Engineer Team Lead at Toast, a restaurant technology company based in Boston, Massachusetts. Brendan shares how Toast manages cloud security and what the interaction between the AppSec team and the engineering teams looks like, and discusses their security champion program, how it differs from the security training given to regular developers, and the benefits of having created their own curriculum. Hear how Brendan and his team measure the success of their programs, focusing on progress rather than on a fixed set of objectives, and learn which metrics have and have not worked along the way.

Episode Notes

In today’s episode, Guy Podjarny talks to Brendan Dibbell, the Application Security Engineer Team Lead at Toast, a restaurant technology company based in Boston, Massachusetts. Before moving into security, Brendan spent years as a software developer building mission-critical systems such as identity management, payment processing, and healthcare platforms, and has always been a vocal advocate for security. Brendan shares how Toast manages cloud security and what the interaction between the AppSec team and the engineering teams looks like, and discusses their security champion program, how it differs from the security training given to regular developers, and the benefits of having created their own curriculum. Tuning in, listeners will hear how Brendan and his team measure the success of their programs, focusing on progress rather than on a fixed set of objectives, and which metrics have and have not worked along the way. Later on, our guest explains why interrupting your workflow to solve every little risk that pops up is problematic and why it is far more important to stay focused on the bigger picture while not neglecting to address the smaller issues as you go.

Episode Transcription

[00:01:37] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we’re going to go down a little bit of a journey of trying to measure security, and maybe of handling security at a fast-growing startup. To talk about that, we have the team lead of the AppSec engineering team at Toast, Brendan Dibbell. Brendan, thanks for coming on to the show.

[00:01:56] Brendan Dibbell: It's good to be here.

[00:01:57] Guy Podjarny: Brendan, before we dive into those journeys, tell us a bit about what you do and how you got into security in the first place.

[00:02:06] Brendan Dibbell: Like you said, I’m the team lead for application security at Toast. For us, application security really means anything that lives in GitHub. That means basically all of our static analysis scans, it means all of our dependency management, and it means any results that we get pertaining to vulnerabilities in the actual application itself.

We interact with the infrastructure that runs the application only to the extent that it impacts the actual security of the deployed application. We're not there dealing with firewalls, routers, switches, etc. We're really focused on the quality of the code that our developers produce. In that sense, we're really a developer enablement team. We're really focused on making sure that the code that our developers ship is secure, and we try to do that mostly by supporting our engineers, rather than fixing all of the problems that they create, which is a topic I’m sure we'll get into later as we talk about scaling and effectiveness and all of that.

In terms of my personal background, which colors my philosophy on this stuff a little bit, I started out as a software engineer, and for many years was a software engineer building secure systems. I had this constant nagging feeling in the back of my head that we were never investing enough in security anywhere that I worked.

I was building identity management platforms. I was building healthcare platforms. I was building payment platforms, and none of them really felt like we had enough eyes on them in terms of security. It always felt like, as a developer, I was the lone voice for security at the table. But I only had so much actual power and ability to execute on that, because by the time a project was finished, I was like, “This is technically done. I can ship this, but it's not really a mature product as far as security goes.” As a software engineer, I didn't really have the power to keep working on that. They're like, “Oh, we have another project we need you on.”

I kept internally being the advocate for security for a really long time, until such a point where I actually joined a company where I made enough noise about that for long enough that they were like, “Hey, do you want to just do a security thing full-time?” I said yes. Ever since then, I have been focusing on security, making plenty of mistakes along the way and that's how I got where I am today.

[00:04:21] Guy Podjarny: Oh, that's awesome. I think demonstrating a passion for it and saying that you love it is a really good way to do it. I’d imagine the software development background is helpful as you work with engineers who hopefully share your disposition, although some of them might require a bit more encouragement.

[00:04:39] Brendan Dibbell: Yeah, it's definitely true.

[00:04:40] Guy Podjarny: Just to understand, I loved your definition of AppSec as anything that lives in GitHub. Is there a peer team, then, that deals with cloud security and containers and the like?

[00:04:53] Brendan Dibbell: Yeah. Within Toast right now, we don't have a dedicated cloud security team. That is definitely something we're actually looking to potentially build out as we scale the entirety of the security organization. Right now, the way we handle cloud security is as primarily a collaboration between our security operations and tech ops teams, which other people might refer to as systems engineering, or devops. They manage our AWS infrastructure.

Our security operations team will work to assess the environment and try to figure out what, if anything, needs additional work in the environment, and our tech ops team is there to act on it. The tech ops team really owns anything related to patching and firewalls, so they should be good to go from the start.

I would say that we interact with the cloud security side of things primarily when, let's say, a vulnerability comes in through a bug bounty program or a penetration test and there's unclear ownership of it, whether it belongs on the application side or the cloud side. Those tend to get filtered through us, just because, when we're focusing on how the application works, fundamentally we have the broadest oversight. If there's a problem with the application, you don't really know right away whether it is due to something in the networking configuration or to something in the application code itself. We tend to take in all of those vulnerabilities. In the end, the actual infrastructure itself is primarily owned by various other systems engineering teams.

[00:06:23] Guy Podjarny: Got it. Maybe moving a little bit from within the security team, or I guess, you've already touched on the relationship between them and maybe the operations, or the tech ops teams. How do you define the interaction between you and the engineering team? What is it that you do, versus what is it that you expect the engineering team to do and how do you engage there?

[00:06:45] Brendan Dibbell: Yeah. That's a really interesting question and one that has changed a lot. If I were to name the one mistake that I made early on in my application security career, it's that I spent way too much time trying to do way too many things. That meant that I was spending too much time getting into the weeds on specific vulnerabilities. People were not being responsive on engineering teams and I was just like, “Okay. I’m just going to try to find and fix all of the problems.”

Slowly since then, I’ve started to push away from that and really focus on the step in between vulnerability discovery and vulnerability remediation, which is really engaging with development teams saying, here is why this is important to fix. Please fix it. That stage of vulnerability management is a pretty big piece of my job.

Also, really focusing on developer empowerment. The way that we've shifted the way the team works is instead of focusing on doing all of the things, we really want to focus on how do we give our engineers the tools that they need to take ownership of security? When I look at the engineering organization and I see this at almost every organization, every security engineer I talk to has this problem, where you have a few security engineers up against a whole huge bucket of developers and there's no possible way for you to keep up with all the changes.

We've really focused on building a security program that is sufficiently documented, that provides all the tools and technologies that our developers need to be successful and then provide our engineers with guidance on what it means to be successful in the program. I’m sure that you hear from a lot of different people with a lot of different versions of “security champions program.”

The security champions program for us is a huge piece of this puzzle. We have been changing it over the years. In the most recent iteration of it, we actually have a much more formalized program than a lot of folks, where we have not just people who are advocates for security within their domain; it is actually a specific, defined role. This role has a set of responsibilities and a set of metrics that we track.

When we say a security champion for us, it's a person who is actually responsible for taking over some of the day-to-day security work. Instead of us looking at all the results out of the static analysis tools, can we sufficiently train and engage individual development teams to take ownership over those tools and only interact with us when a higher level of subject matter expertise is required. That is the biggest piece of what I do. It really is the developer training, developer empowerment and building out all of these security champions.

[00:09:37] Guy Podjarny: Yeah. Well, that sounds really interesting. These people, when you talk about this being a formal role, what percentage of their job is this? I imagine, this is a developer within the team, within the engineering team, who is now a security champion. What percentage of their time is allocated to this type of work, the security work?

[00:09:55] Brendan Dibbell: The current allocation is defined essentially in fixed time buckets, to some extent, right now. Again, this is all very much in flux and we are constantly evaluating whether this [inaudible 00:10:07] is correct. The “approved range” for it right now is between 10 and 20% of a developer's week, so that's four to eight hours. They'll spend their Fridays, or two hours each on Monday, Wednesday, and Friday, on just security work, and they work with their managers to define what those time buckets are and make them fit in with the teams. It varies depending on the person, but it's usually less than 20% of their time.

[00:10:41] Guy Podjarny: What's the rough ratio you're shooting for? Is it one per team? What's the security champion to non-security champion ratio that you're trying for?

[00:10:51] Brendan Dibbell: That is an excellent question and it's one I’ve really struggled with. The way we try to define the scope of ownership of a security champion is not by team. It's not by line of business, or anything like that. It's by the code that you own. Part of the reason that we do this is because there are some teams that are really large and may need more than one security champion, and there are larger chunks of code, larger projects, that have unclear ownership, things like legacy projects, and then a lot of projects that just don't seem to have any team responsible for them at all, things like dead code, but where someone might be an expert in that particular area of the code.

The way we typically define it is by code covered. We are looking for a really good way to track this. At the moment, it's really pretty loose and we just have a big document of here's all the code that everyone is responsible for. In terms of the actual amount of code, it's really difficult to say. I would say that on average, the security champion owns slightly more than an engineering team.

Most people do have the same scope of ownership, or similar scope of ownership to their engineering team. Some people will have the scope of ownership of two smaller engineering teams. In the end, I think the way we've actually described it is we are a microservices model, we are also a many-repo model. We are not doing the whole mono repo thing, not on that [inaudible] just yet. It's usually around three to four repositories of microservices that an individual champion might be responsible for. Plus, maybe a little bit of code, some package within a larger artifact.

[00:12:37] Guy Podjarny: Yeah. Okay, interesting. I mean, I think there's really no one way to do it, but I like the structured model towards it. It sounds there's specific training that these champions undergo as well once, I guess, anointed a security champion?

[00:12:54] Brendan Dibbell: Yeah. That's also something we're working on evolving. We have done some training initiatives in the past. Largely, when we teach normal developers about security, we're teaching them the common flaws: here's how you avoid writing them in code, and so on and so forth.

When we think about training security champions, they have to be able not only to identify where flaws exist, but also where flaws might be introduced.

One of the big things that we're working with all our security champions on is how do you threat model? We developed an internally designed framework for threat modeling that security champions are meant to engage with regularly. Security champions should be developing these threat models and documenting them whenever major changes happen. Those threat models really form the basis of their knowledge on how to think about security, as well as specialized training for things like, hey, how do you do a secure code review?

These people, whenever the security team is not required to have eyes on things (and security should be required very rarely, given that security teams do not have the time to review every code change), should be looking at code changes with an eye for security. We work with them to develop a checklist and some general guidance on how to do a secure code review.

Then we do the normal stuff. We're planning to do periodic exercises; you hear various terms for these things now, and "cyber range" is a hot term. We do want to teach our security champions in a slightly more offensive way than we teach our standard developers, because we want to really get them into the mindset of attackers, so that they have a really good understanding of how vulnerabilities exist and what their impact is.

[00:14:42] Guy Podjarny: I love the term cyber range. Maybe I’ve been hiding! I haven't heard that one yet. Like a shooting range, but cyber minded.

[00:14:51] Brendan Dibbell: Yeah. I think it's a really great idea. I think a lot of the products out there are not necessarily where I would like them to be. One of the biggest focuses I’ve had has been training security champions and all individuals at Toast. I’m sure other people do this as well, but one of the best decisions I think I’ve made was to throw out the off-the-shelf training curriculum that was just given to us by some e-learning vendor and build our own.

One thing that we did is I took all of the vulnerabilities that we had found over a six-month span in the platform. I picked a bunch of them and made videos demonstrating how they are exploited and what the impact is. Then I took those and said, all right, this is the vulnerability, this is the impact, and then showed where that exists in the code.

Now that I have found this vulnerability, here's how it looks in code, and here's how it was fixed. This was published six months ago, so I feel comfortable sharing this weird, crazy vulnerability with you all. Keep in mind that this is what the impact is. I think training should focus especially on impact and be very relevant to the specific environment you're in. It doesn't necessarily have to be within your product, but it should A) be impact-focused and B) be relevant to the environment.

[00:16:09] Guy Podjarny: Yeah. No, I love that. I think the team from Segment, Leif and Eric who were on this, were also talking about how they're using historical vulnerabilities for this and I think it’s a very effective technique. Let’s switch. Thanks for describing the structure of the team and then the security champions and the growth of it, which is indeed, I think for good reasons, a growing practice.

Then we get to everybody’s favorite topic, which is how do you know if you’re doing it well? When you look at this whole program, how do you measure its success?

[00:16:39] Brendan Dibbell: Yeah, it’s a really difficult conversation, obviously. I don’t have any easy answers for it, other than to say that the most important thing I have found in my time is not to measure security as an objective (“we are secure,” “we are not secure”), but to measure security in terms of progress. It depends on what you’re looking at. I’m going to take the security champions program just as an example, since we’re on the topic.

When I measure the success of a program, I’m not necessarily measuring the security. I’m measuring the actual individual program’s outcomes. What we essentially have is a set of metrics that are in place for security champions to say, all right, have you resolved all of the issues that were given to you within some time frame based on risk? Have you resolved all of the issues that are reported to you by SSDLC tools?

One of the biggest things that I see happen with SSDLC tooling is that people are not incentivized to get down to a baseline. How effectively you have established that baseline is one of the metrics that we have, and it’s measured basically by open vulnerability count, which is something that everyone looks at. But it’s really easy to misinterpret the open vulnerability count as a measure of the security of the product, instead of a measurement of how well you’re implementing the security program, because I may not fix all 3,000 vulnerabilities that come out of a static analysis tool.

There is a very good likelihood that 80% plus of them are false positives, another 18% are low-risk, and then the last 2% are really things that we need to fix. When I take that down from those 3,000 issues to the last, say, 50 issues or so that are real and that we really need to fix, that didn’t increase my security by 90-plus percent. All it really did was show that I actually put the effort into the program and made sure that I whittled down the actual number of issues that I’m staring at on a day-to-day basis, so that I actually know what is important to work on going forward.

Yeah. That is the biggest change in how I think about things: thinking about security in terms of program effectiveness, versus thinking about it in terms of how we measure the security of the product, because I don't think measuring the security of the product is a feasible goal.

Then, I guess, the other thing that I have found is that it's really important to be consistent in your metrics. Say a year ago, we implemented an initial set of metrics for measuring program health. These are things like SLA on-time percentage. Across your developers and within individual lines of business, how well did your developers do at resolving vulnerabilities within the timeframe that you theoretically have determined according to risk? That was a metric.

Then, we did metrics around things like open issue count. We did things around vulnerabilities discovered versus resolved, average time to discovery, average time to remediation, so on and so forth. We got all of those metrics in place.

Then we changed things over the course of a year that made the metrics less valuable. For example, we changed the SLA timeframes. When we changed the SLA timeframes, we found that the metrics did not correlate, at least in the way that we had built them, in a way that made sense. Also, we determined that measurements by team fell apart as teams grew and shrunk and split and dissolved and all kinds of things.

We found it really difficult to maintain consistent measurements over a year of change within the organization, because you both change the organization and product that you're measuring and you end up wanting to change the things that you're measuring, because you didn't like it the first time around.

Consistency in the things that you're measuring when you're measuring progress is more important, I think, than consistency in your measurements when you're trying to measure security, because we're not saying it's 99% secure. We're saying, this program was 90% effective. Yeah, I think that those are my biggest takeaways.

I’m happy to dive a little bit more into the metrics and what I frequently describe as reading the tea leaves, because I think there are a lot of insights that can be gleaned out of things like time to discovery and time to remediation. Those, on their surface, without additional analysis, are really difficult to interpret.

[00:21:42] Guy Podjarny: Thanks for the overview. I think this theme of how well you are deploying a security control, if you will, if you think about a security program as a control, versus the effectiveness or efficacy of that control, is definitely recurring. I’ve heard it come from the security leaders at Pinterest and in how it worked at Lyft; those are just recent examples. It definitely comes across a lot.

Before we dive into those metrics and how you do it, do you feel there's a need to measure the efficacy of the control itself off to the side? Like, how well are you prioritizing? I mean, what's the control group here or the balance to help you figure out if you're putting the right program in place in the first place?

[00:22:24] Brendan Dibbell: Yeah. That's an excellent question. I don't know that I have a good answer to it. I would say that I actually mentioned the time to discovery on vulnerabilities. I mentioned that one of our metrics for program health is vulnerability count. When I look at the success of us building the program, one of the key metrics that I use in determining that is time to discovery.

When we discover vulnerabilities, how many of them were there five years ago, when we first built the product, and hadn't been discovered for a long time? That is one measurement I do look at for overall program outcome. I also look at the total number of vulnerabilities discovered, which is a really, really difficult metric to build and interpret, because as a security engineer interested in the security of the overall platform, you're constantly turning a lot of knobs and levers to try to make sure that number is correct. That number being low means one of two things. It either means that you're doing a really good job and not a lot of new vulnerabilities are being introduced into the product, or it means that you're not testing it enough.

We're constantly tuning those testing levers up and down especially. When we're tuning those testing levers up and down, it makes that number much less meaningful. There's no easy way to correlate those numbers with changes in those other knobs and levers. We do look at those metrics and we think they're important, but they always have to be taken with a grain of salt and put in the context of other changes that you make to the program, because otherwise, you're just going to be, as I said, reading the tea leaves.

I think people frequently try to see what they want to see in those numbers. If I start a bug bounty program, and this is a real-world example, this happens to a million people who start a bug bounty program, you get hundreds of issues reported in your first 90 to 180 days of the program and then they start to fall off.

Then you reach a point where you're like, “Okay. We must be doing really well, because I’m not seeing any of these issues coming anymore.” It may just be that people have lost interest in the program. Be careful not to look at those numbers and read them at face value and say, “Oh, there must be fewer vulnerabilities in the platform,” because there may be other factors.

[00:24:59] Guy Podjarny: Yeah, it's a really tough incentives problem. You're almost incentivizing yourself not to employ new vulnerability discovery solutions if you are measuring your success by the number of vulnerabilities you have. You find yourself in an ignorance-is-bliss type of model, because that's the incentive you've created. Worse, I’ve actually even seen places that literally have bonuses, where they’ll have that as a formal KPI tied to individuals, which definitely has some fairly significant flaws.

[00:25:30] Brendan Dibbell: Yeah. On the other side of things too, we introduced something recently in terms of our mean time to discovery metrics, where we started flagging issues that were discovered pre-application release. If it was discovered in secure code review, or if it was discovered at change control time or something, if there's some checkpoint in the process, outside of the tools that block the build, that stopped a vulnerability before it went live, we track that as a zero-day time to discovery.

We are really incentivized to find those, because a much lower time to discovery looks really good on our metrics. The second that we introduce that, it becomes just a hunt to find all of those and not a hunt to track down the technical debt. Just because we're decreasing our time to discovery and finding all these zero-day time-to-discovery issues doesn't mean there aren't still five-year-old issues in the product. It just means that we're focusing how we look for them in a way that is more effective at catching issues that are introduced during development than at discovering technical debt.
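As a rough illustration of how such a metric might be computed (a minimal sketch only; the record fields and the pre-release flag are assumptions for illustration, not Toast's actual tooling), time to discovery is the age of the flaw when it is found, with anything caught at a pre-release checkpoint recorded as zero days:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical record of a discovered vulnerability: when the flawed code was
// introduced, when the flaw was found, and whether it was caught before release.
record Discovery(Instant introducedAt, Instant discoveredAt, boolean caughtPreRelease) {}

public class TimeToDiscovery {

    // Pre-release catches (secure code review, change control) count as zero days;
    // anything found after release is measured by the age of the flaw when it was found.
    static long daysToDiscovery(Discovery d) {
        if (d.caughtPreRelease()) {
            return 0;
        }
        return Duration.between(d.introducedAt(), d.discoveredAt()).toDays();
    }
}
```

As the conversation notes, a sketch like this also shows where the incentive problem creeps in: pre-release catches pull the average down sharply, which says nothing about the older flaws still sitting in the product.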

[00:26:38] Guy Podjarny: Yeah. No, absolutely. Complicated topic. These are examples of metrics that did work. Do you have examples of metrics that you've tried and veered away from? I guess we talked a little bit about the overall vulnerability count, but are there other examples of iterations you've left behind, so people can try to avoid that mistake themselves?

[00:27:00] Brendan Dibbell: I think the number one metric that we've moved away from, and maybe it works for some people, I have no problem with programs that look at this metric and find it valuable, is time to remediation in general.

We have essentially looked at that and, well, we've done all sorts of things with this. We have tried to break it down by severity: time to remediation for high severity, time to remediation for medium severity, and trying to measure those and get those numbers down. We've moved away from that largely because of noise. When you're measuring averages over an application's vulnerabilities, which get discovered across different lines of business, have different severities, and which teams interact with differently, it was a slightly too noisy metric. It also didn't really work well for teams that had, say, two issues reported in a quarter. That's not a lot of data points.

We've moved mostly away from these time-based metrics and into SLA adherence. When we look at how well teams are doing at remediating vulnerabilities, if you did all of them within the SLA timeframe, regardless of how long it took you to do them, that is 100%. You did everything that you should have according to what we told you to do, which is a much more valuable number than when someone gets something reported that takes them two years to fix, because it's an incredibly complicated architectural change and low severity. They fix that and it's like, well, A) what quarter does that measurement actually fall in? And B) that's going to skew your metrics.

Focusing on "did you do it by the time that we told you to" is, I think, the biggest change, and it has been really useful to us, especially since it is easier for teams to understand.
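To make that binary SLA-adherence idea concrete, here is a minimal sketch (the severity-to-SLA mapping and the Finding fields are illustrative assumptions, not Toast's actual system): each resolved finding either made its SLA window or it did not, and the metric is simply the percentage that did, regardless of how long each individual fix took.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;

// Hypothetical finding record: severity, when it was reported, when it was resolved.
record Finding(String severity, Instant reportedAt, Instant resolvedAt) {}

public class SlaAdherence {

    // Assumed SLA windows per severity; a real program defines its own.
    private static final Map<String, Duration> SLA = Map.of(
            "critical", Duration.ofDays(7),
            "high", Duration.ofDays(14),
            "medium", Duration.ofDays(30),
            "low", Duration.ofDays(90));

    // A finding is "on time" if it was resolved within its severity's SLA window.
    static boolean onTime(Finding f) {
        Duration allowed = SLA.getOrDefault(f.severity(), Duration.ofDays(90));
        return Duration.between(f.reportedAt(), f.resolvedAt()).compareTo(allowed) <= 0;
    }

    // SLA adherence: percentage of resolved findings fixed within SLA.
    // A two-year, low-severity architectural fix that met its window still counts as 100%.
    static double adherencePercent(List<Finding> resolved) {
        if (resolved.isEmpty()) {
            return 100.0;
        }
        long withinSla = resolved.stream().filter(SlaAdherence::onTime).count();
        return 100.0 * withinSla / resolved.size();
    }
}
```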

[00:28:58] Guy Podjarny: Yeah. That's a super interesting insight. I was almost worried when you started this off, but it sounds much more insightful than that. It's not that you don't care about how long it took to remediate the issue; you just make it a bit more binary, is it within the SLA or outside the SLA, to clean up the data and allow you to maintain that over time.

[00:29:18] Brendan Dibbell: Yeah. It's also important to us, because A) every issue is different and B) SLAs are good because they allow teams to keep working. I think that one of the things I have found with a lot of tools is that they're very focused on building security gates around "does an issue exist or does it not exist," instead of "does this issue pose risk, and has the team been working on it," and so on and so forth. This is actually something I’ve talked about a lot with Snyk, obviously: how do we introduce this concept into our SDLC?

When we look at build blockers and at our SLAs, technically according to our SLAs, a team should have 30 days to fix a medium severity issue. We don't want to block the build on all medium severity issues that come in through the SDLC immediately, because it's really disruptive to a workflow, especially in the case of dependencies where one day, something is not a medium issue and then the next day it is and their build starts failing.

We do want to prevent them from breaking SLA. One of the things that we're actively working on (it's not in place yet) is, instead of breaking the build when a medium issue exists, breaking the build when a medium issue has existed for over 30 days. That's not just some theoretical number; that's our actual SLA for these issues. If you define your program in terms of these binary SLAs, it makes both measurements and build and go-to-production decision points much easier.
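Here is a minimal sketch of that kind of age-based gate (a generic illustration under assumptions, not Toast's or any particular scanner's implementation; the 30-day medium window is the example SLA from the conversation): the build fails only when an open issue has outlived its SLA window, not the moment it first appears.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;

// Hypothetical open issue: severity plus the date the scanner first reported it.
record OpenIssue(String severity, Instant firstSeen) {}

public class SlaBuildGate {

    // Example SLA windows; the 30-day medium window matches the episode's example.
    private static final Map<String, Duration> SLA = Map.of(
            "high", Duration.ofDays(14),
            "medium", Duration.ofDays(30));

    // Block the build only on issues that have been open past their SLA,
    // so a newly reported medium issue does not break the team's workflow on day one.
    static boolean shouldBlockBuild(List<OpenIssue> openIssues, Instant now) {
        return openIssues.stream().anyMatch(issue -> {
            Duration allowed = SLA.get(issue.severity());
            if (allowed == null) {
                return false; // severities without an SLA never block the build here
            }
            return Duration.between(issue.firstSeen(), now).compareTo(allowed) > 0;
        });
    }
}
```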

[00:31:01] Guy Podjarny: Yeah. I love that approach, indeed, the pragmatic angle to it. At Snyk in the early days, we added this snooze capability with a far less structured mindset behind it, but it was basically around that same element of giving a dev team the choice to say, “Hey, I found this issue. I get it. I don't want to ignore it. I just don't think it's important enough for me to stop the presses and not proceed, so I’m going to snooze this for a bit and continue.” Definitely, a lot of evolution has happened since.

[00:31:29] Brendan Dibbell: We use that a lot, especially when we get dependencies, like frameworks, that have transitive dependency issues. For example, I’m sure a million Java developers have the same experience as I do right now. Everything includes Jackson, and Jackson has a million issues, because it's a deserialization library and deserialization components always have a million issues, between gadget chains and RCE and all these categories of things.

Also, a consequence of it being so popular is that it is included as a dependency in all these other things. When we introduce something that includes Jackson as a dependency and that has a vulnerability, but the actual thing hasn't been fixed yet, I don't want to deal with the, “All right, I’m going to do some Maven or Gradle magic to include the right version.” I just want to wait for there to be a new release of whatever library I’m using.

We use snooze all the time to make sure that we are reminded about it at some point, but that we can continue working while we're waiting for a new release to happen. That goes back to the thing I mentioned before, which is that the goal is not to have zero problems. The goal is to get the noise down. If we're consciously making a decision to suppress a warning about a vulnerability, as long as it's documented and we have a reasoning for it, that's fine.
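As a generic illustration of that "snooze but stay reminded" pattern (a sketch only; this is not Snyk's policy format, and the identifiers and fields are made up for illustration), a suppression carries a documented reason and an expiry date, after which the finding surfaces again:

```java
import java.time.Instant;
import java.util.Map;
import java.util.Optional;

// Hypothetical snooze entry: why the finding was suppressed and until when.
record Snooze(String reason, Instant expiresAt) {}

public class SnoozePolicy {

    // Keyed by a vulnerability identifier, e.g. an advisory ID on a transitive dependency.
    private final Map<String, Snooze> snoozes;

    public SnoozePolicy(Map<String, Snooze> snoozes) {
        this.snoozes = snoozes;
    }

    // A finding is reported unless it has an unexpired, documented snooze on record.
    public boolean shouldReport(String vulnerabilityId, Instant now) {
        return Optional.ofNullable(snoozes.get(vulnerabilityId))
                .map(s -> now.isAfter(s.expiresAt())) // expired snooze: surface it again
                .orElse(true);                        // no snooze on record: report it
    }
}
```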

I see too many people too often get hung up on this idea that we have to fix all of these 3,000 issues right away. It's like, no, you can make a conscious decision not to do that, but you should take the time to make that decision deliberately. Just because you have 3,000 issues doesn't mean it's a binary decision between fixing all of them and fixing none of them. You have to go through, triage them, and feel comfortable with marking things as won't fix, or as false positives, or as having some other mitigation applied. Go and just take the work and do it.

[00:33:27] Guy Podjarny: Yeah. No, well said. You have to be practical. It's not about perfection. It's about doing the right thing. I’m tempted to keep going; you've had some great answers, and I'd ask about celebrating success and all sorts of other aspects, but I think I might need to have you back on the show, or do a bit of a write-up, because I think we're a little bit out of time.

Before I let you go though, I’m going to try and squeeze one more bit of advice from you. If you're meeting a team, looking to level up their security prowess, what would be one bit of advice you’d tell them to do, or to stop doing?

[00:33:58] Brendan Dibbell: I would say that the biggest thing you can do as someone building a security program is to not try to take ownership over too many things. Far too often, and I mentioned this at the start of our conversation, people make the mistake of trying to do everything and they make the mistake of trying to solve security themselves overnight. I would say that people should focus on two things: focus on helping other people take ownership of security, so that you have ownership over fewer things in security, and focus on taking security one step at a time.

You don't have to be better overnight. You can take baby steps. It's going to be okay. Just work with your team to make sure that you have a clear and concrete path forward, but that you're not just trying to do everything yourself. Because if you are, you're going to inevitably drive yourself crazy and never get where you want to be.

[00:35:01] Guy Podjarny: I think that's very sound advice. Especially in the world of security, it's easy to get overwhelmed if you try to do everything at once, and it feels so scary to put things aside.

[00:35:11] Brendan Dibbell: Yeah. I won't say that I have never panicked when I discovered a security issue. I’m not immune. There are definitely times where I discover something and it's like, “I have to fix this.” It really helps to take a step back and think about the bigger picture, because if you interrupt your plans to grab whatever feels like the hottest potato at any given time (it's not the right analogy, but you know what I mean), you're always going to gravitate towards what you think is the highest risk.

In security, something new is always going to feel like the highest risk. There's always going to be a new threat out there. There's always going to be a new vulnerability that you can jump on. Don't lose sight of the bigger picture just because something new and seemingly important has jumped out at you.

[00:36:04] Guy Podjarny: Yeah. No, very much the case. Brendan, thanks again for coming on and sharing these learnings; I think this was just packed with great and practical advice. Thanks for coming on.

[00:36:14] Brendan Dibbell: Thank you.

[00:36:15] Guy Podjarny: Thanks everybody for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]