Episode Summary
Imagine if AI could detect and fix vulnerabilities in your code faster and with greater precision than ever before. That future is already here! In today’s episode, we’re joined by Berkay Berabi, an AI researcher and Senior Software Engineer at Snyk, to dive into the cutting-edge world of AI-powered vulnerability detection. Berkay offers insight into how Snyk is leveraging a hybrid AI approach to detect and fix vulnerabilities in code, combining human-driven expertise with machine learning for greater accuracy and scalability. He also introduces CodeReduce, a game-changing tool by Snyk that strips away irrelevant code, streamlining the detection process and addressing the challenges posed by complex, multi-step data flows. Through rigorous model testing, Snyk ensures that AI-generated fixes are validated to prevent errors, making the process faster and more reliable.
Show Notes
In this fascinating episode of The Secure Developer, host Danny Allan sits down with Berkay Berabi, an AI researcher at Snyk, to explore the groundbreaking CodeReduce technology and its implications for software security. Berabi, who transitioned from electrical engineering to AI research, shares insights into how Snyk is revolutionizing vulnerability detection and remediation using artificial intelligence.
The conversation delves deep into the technical aspects of CodeReduce, explaining how this innovative approach reduces complex code structures by up to 50 times their original size while maintaining vulnerability detection capabilities. Berabi explains the sophisticated process of code reduction, analysis, and fix generation, highlighting how AI models can better understand and address security vulnerabilities when working with simplified code. The discussion also covers the challenges of different AI models, from T5 to StarCoder and Mixtral, exploring their varying capabilities, accuracies, and performance trade-offs.
The episode critically examines the future of AI in software development, addressing both opportunities and concerns. Berabi and Allan discuss recent findings about AI-generated code potentially introducing new vulnerabilities, referencing Gartner's prediction that by 2027, 25% of software vulnerabilities could be created by AI-generated code. They explore how tools like CodeReduce and other AI-powered security measures might help mitigate these risks while examining the broader implications of AI assistance in software development. This episode offers valuable insights for developers, security professionals, and anyone interested in the intersection of AI and software security.
Links
Berkay Berabi: "We actually also keep track of the derivation. The derivation in that context basically means why we think something is vulnerable. So, we not only say to the users, 'Hey, this is vulnerable,' but we also show them basically the flow that we extracted and why we think this is vulnerable. Since we know this derivation and we can also basically query the analyser multiple times after maybe we changing something in the code. What we can actually do is we can maybe try to filter out the irrelevant parts in the code. After we do it, maybe we can ask the analyser again, 'Hey, do you still find the vulnerability?' If yes, okay, maybe those things that we just removed, they were indeed irrelevant."
[INTRODUCTION]
[0:00:42] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun.
This podcast is brought to you by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open-source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.
[EPISODE]
[0:01:23] Danny Allan: Hello and welcome to another episode of The Secure Developer. I'm super excited to be back with you today. It's Danny Allan, the CTO of Snyk. Today, I have a very special guest. I am joined by Berkay Berabi, who is one of our AI researchers here at Snyk. He's been with us for four years and has been very involved in a new research area that we have published and are patenting around CodeReduce. So, we're going to get into that. First of all, maybe Berkay, you can just introduce yourself for the audience.
[0:01:54] Berkay Berabi: Yes, sure. Hello, everyone. I am Berkay. I'm very happy to be here today. Thanks for having me. Yes, I studied computer science, mostly AI and machine learning, and then I joined Snyk. I have been here in the machine learning team for around four years now. I've worked on a bunch of things and yes, we'll talk about those today.
[0:02:15] Danny Allan: I'm interested in how you got into AI, Berkay. Was it something you were always interested in, even before you went to school, or was it something that, as you were going through school, made you think, "Hey, this is fascinating. This is where the world is going"?
[0:02:28] Berkay Berabi: No, actually, it hasn't always been the case. I studied electrical engineering in my bachelor's. I was always into engineering, maths, and physics, but not really computer science. So, I actually learned programming for the first time when I was 20. But yes, I did electrical engineering, and then, when deciding on my master's, I realised I was not really that interested in electrical engineering. I realised I was actually more interested in programming, and the AI hype was about to start back then. So I said, "Okay, maybe I should switch to computer science." I applied to a computer science program, and then I did my master's in AI.
[0:03:07] Danny Allan: Wow. That is the perfect place to do it, because I know that you said that AI is hype, and it is true. We hear an awful lot about AI and machine learning, but it is changing the world, especially the software world, and everyone's role. I always say, if I could use AI to take control of my calendar and make myself more productive, I would. I know even here at Snyk, we're now using machine learning and chatbots to make everyone within the organisation more productive. So, it's a very big thing.
I want to talk today specifically about CodeReduce. This is a project that you've been working on for a while now, and it has to do with generating fixes. So maybe we can start with understanding what it is that we're actually fixing: vulnerabilities. Let's start by just sharing with the audience a little bit about the discovery of vulnerabilities and how that takes place. Because if I'm not mistaken, we use machine learning for that as well. Is that correct?
[0:04:02] Berkay Berabi: Yes, that's correct.
[0:04:03] Danny Allan: Maybe you can just share how we discover vulnerabilities and how we use machine learning to do that.
[0:04:09] Berkay Berabi: Yes, sure. Maybe just as a concept, what is a vulnerability? Basically, it's very easy to write an application, but then the security complications arise, because even for the most basic thing, you actually need an input from the user, right? You use that input from the user, you do some computations in the backend, and this can bring a lot of complications. For example, if you search for someone in a web application, like in Facebook, you type a name, and that name actually goes into the database.
I could also write, for example, some SQL query in there that is malicious and will then be executed. So, all those kinds of concerns are basically not on your mind when you are prototyping, developing something. But as soon as you have a product, of course, that's one of the most important things. What we do with our software, the Snyk Code analyser, is basically develop a static analyser that can analyse your code, analyse the code of the customers, and then report all these issues where an attacker or a malicious user could actually do some harm.
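To make that SQL injection scenario concrete, here is a minimal TypeScript sketch of the kind of flow a static analyser flags. The Express-style handler and table name are hypothetical, not taken from the episode; the point is the unsanitised path from user input into a query, and the parameterised alternative.

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool();

// Vulnerable: the user-supplied name is concatenated straight into the SQL text,
// so input like "x'; DROP TABLE users; --" changes the meaning of the query.
app.get("/search", async (req, res) => {
  const name = req.query.name as string; // source: user input
  const result = await db.query(
    `SELECT * FROM users WHERE name = '${name}'` // sink: dynamic SQL
  );
  res.json(result.rows);
});

// Safer: a parameterised query keeps the input as data, never as SQL syntax.
app.get("/search-safe", async (req, res) => {
  const name = req.query.name as string;
  const result = await db.query("SELECT * FROM users WHERE name = $1", [name]);
  res.json(result.rows);
});
```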
[0:05:16] Danny Allan: We tend to think of inputs as filling out data in a form, but it could be a query string, it could be a cookie, it could be an HTTP header. There's lots of ways to get that input into the application, isn't there?
[0:05:26] Berkay Berabi: Yes, exactly. Definitely.
[0:05:29] Danny Allan: Yes. If that is malicious and something bad happens, that is not good. So, how do we go about looking for that type of malicious activity, or when things aren't correctly validated? How does that work?
[0:05:43] Berkay Berabi: Of course, in large enterprises, you have lots of code. This is basically not something you could monitor manually. Of course, when something is down, or maybe an attacker manages to drop your database, then you realise it. But at that point, it's also too late, because they have probably already extracted all the information. That's why we have Snyk Code, and it can actually do all this analysing for you. Then, you can go over the vulnerabilities, but it is still a manual process to understand the vulnerabilities and actually fix them, unless you are using AI for this as well.
[0:06:21] Danny Allan: How do we identify the vulnerability? Let's start with that because that's the first step. You need to first identify the vulnerability and then we want to – well, we'll get into it, reduce the code. But how do we identify the vulnerability itself?
[0:06:34] Berkay Berabi: Yes. For that, we use a hybrid approach. We call this hybrid AI. What happens there? Basically, of course, we have some logical rules that define certain vulnerable patterns in the code, in the programs. Things like: okay, this is a user input, and now it goes through these functions. None of these functions apply proper checks or proper sanitisation. Then, at the end, it goes into this vulnerable or dangerous function where it is used without being sanitised.
These patterns are written by security analysts, but of course, this is a very complex thing because you have to write those patterns for every vulnerability, for every different kind of case. This is a good approach, but it's not scalable. That's why we also use machine learning on top of this, to make the approach more scalable and more accurate. So, for example, we use machine learning to analyse whether something is really dangerous, whether it is really a user input, or whether it's a false positive. We basically combine this logic with the AI to improve the accuracy and also to improve the vulnerable patterns that we detect.
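As a rough illustration of that hybrid idea, here is a hypothetical, simplified rule shape in TypeScript. Snyk's real rule format and internal APIs are not described in the episode, so every name below is an assumption; it just shows how a source/sanitiser/sink pattern and an ML-based false-positive filter could fit together.

```typescript
// Hypothetical shape of a taint rule: none of these names are Snyk's real API.
interface TaintRule {
  id: string;
  sources: string[];    // places where user input enters the program
  sanitizers: string[]; // calls that make the data safe
  sinks: string[];      // dangerous functions the data must not reach unsanitised
}

const sqlInjectionRule: TaintRule = {
  id: "sql-injection",
  sources: ["req.query", "req.body", "req.headers", "req.cookies"],
  sanitizers: ["escapeSql", "validator.escape"],
  sinks: ["db.query", "connection.execute"],
};

interface Finding { ruleId: string; flow: string[]; snippet: string }

// Assumed helpers: the rule engine proposes candidate findings, and an ML model
// then scores each one to filter likely false positives.
declare function matchRule(rule: TaintRule, code: string): Finding[];
declare function mlConfidence(finding: Finding): number;

function detect(code: string): Finding[] {
  return matchRule(sqlInjectionRule, code).filter((f) => mlConfidence(f) > 0.5);
}
```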
[0:07:45] Danny Allan: My understanding is that it's really a symbolic analysis. We're translating the code into data flows and then understanding the data flow from the source through to the sink. Is that a correct understanding?
[0:07:59] Berkay Berabi: Yes, definitely. This whole approach, of course, does not work on the text itself, on the text of the code. We first analyse the code and parse it into an abstract syntax tree, or basically any kind of abstract tree, where we also do some further analysis. Then we can run this analysis on top of that tree.
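For readers who want to see what "working on a tree rather than on text" looks like, here is a small sketch using the public TypeScript compiler API. Snyk's internal representation is its own and almost certainly richer; this is only an analogy showing how analysis walks nodes instead of characters.

```typescript
import * as ts from "typescript";

const source = `
  const name = req.query.name;
  db.query("SELECT * FROM users WHERE name = '" + name + "'");
`;

// Parse the text into an abstract syntax tree; the analysis then works on nodes,
// so "what flows into what" is explicit in the structure rather than in strings.
const file = ts.createSourceFile("example.ts", source, ts.ScriptTarget.Latest, true);

function walk(node: ts.Node, depth = 0): void {
  console.log(`${" ".repeat(depth * 2)}${ts.SyntaxKind[node.kind]}`);
  ts.forEachChild(node, (child) => walk(child, depth + 1));
}

walk(file);
```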
[0:08:23] Danny Allan: Okay. That's a good understanding of how we can determine whether there's a vulnerability. But let's talk specifically now about CodeReduce, because that data flow can be very complex and very lengthy. An input coming from a user might go through 20 or 30 steps before it gets to a dangerous function. That's problematic, is my understanding. Why is that problematic, the length of that data flow?
[0:08:48] Berkay Berabi: Yes. It's problematic for various reasons. The first one is that, at the end, we want to use machine learning and AI, and of course, the longer the context is, the harder it is for the model to understand. This has always been a problem with programming languages. If you compare them to natural languages, there it is not the case: when you talk about something, usually all the things that are close together are related. But this doesn't have to be the case for programming languages. You might have functions that are next to each other but not related, or a function that is actually called maybe a thousand lines later.
The model basically has to understand all these long-range dependencies. Of course, this can also happen across different files; it doesn't have to be all in the same file. That's the first and the biggest problem. The second problem is the feasibility of actually running this as a product, because when you have all this information, the model has to process all of it, which takes a lot of time. Very likely, the model will then also output something very, very long again, which also takes time and money. So, this is problematic both in terms of the user experience and in terms of the costs.
[0:10:02] Danny Allan: One of the things that your team discovered, and maybe it was you specifically, but the group that you were working on, is that reducing that tree aids in the accuracy of the analysis. Is that right?
[0:10:15] Berkay Berabi: Yes. First, it was the whole team, not only me, of course. So, what we are doing is basically the following. We are analysing the code, and we are finding some vulnerabilities. But while we are doing this analysis, we also keep track of the derivation. The derivation in that context basically means why we think something is vulnerable. We not only say to the users, "Hey, this is vulnerable," but we also show them the flow that we extracted and why we think this is vulnerable. So, we thought: okay, since we know this derivation, and we can also query the analyser multiple times after we change something in the code, what we can actually do is try to filter out the irrelevant parts of the code.
After we do that, we can ask the analyser again, "Hey, do you still find the vulnerability?" If yes, okay, maybe those things that we just removed were indeed irrelevant. Otherwise, you can always put them back and try again. So, we use this derivation and all the data flow information to create a compact version of the code that is still vulnerable and can still be detected by the analyser, but the context is just much, much smaller and focused on the vulnerability.
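The reduction loop Berkay describes is reminiscent of delta debugging: remove a candidate piece of code, ask the analyser whether it still reports the same finding, and keep the removal only if it does. Here is a minimal sketch of that idea in TypeScript; the analyser interface and the naive chunking strategy are assumptions for illustration, not Snyk's actual implementation, which works on the parsed tree and the recorded derivation.

```typescript
// Assumed analyser interface: re-runs static analysis and reports whether the
// original vulnerability (identified by its derivation) is still found.
interface Analyzer {
  stillFindsVulnerability(code: string, vulnerabilityId: string): boolean;
}

// Hypothetical chunking: treat blank-line-separated blocks as removable units.
function splitIntoChunks(code: string): string[] {
  return code.split(/\n{2,}/);
}

function codeReduce(code: string, vulnId: string, analyzer: Analyzer): string {
  let chunks = splitIntoChunks(code);
  let changed = true;
  while (changed) {
    changed = false;
    for (let i = 0; i < chunks.length; i++) {
      const candidate = chunks.filter((_, j) => j !== i).join("\n\n");
      if (analyzer.stillFindsVulnerability(candidate, vulnId)) {
        // The removed chunk was irrelevant to the finding: commit the removal.
        chunks = chunks.filter((_, j) => j !== i);
        changed = true;
        i--; // re-check from the same position after the removal
      }
      // Otherwise the chunk was needed for the derivation, so it stays.
    }
  }
  return chunks.join("\n\n");
}
```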
[0:11:29] Danny Allan: What would be an example of something that was irrelevant to the data flow? Dumb it down for me, put it in practical terms. What type of code might you throw away when you're going through that derivation?
[0:11:41] Berkay Berabi: Yes, sure. Say, for example, I have a file which is written for the purpose of interacting with the database. Of course, there are lots of things in there, such as searching for a user, creating a new user, deleting an existing user. Let's say the vulnerability actually only happens when a new user is being added. Obviously, we only need that function, and maybe some other metadata in the file. But all the other code, such as deleting a user or querying a user, is in that context not related.
So, neither I as a human when I am fixing it, nor the model when trying to fix it, actually has to know about the other interactions with the database. We just care about this particular vulnerability and the related context.
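A toy before/after of that database-module example, in TypeScript. The function names are invented for illustration; the point is simply that only the code on the vulnerable flow (adding a user) survives reduction.

```typescript
// Before reduction: a whole data-access module.
import { Pool } from "pg";

const db = new Pool();

export async function findUser(name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name]);
}

export async function deleteUser(id: number) {
  return db.query("DELETE FROM users WHERE id = $1", [id]);
}

export async function addUser(name: string) {
  // Vulnerable: string concatenation into SQL.
  return db.query(`INSERT INTO users (name) VALUES ('${name}')`);
}

// After reduction (conceptually), only what the finding needs remains:
//   import { Pool } from "pg";
//   const db = new Pool();
//   export async function addUser(name: string) {
//     return db.query(`INSERT INTO users (name) VALUES ('${name}')`);
//   }
```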
[0:12:33] Danny Allan: Yes. So, it strips it out. Have we done any analysis on how much of the code on average is stripped out? I know the answer is always "it depends", and it depends on the code type, but is it reducing it by 50%, by 80%, by 20%? What level of optimisation is this giving you by reducing the code?
[0:12:51] Berkay Berabi: Yes. That's a great question. This, of course, depends highly on the type of the vulnerability. But yes, we have done this analysis. So, we usually categorise the rules into different categories.
For example, we have AST rules. They are matched by using only the abstract syntax tree, without very complex data flow analysis. There, the compression rate is much, much higher because they are more local issues. There, it was around 50x; the code was reduced 50 times. If you go to more complex cases, such as taint flow analysis or file-wide inter-procedural analysis, then it is mostly around 20x. Still a lot, but of course, compared to the AST rules, it's a bit less.
[0:13:38] Danny Allan: Wow. Well, that's super impressive, 50x. Even 20x is impressive. You go from that, you derive it down to just the part of the code that is vulnerable, and then what's the next step?
[0:13:49] Berkay Berabi: The next step is, of course, to create an AI model that can learn on this data, on this reduced code. One thing you need to train this model is both the vulnerable and the fixed version. We also reduce the fixed version, and we provide the model with pairs of vulnerable and fixed code versions, both reduced. The model learns on this.
Then, after that, we can take the model, and at inference time, when the customer or user requests a fix, we do the same procedure again. We reduce the code and give it to the model, and since the code is reduced, the model can understand it in a much better way; it doesn't have to learn all those complex long-range dependencies. Then the model generates a fix. But of course, this fix is also reduced; the model only generates a reduced fix. Then, we have some other algorithms in the backend that we use to merge this reduced fix back into the original file, so that we can provide end-to-end fixing.
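Putting those steps together, the inference path might look roughly like the sketch below. Every interface here (reducer, model, merger, and the mapping shape) is an assumption to illustrate the flow Berkay describes, not Snyk's actual code.

```typescript
interface Reduction {
  reducedCode: string;
  // Assumed shape: information needed to map the reduced snippet back into the
  // original file, e.g. which original line ranges each fragment came from.
  mapping: Array<{ reducedRange: [number, number]; originalRange: [number, number] }>;
}

interface FixModel {
  generateFixes(reducedVulnerableCode: string, numCandidates: number): Promise<string[]>;
}

// Assumed helpers for the sketch.
declare function reduce(originalCode: string, vulnId: string): Reduction;
declare function mergeBack(original: string, reducedFix: string, r: Reduction): string;

async function fixVulnerability(
  original: string,
  vulnId: string,
  model: FixModel
): Promise<string[]> {
  const reduction = reduce(original, vulnId);
  // The model only ever sees (and produces) reduced code, which keeps the
  // context small and the generation fast.
  const reducedFixes = await model.generateFixes(reduction.reducedCode, 5);
  // Each reduced fix is merged back into the full original file.
  return reducedFixes.map((fix) => mergeBack(original, fix, reduction));
}
```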
[0:14:51] Danny Allan: It dehydrates it and then it rehydrates it to put it back in.
[0:14:56] Berkay Berabi: Yes.
[0:14:57] Danny Allan: Well, that's fascinating. It's clear that we're entering an era where AI is generating the fixes and making these things far easier. Do you worry about hallucinations? I know when people talk about AI and ML that it might generate, well, I have two questions. One is, do you worry about hallucinations? And also, how do you guarantee the accuracy of that fix that you're generating for the end user?
[0:15:22] Berkay Berabi: Yes. Okay. Let's start with the easy one, the first one. Yes, of course, I'm also worried about hallucinations, and I think they are just the nature of how these models are trained. We talk about intelligence, but I'd rather describe it as pattern replication. When you do something statistically, it just tries to generate the text which is most likely, so to say, based on what it has seen. Of course, since it's just statistics, occasionally there are also some bad words, bad tokens, undesired effects. I think this is just the nature of how they are trained, and it will keep happening as long as we do it the same way.
I guess we just have to come up with other ways, such as post-processing, validating the outputs, and so on, to at least reduce the effects or impacts of these hallucinations. This is also what we do, which brings us to the second question. If you just query a model and say, "Hey, generate me a fix," okay, maybe 60-70% of the time it will generate a good one. But what about the other times? It will probably break your code. Maybe it will make you think the vulnerability is gone, but in reality, it is not a proper fix; it's a partial fix. All of that can happen.
What we do is the following. We don't trust the model blindly. We collect the predictions from the model. In fact, we query the model several times and collect several predictions, so that we have several candidates and can increase the accuracy. After that, for each of these predictions, we rerun our Snyk Code analyser. We re-analyse the predictions, and we detect whether the initial vulnerability is still reported or not.
Of course, on top of that, we also have a bunch of other checks, such as that the code has to be parsable, syntactically correct, and all these things. We make sure that the vulnerability we were initially trying to fix is now not reported in the code, and that it's also, hopefully, fixed in the correct way.
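That "don't trust the model blindly" step could be sketched like this: take several candidate fixes (already merged back into full files), re-run the analyser on each, and keep only candidates that still parse and no longer report the original finding. Again, the interfaces and helper names are assumptions for illustration.

```typescript
interface AnalyzerClient {
  findsVulnerability(code: string, vulnId: string): Promise<boolean>;
}

// Assumed syntactic check (e.g. a parser that rejects broken output).
declare function parsesCleanly(code: string): boolean;

async function validateCandidates(
  candidates: string[], // full files, each with one candidate fix merged in
  vulnId: string,
  analyzer: AnalyzerClient
): Promise<string[]> {
  const checks = candidates.map(async (code) => {
    if (!parsesCleanly(code)) return null; // must still be valid code
    const stillThere = await analyzer.findsVulnerability(code, vulnId);
    return stillThere ? null : code; // keep only candidates that remove the finding
  });
  // All candidates are analysed concurrently, so the wall-clock cost stays close
  // to analysing a single file.
  const results = await Promise.all(checks);
  return results.filter((c): c is string => c !== null);
}
```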
[0:17:28] Danny Allan: That sounds very slow, that it's going through the process multiple times. Is this taking minutes, or hours, or days? How long does it take to go through that process?
[0:17:37] Berkay Berabi: Yes. On average, it takes 15 seconds right now in production, we can say. But of course, it depends on a lot of factors. For example, if the initial file is very, very long, sometimes we get files with, I don't know, 10,000 lines. Then, of course, the code reduction is very important, but it also takes more time, because it has to reduce all that code and understand it.
Then, once we have this file, we also have to re-analyse it several times, but our analysis times are usually very, very good. We have one of the fastest analysers in the industry. We also parallelise this. For example, I said we have to generate multiple predictions from the model. All these predictions are actually analysed concurrently, so it's parallelised. At the end, it's the same as analysing one file, because we analyse all of them in parallel. But yes, we do all these optimisations.
[0:18:32] Danny Allan: Yes, it's shocking to me. I guess I was leading the witness a bit, because I've used the AI Fix capabilities and I know how fast it comes back. Does the model matter much? I know that there are a lot of models out there. Last year, I think there were 50 new models introduced, and they continue to come out all the time. Does the model matter for the accuracy and speed of both the code reduction and the fix generation?
[0:18:56] Berkay Berabi: Yes, it actually matters a lot, and I think the biggest factor is the model size, basically the number of parameters. The more parameters you have, the more matrix multiplications you need to do, and that will also take more time. On top of that, there are also so-called model architectures. The size, or the number of parameters, is one thing, but you can also combine these parameters in different ways; this is called the model architecture. Right now, there are not many different model architectures; essentially, I think at the base, all of them are kind of similar to each other. What sometimes happens is that we use optimisation libraries that are developed by other companies and teams.
The thing is, they don't support all kinds of model architectures. You might have a great model, but if its architecture is not supported by these optimisation libraries, then you either have to serve it without optimisations, or with fewer optimisations, or you have to change to a model that has an architecture which is supported. This is also important.
[0:20:05] Danny Allan: What were the comparisons like? I know that we tested T5 and StarCoder and different models in this. Were there some models that were faster and better than others?
[0:20:15] Berkay Berabi: Yes, for sure. In terms of accuracy, for example, T5 is one of the early models. Back then, it was one of the state of the art, but by the time we tried it, it was getting slightly old, because the field evolves very fast. T5 was actually one of the ones that performed the worst. It was also the smallest one; that's why it was the fastest, maybe, but the accuracy wasn't great.
Then we have, for example, the sweet spots like StarCoder, which comes in various sizes from one billion to seven billion parameters: one billion, three billion, and seven billion. We experimented with all of those. We realised that three billion is kind of the sweet spot, because one billion was not performing very well. Three billion was performing pretty well, and it was also fast. Seven billion was maybe slightly better than the three billion version, but for that improvement, it was considerably slower.
So, these trade-offs also play a role. Then, we also tried Mixtral, which was a Mixture of Experts model. It was a huge model. It actually performed very well, but it has around 45 billion parameters. It was not feasible to serve, although it had the best accuracy scores.
[0:21:31] Danny Allan: Yes, it's always a trade-off. I know bigger is not always better. Bigger might be more accurate, but time also matters in this, and the speed at which you can do the analysis. Do you expect the models to continue to improve at the same rate? If we look forward five years from now, will bigger models be faster and better? Or, do you think we are about where it's going to be?
[0:21:56] Berkay Berabi: I think, at least initially, there will be a trend towards smaller models. Because during the research phase, the simple goal was to get better scores. No one cared about usability or the user experience; it was mostly about the research. Of course, if you could take a bigger model and improve those scores by maybe 1%, that was a great publication. But now, it is really starting to get into our lives, and the feasibility and the costs start to really matter.
I think there will be a trend towards smaller models. Actually, I think it's already happening. Right now, around eight billion parameters is kind of the sweet spot. Most models are also published at that size, which can still do lots of tasks with great accuracy but, compared to the others, is still servable as a product in the real world. So initially, there will be this trend towards smaller models, but later, with hardware improvements and all that stuff, if those improve, we can always try again. Then maybe we can run a 16 billion model at the speed of an eight billion one, and at the end, there can again be some tendency towards bigger models.
[0:23:13] Danny Allan: Where do you do your testing? Do you have A100s and H100s at home? Where are you doing your testing?
[0:23:20] Berkay Berabi: I wish I had an H100 at home; that would change everything. No, unfortunately, I don't have any GPUs at home. That would be good. It would also reduce my heating costs, probably. But no, we use the cloud, and we use cloud GPUs and do the testing there. Luckily, there we have A100s, so they are pretty fast. For serving, depending on the cluster, we have either A100s or slightly worse GPUs, basically to save some costs.
[0:23:52] Danny Allan: Well, it's funny you say that, because I often wish I had similar hardware at home. But unfortunately, I don't. It would heat my house as well, I think. The power consumption of the new hardware driving ML and AI is very impressive. Where do you think this is going? I know that we're using machine learning and generative AI for the discovery, for the reduction of the code, for the fix of the code. Do you see it being used in other areas of code? For example, do you see generative AI actually creating the code itself? And if it does, is it going to be generating secure code? Where is the industry going?
[0:24:26] Berkay Berabi: Yes. I think the applications will increase. Maybe not everyone will use it to write code, but at least as an assistant, it will be used more and more. Writing code is, I think, a dangerous topic. From my experience, if I ask it to do something, it's usually in the right direction, but it cannot be used as it is. And I'm saying this while putting the security concerns aside, purely from a functionality and correctness point of view. Of course, if you want to write a function that detects whether a number is prime or not, this is easy; you can take it as it is. But once you start doing more complex tasks, I don't think it's there yet.
Regarding security, we have actually benchmarked this. Another engineer in our team did this, and security is not even a concern for the generative models right now. Most of the time, the generated code is very simple and just focuses on the essentials, without the security concerns. We found a lot of vulnerable patterns, vulnerable code. Yes, if you use that code as it is, you will have lots of problems.
[0:25:41] Danny Allan: Yes, I saw that. Actually, one of the things that was interesting is that the newer models were generating less secure code. In fact, Gartner just came out with a Magic Quadrant for coding assistants, and there were two statements that were interesting. One is they said that by 2027, 90% of engineers will use a coding assistant, which was interesting. But it also said that 25% of software vulnerabilities would be created by that AI-generated code. That's kind of scary.
It's good, I guess, that we're thinking about how to use AI for the guardrails and for securing that code, which is what you're working on. What gets you most excited? When you think about the future of AI and ML, I wish I knew half as much as you about this world, but what makes you most excited about where we're going and what the future holds?
[0:26:27] Berkay Berabi: I think the most exciting thing is that it will allow us to do more complex things, because it will automate the simpler tasks. Then we as humans can actually focus on the more valuable things, and we can also build these ideas and products much, much faster.
I think the efficiency and the automation of the simple, burdensome things will have the biggest impact. That's the general one. The personal one is, of course, that I am very excited about AI Fix being used by our customers and by developers, and I hope this will keep increasing. I'm also excited that, as you said, AI will be used to generate code, but other AI models will also be used to fix that code. So, we are kind of moving to a world where AI models actually interact and do the job among themselves.
[0:27:28] Danny Allan: Yes. I'm interested in a world where the generated code isn't even insecure, that we use AI even before the code gets generated, but that's probably a topic for another day. If you could take AI and automate one part of your job to make you more productive, what would you use AI to do?
[0:28:26] Berkay Berabi: Okay, that's a good question. I would use it to help my teammates. That's not my day-to-day job, but we have lots of customer tickets, basically requesting features, or asking questions, or maybe about installing Snyk, and all these things. I think they take significant time for us. Of course, it's very valuable work; it has to be done. But I think this is also something AI can help with. Then, we can also focus on building more features, better analysis, faster fixing, and all that stuff. I think I would automate that part.
[0:28:26] Danny Allan: Yes, that's fantastic. AI, I look at it as an assistant to make us more productive, more efficient to focus on the things that we really want to focus on. Well, Berkay, it was great to have you on and to explain to our audience how CodeReduce works.
Congratulations, by the way, to you and your team. It's a fantastic step forward in the industry. It truly highlights how Snyk is a thought leader, but even more importantly, how we're thinking about helping our customers. Thank you for sharing, and thank you, everyone, for attending the podcast today. We will see you next time on the next episode of The Secure Developer.
[0:29:00] Berkay Berabi: Yes. Thank you.
[END OF INTERVIEW]
[0:29:04] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organization. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts, and share the episode with fellow security leaders who might benefit from our discussions.
We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better. Please contact us by connecting with us on LinkedIn under our Snyk account or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.