In our latest episode featuring Jeremy Snyder, Founder & CEO of FireTail.io, we explored the evolving cybersecurity landscape and the crucial role of API security in protecting modern enterprises and IoT devices.
Q&A Summary
Why is API security such a critical concern in today’s cybersecurity landscape?
APIs have become the connective tissue of modern digital infrastructure, yet they often remain misunderstood. Every time you use a mobile app or an IoT device, you're engaging with an API in the background. These interfaces enable systems to talk to each other, which inherently requires openness. But that openness also creates vulnerabilities. APIs are unique because they span several domains: they are built by developers, live on networks, and carry identity and access controls. This multifaceted nature creates a broad and complex attack surface.
At FireTail, we tackle this challenge through end-to-end API security. We intervene at two key stages. First, during the design and development phase, we analyze the code and API specifications using static analysis to ensure that they are secure by design. Then, at the other end of the lifecycle, we monitor API traffic in production, looking for anomalies and malicious behavior. Our platform includes developer-focused libraries for secure coding and a real-time traffic inspection tool for security teams. Together, they create a closed-loop system that addresses both prevention and detection.
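To make the "secure by design" spec inspection concrete, here is a minimal sketch of the kind of check a static analysis pass might run: flagging OpenAPI 3.x operations that declare no authentication requirement. This is an illustration, not FireTail's actual implementation; the example spec and endpoint names are hypothetical.

```python
# Flag OpenAPI operations that end up with no security requirement.
# Follows the OpenAPI 3.x convention that an operation-level "security"
# key overrides the document-level default.

HTTP_METHODS = {"get", "put", "post", "delete", "patch"}

def find_unauthenticated_operations(spec: dict) -> list[str]:
    """Return 'METHOD /path' strings for operations with no auth scheme."""
    global_security = spec.get("security", [])
    findings = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in HTTP_METHODS:
                continue
            effective = op.get("security", global_security)
            if not effective:
                findings.append(f"{method.upper()} {path}")
    return findings

spec = {
    "security": [{"apiKey": []}],
    "paths": {
        "/users/{id}": {"get": {}},                # inherits global auth
        "/debug/dump": {"get": {"security": []}},  # explicitly open
    },
}
print(find_unauthenticated_operations(spec))  # → ['GET /debug/dump']
```

A real scanner would also validate scheme definitions and scopes, but even this narrow check catches the common mistake of an endpoint shipped with authentication accidentally disabled.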
Why don't existing cybersecurity solutions like gateways or anomaly detectors sufficiently protect APIs?
Traditional tools like web application firewalls and gateways were not built with the nuances of APIs in mind. The majority of API breaches happen because of improper authorization, not authentication. In simple terms, just because a system knows who you are doesn’t mean it knows what you should be allowed to do.
Let’s say a user can view and edit their own profile on a social platform. Proper authorization should prevent that user from editing someone else’s profile. Yet we’ve seen time and again that APIs lack these fine-grained controls. Attackers exploit these gaps to exfiltrate or manipulate large datasets. Our breach tracker shows clear evidence that legacy security tools consistently miss these attacks. That's why we built a solution that specifically addresses this critical blind spot.
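The profile example above can be sketched in a few lines. This is a minimal, hypothetical illustration of object-level authorization — the fine-grained check that so many breached APIs were missing — using an in-memory "database" and made-up user names:

```python
# Object-level authorization sketch: authentication tells us WHO the
# caller is; this check decides whether that caller may act on a
# SPECIFIC object. Data and names are illustrative.

PROFILES = {"erik": {"owner": "erik"}, "jeremy": {"owner": "jeremy"}}

def authorize(user: str, action: str, profile_id: str) -> bool:
    profile = PROFILES.get(profile_id)
    if profile is None:
        return False
    if action == "view":
        return True                       # any authenticated user may view
    if action == "edit":
        return profile["owner"] == user   # only the owner may edit
    return False                          # deny anything unrecognized

assert authorize("erik", "edit", "erik")        # edit own profile: allowed
assert authorize("erik", "view", "jeremy")      # view another profile: allowed
assert not authorize("erik", "edit", "jeremy")  # edit another profile: blocked
```

The attacks described here succeed precisely when an API validates the token (authentication) but skips the final ownership comparison (authorization), so any registered user can read or modify any record.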
How does FireTail support both developers and security operations teams?
We built FireTail from the ground up with dual personas in mind: the developer and the security operator. Developers get access to code libraries and tools that help them embed security directly into their APIs at build time. This is where we focus on secure-by-design principles.
Security teams, on the other hand, need visibility into what’s happening in production. We provide them with tools to monitor API traffic, discover unknown APIs, and detect anomalies or misuse. What really differentiates us is the communication bridge we build between these two functions. Often, the biggest challenge is not technical but organizational. Our platform helps unify developer and security teams around a shared view of API behavior and risk.
What broader trends are shaping the cybersecurity landscape today?
Over the last five years, we’ve seen a dramatic transformation driven by cloud adoption, open source proliferation, and automation. The barriers to launching cyberattacks have dropped significantly. It now takes just a few cents and an internet connection to scan massive IP ranges for vulnerabilities. The rise of cloud computing means that anyone can spin up infrastructure in minutes, and hackers use this same convenience to launch sophisticated attacks.
Another critical trend is the emergence of cybercrime as an organized business model. It’s no longer just about breaking into a system for bragging rights. Data has become a currency. From ransomware to credential stuffing to regulatory manipulation, bad actors now operate with clear economic incentives and increasingly professional methods.
How has the threat landscape evolved with the introduction of generative AI?
Generative AI has dramatically lowered the barrier to entry for cyberattacks. Attackers can use large language models to interpret API documentation and generate valid requests within seconds. They can also produce massive volumes of malformed requests to exploit injection flaws or launch SSRF attacks. These aren’t hypothetical scenarios. We’re seeing this happen in the wild.
Phishing has also become more convincing and scalable. With generative AI, attackers no longer need advanced language skills or cultural fluency to craft persuasive messages. We’ve entered an era where cyberattacks are faster, cheaper, and more targeted—a troubling shift in the balance between offense and defense.
Are certain types of attacks becoming more prevalent due to automation and AI?
Yes. While traditional vulnerabilities like open S3 buckets are decreasing due to better cloud defaults, we’re seeing a rise in attacks on systems with minimal security protections. AI can now identify weak configurations and automate sophisticated exploitation techniques. Ransomware, in particular, has become more pervasive, not just in scale but also in complexity and cost.
The shotgun approach is also thriving. With less OpEx required, attackers can now cast a much wider net, launching high-volume campaigns across many targets. Even if only a fraction succeed, the ROI remains positive for attackers. This shift in dynamics is pushing defenders to rethink their entire approach to security.
With these rising threats, how should organizations think about cloud vs. on-premise security?
Cloud infrastructure, when configured properly, is inherently more secure than on-premise systems. Major cloud providers offer robust physical and network security. However, the real shift is in the security model. On-prem environments rely heavily on perimeter defenses—firewalls, intrusion detection systems, etc. In contrast, cloud environments require a focus on software-defined and identity-based security.
In the cloud, everything is programmable. You can apply automated, enforceable security policies across your entire stack. This can make your environment exponentially more secure than traditional setups—but only if you understand and adapt to this new paradigm. Unfortunately, many organizations fail in their first cloud migration because they carry over outdated assumptions about how security should work.
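As one hedged illustration of "everything is programmable," here is a sketch of a policy-as-code check that scans S3-style bucket policy documents for statements granting access to everyone. The policy shape mirrors AWS IAM JSON, but this is a simplified teaching example, not a substitute for a real policy engine:

```python
# Detect "Allow to everyone" statements in a simplified IAM-style
# bucket policy document. Real tooling covers conditions, NotPrincipal,
# ACLs, and more; this checks only the most common misconfiguration.

def is_publicly_readable(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        wildcard_principal = principal == "*" or principal == {"AWS": "*"}
        if wildcard_principal and any(
            a in ("s3:GetObject", "s3:*", "*") for a in actions
        ):
            return True
    return False

public_policy = {
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject"}]
}
private_policy = {
    "Statement": [{"Effect": "Allow",
                   "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                   "Action": "s3:GetObject"}]
}
assert is_publicly_readable(public_policy)
assert not is_publicly_readable(private_policy)
```

Because checks like this are just code, they can run automatically on every deployment across the entire stack — the enforceable, software-defined model that on-prem perimeter tooling can't match.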
What is your outlook for cybersecurity over the next five years?
We’re moving into an era of persistent, intelligent threat actors who operate at scale. As AI continues to evolve, attackers will gain even more capabilities. Defenders will need to lean into automation, intelligence sharing, and software-defined security models. Education and collaboration between development and security functions will be more important than ever.
API security will play a central role in this new landscape. As organizations continue to adopt API-first architectures, the attack surface will only grow. That’s why our mission at FireTail is to give teams the tools to build and maintain secure APIs from design to deployment. The future belongs to those who can secure their digital nervous system—and APIs are right at the heart of it.
How should companies rethink security in the age of edge computing and low-compute devices?
Edge devices present one of the most nuanced security challenges in today’s architecture. Unlike cloud platforms that offer a rich suite of security controls, edge environments are inherently limited in both compute power and software complexity. That means the burden of securing these systems shifts from infrastructure to application design. In many cases, edge services are essentially pieces of software—sometimes embedded in hybrid hardware-software systems like those used in 5G telco infrastructure—but most often, it’s about securing the software stack itself.
Organizations need to start asking: How well-designed is the edge service we're deploying? Are we factoring in threat models specifically around the unique constraints of edge deployment? Many companies, particularly those with legacy security mindsets, underestimate the need for proactive, design-level security at the edge. And that’s a mistake, because the edge is often the entry point for broader system compromise.
Does the nature of your adversaries—whether cybercriminals, competitors, or nation-states—change your security strategy?
For about 90% of organizations, the answer is no. A general, baseline cybersecurity strategy is typically sufficient to defend against common threats. But for the top 10%—those who hold particularly sensitive data or intellectual property—understanding the nature of their adversaries becomes crucial. A company like OpenAI, which might not have the operational scale of an Amazon or the volume of health records of a hospital, still represents an extremely valuable target due to its unique IP. These are the organizations that need to model for advanced persistent threats, including state-backed actors.
A useful framework here is to think in terms of attacker maturity. On one end, you have "Joey," the teenage hobbyist probing systems for fun. Then “Joe,” a junior analyst with access to some tooling. The real concern starts with "Joseph," the mid-career nation-state actor, and peaks with "Yosef," the elite, highly resourced hacker supported by state or organized criminal infrastructure. Most organizations should be resilient against Joey and Joe. But Joseph and Yosef require not just technical defenses, but strategic decisions about internal capacity versus outsourcing. The rise of MSSPs (Managed Security Services Providers) and MDRs (Managed Detection and Response) has made it more feasible for even midsize companies to protect themselves at this level—if they have the budget and awareness to do so.
With AI and IoT becoming ubiquitous, how are the threat surfaces changing—and how should companies prepare?
We’re facing a tidal wave of API proliferation. AI adoption is skyrocketing, and for most companies, the practical way to engage with AI is via APIs from major providers like OpenAI, Amazon, or Microsoft. Similarly, in IoT—from connected cars to smart doorbells—communication happens over APIs. The result is that API security has become the backbone of organizational security, whether the end use is a chatbot or a self-driving vehicle.
But here’s the problem: most companies don't even have visibility into their API landscape. We spoke with one major automotive company building autonomous vehicles. When we asked them how many APIs the car was interacting with—both sending and receiving—they laughed. They had no idea. And that’s not uncommon. Development teams are under pressure to ship, while security teams are often left out of the loop. That misalignment creates blind spots, and blind spots are where attackers thrive.
The first step, always, is visibility: understanding what APIs exist, who built them, what they do, and what data they expose. Only then can you assess risks, apply proper authentication, and—in some cases—redesign insecure endpoints entirely.
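The visibility step can start from data most teams already have. The sketch below derives a rough API inventory from raw access-log lines; the log format and endpoint paths are hypothetical, and real discovery tooling would also normalize path parameters (so `/users/42` and `/users/43` collapse into one endpoint):

```python
# Derive a set of distinct "METHOD /path" endpoints from access-log
# lines in a common-log-style format. Illustrative only.

import re

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP')

def discover_endpoints(log_lines: list[str]) -> set[str]:
    endpoints = set()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            endpoints.add(f"{m.group('method')} {m.group('path')}")
    return endpoints

logs = [
    '10.0.0.1 - - "GET /api/v1/vehicles HTTP/1.1" 200',
    '10.0.0.2 - - "POST /api/v1/telemetry HTTP/1.1" 201',
    '10.0.0.1 - - "GET /api/v1/vehicles HTTP/1.1" 200',
]
print(sorted(discover_endpoints(logs)))
# → ['GET /api/v1/vehicles', 'POST /api/v1/telemetry']
```

Even a crude inventory like this surfaces the "we had no idea" gap described above: endpoints nobody on the security team knew existed tend to show up in the first pass.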
Are there specific, emerging threat vectors related to AI integrations that security leaders should be aware of?
Yes—two in particular are worth highlighting. First is data poisoning. As companies feed training data into AI models, those data ingestion pipelines become attack surfaces. If an attacker identifies an exposed endpoint accepting training data, they can start feeding in poisoned data to corrupt the model. It might make the model subtly inaccurate, or overtly dangerous. And these endpoints are often discoverable through simple scanning.
The second is unintended API exposure. It’s common for APIs to be built for one purpose—say, accepting incoming data—but to unintentionally expose additional functionality like data extraction. If those APIs don’t have proper access controls or permissions configured, they become ripe for exploitation. Both of these risks—data poisoning and over-permissive APIs—are amplified by the speed at which AI and IoT applications are being built and deployed.
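One basic mitigation for the data-poisoning risk is strict validation at the ingestion endpoint before anything reaches the model. The following sketch checks each submitted training sample against an expected schema and plausible value ranges; the field names and bounds are hypothetical:

```python
# Reject training samples with unexpected fields, wrong types, or
# implausible values before ingestion. Schema and bounds are made up
# for illustration; real pipelines would add provenance and rate checks.

EXPECTED_FIELDS = {"temperature": (-40.0, 85.0), "humidity": (0.0, 100.0)}

def accept_sample(sample: dict) -> bool:
    if set(sample) != set(EXPECTED_FIELDS):
        return False                      # unexpected or missing fields
    for field, (lo, hi) in EXPECTED_FIELDS.items():
        value = sample[field]
        if not isinstance(value, (int, float)) or not lo <= value <= hi:
            return False                  # wrong type or out-of-range value
    return True

assert accept_sample({"temperature": 21.5, "humidity": 40.0})
assert not accept_sample({"temperature": 9999, "humidity": 40.0})   # poisoned
assert not accept_sample({"temperature": 21.5, "humidity": 40.0, "cmd": "x"})
```

Validation like this doesn't stop a determined attacker feeding in subtly skewed but in-range data, but it closes the cheap, scannable attack path of dumping obviously corrupt records into an exposed endpoint.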
You mentioned that cybersecurity isn't something companies can put off. Why is now different from, say, robotics or other frontier tech investments?
Unlike robotics or automation, where a company can afford to wait and still remain competitive, cybersecurity is a domain where standing still is equivalent to falling behind. That’s because your adversaries are constantly evolving. Criminal organizations, competitors, and nation-states are all investing in next-gen capabilities. You don’t get to choose whether you’re part of this arms race—you already are.
Unfortunately, many companies still treat cybersecurity as a "tax"—something they pay reluctantly and try to minimize. But that’s the wrong mindset. Taxes are things people try to avoid. If you try to avoid paying your cybersecurity "tax," you may get away with it for a while, but eventually someone will find the gap and exploit it. With the ubiquity of cloud scanning tools, automation, open-source recon kits, and now AI-enhanced attack platforms, the cost of neglect is exponentially increasing.
How should companies approach the challenge of legal liability, especially with regulations around security tightening?
This is perhaps one of the most anxiety-inducing aspects of modern cybersecurity leadership. Regulations like the U.S. SEC’s breach disclosure requirements and CISA’s “secure by design” initiative have begun shifting expectations—but they remain frustratingly vague. What constitutes a “material breach”? Is the clock ticking from the moment of the breach, or from when you discover it? There’s no case law to clarify these points yet, so the legal boundaries are murky.
The result is growing fear and confusion, especially among security leaders at publicly traded companies. Some worry this might become a Sarbanes-Oxley moment for cybersecurity, where executives are expected to sign off on security postures just like they do on financial statements. Until the courts set precedents, organizations are in limbo—and that’s a dangerous place to be when liability, both corporate and personal, is potentially on the line.
In industries like autonomous vehicles, who actually owns the risk? Is it the OEM, the supplier, the fleet owner, or the driver?
Liability in multi-tiered supply chains is a legal maze. In connected vehicles, for instance, the Tier 1 supplier might build the insecure API, the OEM might integrate it, the fleet manager might operate it, and the driver might trigger the failure. In the U.S., regulatory guidance has not yet defined clear lines of responsibility. While the CISA mandates advocate for secure design principles, they stop short of specifying who is accountable when those principles aren’t followed.
In practice, that ambiguity leads to a lot of finger pointing—and risk-averse behaviors. In China, for example, some OEMs have built Level 3 automation capabilities but refuse to activate them, labeling the system as “L2.5” in order to avoid taking legal responsibility for crashes. Until we have clearer legal precedent and regulatory enforcement, companies will continue operating in this grey zone, balancing innovation against risk aversion.
Looking forward, what mindset shifts should organizations embrace to secure their digital future?
The most important shift is from reactive to proactive. Security should not be an afterthought layered on top of development cycles. It must be embedded from the beginning—especially in API design and data governance. Visibility, again, is the cornerstone. Without knowing what’s in your stack, you can’t defend it.
Second, recognize your limits. Not every organization needs to build elite, in-house security teams. Many would be better served by outsourcing to qualified MDR or MSSP providers. The key is not to build everything yourself, but to ensure you’ve assessed your risks and have the right defenses in place—whether internal or external.
Finally, stop thinking of cybersecurity as a cost center. It’s a strategic investment. In a world of escalating digital threats, it’s also a prerequisite for business continuity.
Transcript
Erik: Jeremy, you're very experienced. So you're running the Modern Cyber Podcast as well, with a focus on CISOs. Tell me a little bit about that podcast.
Jeremy: We do focus on CISOs, but we actually talk to a lot of practitioners as well. We try to cover a very broad range of cybersecurity topics. And that's kind of by design. Cybersecurity is a super broad topic. Everything from antivirus, to phishing, to spam detection and prevention and making sure you eliminate bad links, up to things like identity in the cloud and API security and security for AI. So there's so much happening in the space of cybersecurity that we really try to talk to a very broad range of people, get a little bit of everything — a little bit of the C-level, a little bit of the day-to-day practitioner, a little bit of the on-premise, a little bit of the cloud and, really, something for everybody who's interested in the space of cybersecurity. We've been running it for coming up on a year. It's been a ton of fun. I think, actually, one of the things I would say out of it, I've learned a ton. So whether our audience learns as much is hard for me to say. But I've learned a ton, and I've really enjoyed the experience.
Erik: Yeah, that's always how I think about my podcast as well. For me, this is my ongoing education. And as long as I get something out of each hour, at least I know that it's made one customer happy and hopefully some other listeners as well. So you are in a boom industry, right, protecting the world from cybercrime? And really a topic where, 20 years ago, maybe 10% of companies would be thinking really seriously about this, and the IT department was kind of doing it. And now, because IoT is bringing connectivity to the edge, and because AI is maybe influencing the cost structure of cybercrime, everybody has to take this quite seriously across the organization. I want to get into the topic of what has changed, and then maybe also what will change in the next 5 to 10 years. But before we go into that, tell us a bit about what you're doing at FireTail. So you're doing end-to-end API security. I think maybe the critical question is also, why focus on the API? What are you guys doing? What's your proposition?
Jeremy: Yeah, I mean, end-to-end API security is what it's all about. But what does that actually mean? Look. APIs are this kind of mythical thing that a lot of people hear about but they don't really understand what they do. The simplest way to think about it is that APIs are the way that two pieces of software or two systems talk to each other. Every time you pull up your mobile device and you use any app on there, it's actually talking to another service at the back end over an API. Every time there are two IoT systems talking to each other, they're using an API to communicate with each other. So they're this kind of connective tissue of the modern internet that really ties all of these services together. So that's the starting point.
But what does that mean from a security perspective? Like, what are the risks and what does end to end mean? Well, the risks are that APIs get abused like any other system that is kind of designed to be open. APIs have to be open in order to communicate with other systems, right? So that's kind of an inherent entry point that attackers and threat actors look at. When we started analyzing APIs in more detail, we understood that they're kind of this funny object, right? They're part of an application, so they're written in code. So developers build them. But then they live on a network, and they're sending network traffic out. But they also have identity because you have to authenticate in order to use an API and then authorize yourself to use it and so on. So they've got all these constructs around it.
So what we looked at from the FireTail perspective is, we said, okay, well, if that's what APIs are, where on that journey, where in the lifecycle of an API can we help companies make them more secure? What we figured out from our perspective is that there is a security inspection that can be done all the way at the design and development phase that uses techniques like static code analysis and spec inspection and things like that. I know some of those terms might be a little bit techy for some people listening, but you could think about it like this. You can look at an API as it's being built and make sure that the design of the API is secure by design. So that's one aspect of what we do. But then you can swing all the way over to the other side and just look at what's happening with the API while it's out in the real world in production. That's kind of traffic monitoring and analysis. We do that as well. We look for anomalies. We can also look for specific malicious traffic patterns in the utilization.
And so that's what we're doing at FireTail. It's all software-driven. It is a software platform that combines elements of AI in our own utilization, in our own data analysis. But it also does some things where we have code libraries that can help developers build and launch secure APIs. So that's what we're doing. We've been at it for a couple of years now. We're working with a number of companies around the world in spaces like IoT, financial services. Honestly, a lot of modern software is being written with APIs, first and foremost. And so we're working with companies like that as well. And yeah, that's a little bit about us.
Erik: So that must mean that your customers are developers, that your customers are not the operators or the users of the technology.
Jeremy: So we really have two parts of the product. One is designed for the developer, and one is designed for the operator in the info security team. I mentioned kind of the monitoring what's happening in the real world in production. That is very much designed for a security team. On that side, when organizations come to us and they say, "Hey, we're looking to tackle our API security," that is one of the first questions. We figure out what's your role within the organization, kind of which side of the house do you sit on. If you're on the security side, we're going to talk to you about the traffic analysis, the API discovery functions, et cetera. If you're a developer, we're going to talk to you about the code libraries and secure by design and how we can help you down that journey. By the way, we try in the software to really enable communication between those two teams, which often is actually an organizational challenge more than it is a technical challenge. It's getting those two groups to talk to each other and to understand what the other side is seeing.
Erik: Interesting. Then from the operations side, I guess they'll look at it and say, "Well, I'm already deploying three cybersecurity solutions. I've got my gateways covered, and I'm using anomaly detection. Why do I need a fourth?" So why don't those existing systems cover APIs effectively?
Jeremy: Yeah, that is because the nature of the API threat is fundamentally different. We've actually done research on this for the last, well, since we've come into existence as a company, two and a half years ago or whatever it is at this point. We've actually been tracking all the API breaches over time. You can find that research on our website. If you just look in the footer of our website, you'll find an API data breach tracker. You'll find our state of API security report. When we analyzed all of the breach events, one by one, you can look through them and you can see, oh, okay, well, this is how the threat actor abused this API. Yep, and here's why anomaly detection didn't catch it. And here's why the gateway didn't catch it. And here's why the web application firewall didn't catch it. It really comes down to the fact that the number one cause of API abuse is lack of proper authorization controls.
Authorization is not the same as authentication. Authentication means I established that you are Erik. Authorization means that I know what data Erik has access to and what Erik can do with that data. And what you can do is often also as important as the data you have access to. So think about it like this. Maybe there is a social media app, and Erik can view Erik's profile. Erik can edit Erik's profile. Erik can view Jeremy's profile, but Erik cannot edit Jeremy's profile. What we see in the case of APIs again and again is that that level, that fine-grained permissions checking, is not there. And so it allows bad actors to actually register themselves on these platforms, get access to the API, and then extract and manipulate large quantities of data by exploiting that permissions gap.
Erik: Got you. Okay. Great. Yeah, I just wanted to cover those points and make sure I understood the business properly. But you were just getting into what has changed over, let's say, the past five years in terms of enterprise connectivity.
Jeremy: Well, past five years has been a really interesting time, right? Because the past five years, we're getting into right before COVID and then kind of going through the COVID transformation. I'll back up a little bit first, and I'll talk about kind of what I've observed in my 27-ish years or whatever it is at this point. Look. When I started, to the point you raised earlier, 10% of companies were worried about cybersecurity. Everybody else, security was just part of what the IT team did, right? That included myself in the companies that I was at at the time. Well, what's changed kind of at a macro level over time is that cloud has come into being. Open Source has grown like crazy, and automation has grown like crazy. What that means is that instead of 20 years ago where only those top 10% of organizations would have been targeted, nowadays, everybody is a target. It costs pennies to set up a virtual machine on a cloud platform, load some open source penetration testing and scanning software and then just start scanning the internet or scanning any particular IP range. And so now everybody is a target. And I always tell people, hackers have automation too. Hackers have credit cards, or they have stolen money, and they have access to the cloud. They can set this stuff up. And we see this in our own labs. Anything we put online is scanned within about five minutes.
So you asked me about like kind of what's changed five years to now. Five years ago, I was working on cloud security and we saw the same thing. But maybe the delay time was like 10 to 15 minutes. Nowadays, anything new we put online, it's within five minutes, it's getting scanned. Not just a one-time scan from a bot. Now it's much more intelligent scanning techniques that we're seeing. It's a, oh, okay, something lives here. Here's some follow-up calls to try to discover what lives here. Well, what third-party libraries are you using? Let me send specific traffic patterns to see if you're running WordPress, or Drupal, or whatever, and figure out what version of it is and see what common exploits we know about against that software stack. So that is something that has really gotten much more sophisticated and much more real time in the last five years. So that's something I would say to any organization out there.
The other thing that I would say is, we've seen the rise of kind of the, frankly, the cyber crime business models. So for things like extracting data, extorting ransom, exposing, dumping data, and even filing SEC reports against organizations that have been breached to create regulatory compliance issues for them, right? So that has really accelerated in the past five years, this kind of ecosystem. Criminal ecosystem, if you will. The other thing that I would say is, data has become more valuable. So many organizations have gone through a — whether it was pandemic-fueled or not, but they've gone through a digital transformation. And they've become software/digital companies, digital/data companies. So they're collecting more data than ever before. That data can now be used in many more places: things like identity theft, things like credential stuffing, things like dark market marketplaces where this data gets sold off to whoever wants to pay for it. All of that has really accelerated in the last five years.
Erik: Okay. Let's dive into a few of these points in more detail. Maybe a good starting point would be AI and automation. I mean, we've already had significant automation by cybercriminals, going back five years, probably longer. In the past couple of years, we've had generative AI, which, for phishing in particular, maybe opens up a new window of opportunity for very targeted phishing campaigns.
Jeremy: Yep.
Erik: To what extent does that also extend? I mean, maybe it's automating coding and programming and so forth. But what impact does that have on other types of cyber crime?
Jeremy: Well, you certainly hit one of the nails on the head, which is automated programming, right? So nowadays, you can think about — I'll use the example of APIs because that's what we deal with, and that's what I'm most familiar with. An API, typically, will have some documentation around it that describes it. It'll say, "Hey, there is this API. Here's how you use it, et cetera." You can take that, give it to an LLM, and tell it to generate valid or properly-formatted API requests, and you can have those in a matter of seconds. Then you can tell it to generate millions of requests for that API. You can tell it to use a dictionary of improperly-formatted requests and send those instead in order to try an injection attack, or cross-site scripting, or something called server-side request forgery (SSRF). All of these things, they're doable in a matter of minutes nowadays because of the rise and frankly how good generative AI is at writing code. All of the LLMs are trained on large-scale, open source code bases. So you can just scan GitHub, for instance, and use the open source code that's there as training data for your LLM. Now your LLM knows what good code looks like. And it can generate it so much faster than you or I can and, really, without having to learn, or study, or read the documentation because it can do that for you in a matter of seconds. So that's one thing that has really, really accelerated.
Erik: Okay. So basically, in the past, cyber criminals had to put in some effort. They had an OpEx, right? They had to actually put some cost into this, and that meant they had to be quite targeted in terms of how they spent their time. And now they can just take a shotgun approach. What does that mean if you look at the proportion, maybe the ratios of where those efforts are going into? Are you seeing certain types of attacks explode because AI makes them significantly more viable, whereas other ones maybe, for whatever reason, there's still some kind of OpEx bottleneck that makes them less viable at scale?
Jeremy: Well, definitely, phishing attacks are on the rise. Maybe on the rise is not quite the right way to think about it, but they've not gone down any. And if you think about kind of just the spray and pray approach, as you mentioned, this kind of shotgun approach, just the raw numbers are going up around them, right? So more and more organizations are getting phishing emails. As opposed to, again, in the past, where the OpEx was higher, now it is much, much lower. And the quality of the phishing emails is going up a lot, right? So that's one area where I would say it's gotten worse, if not just maintained.
The other types of attacks that we see kind of going up right now are kind of novel attacks. What I mean by that is, in the past, let's say, in the last 10 years, there was a lot of just kind of automation-driven scanning, looking for low hanging fruit, which was usually in the form of things that were just kind of accidentally publicly exposed. I don't know if you tracked much. But let's say, like, 9, 10 years ago, there was a very common problem of the open S3 bucket on Amazon Web Services. Basically, think of it as a file folder in the cloud. A lot of times, companies would either intentionally or unintentionally leave them wide open. And so anybody who happened to discover them could just go and pick up that data. Very low effort, other than scanning. So the scanning was the OpEx. The scanning was the effort to find these things.
Nowadays, what we see is that that has actually decreased, because the cloud providers have gotten better at applying secure default controls to those infrastructure resources. But on the other hand, things that have minimal security are getting attacked more frequently, because it's very easy to tell an AI, "Oh, well, if you find a thing of type X, please try Y," and it might just unlock the door for you. So things with a simple single factor, or things where just a simple password gains access to a set of files, those attacks are going up as well. Then, of course, ransomware — the scale of ransomware attacks has gotten, I would say, both much more sophisticated and much more costly.
Erik: Yeah, I'm curious about your perspective on this cloud question. You know, I'm here in Shanghai, as I mentioned earlier. And in China, people like on-premise. They just don't trust moving their data to the cloud. Then you look at that and you say, well, I would assume that Microsoft is going to be much better at securing your data than your IT team of seven people. I guess moving the data to the cloud out of the factory, okay, maybe there's a vulnerability in the movement there. But if you look at that situation, when does it make sense to migrate data to the cloud, and when does it make sense to keep it on-prem? Then I guess you can look at that from an on-prem server situation, and then you have your whole IoT device, which is a bit of a separate thing that has to be kind of on the ground. But how do you analyze those different scenarios?
Jeremy: I've been a big believer in the cloud. As somebody who, full disclosure, worked at one of the big three cloud providers for a time and has spent a lot of time in the cloud ecosystem, I kind of have the same opinion. Which is that, generally speaking, the cloud providers are going to be much better at physical data center security than you are going to be. And they're going to be better at default network security than you're going to be on your own.
The difference, though, is that when it comes to moving to the cloud, you actually have to think about the security model differently. If you think about on-prem, you're going to have kind of two layers of security. You're going to have your physical security: who has access to the server room, who has access to the data center? You can think about access cards and biometrics and key codes and all kinds of stuff that you can put on that side of it. But then you're left with the network. What I see time and again from companies that stay on-prem is that the network is where they tend to apply all of their security efforts. They're just like, "Hey, I own this data center. As long as I keep bad actors out using firewalls, intrusion detection systems, what have you, I'm secure." That is how most organizations operate on-prem. Don't get me wrong. There are some that are more sophisticated, and they'll actually do things like apply real zero-trust models and have good user identity security and whatnot. But they're in the minority.
What's different, though, when you shift to the cloud is that that concept of the network perimeter kind of goes away. There's an example that I like to use for people. I know it's going a little bit deep. But on a cloud provider, I can actually take a backup of one single server, and with one little configuration change, I can make that backup publicly available. To do that on-prem, it's like five layers deep in the network topology. I'd have to open up hole after hole after hole through firewalls and network access control lists in order to gain access to it. My point in giving that example is to illustrate that the concept of network perimeter security is either gone or just very different. But what is there is a whole set of software controls and logical controls that can make your cloud environment 10 times more secure than your on-prem.
The problem is, it's apples and oranges, and most organizations don't understand that. So when they make the shift to the cloud, if they only understand the apples model of working, they often get it wrong in their first pass at migrating to the cloud. Where cloud actually makes things much better is that everything is software defined. What that means is that everything can be queried, and everything can be automated. That also applies to your security controls. You can write scripts that automatically apply baseline security to everything in your cloud environment, or tweak and tune advanced security settings across your entire environment. Then you can write software-defined and software-enforced policies that prevent you from stepping out of line and accidentally making mistakes, and so on. There are any number of tool providers that can make that very easy for you and very easy for any human to understand and digest, including, by the way, multiple open source options that make it free. I stand on the side of: cloud can be much, much more secure, but you need to understand the differences in the security models. And that is where the effort is required that is often missed.
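As a toy illustration of that software-defined idea, here is a minimal sketch of "policy as code": resource configurations checked against a security baseline. The field names are invented for illustration, not any cloud provider's real API.

```python
# Toy "policy as code" check: scan hypothetical resource configs against a
# security baseline. Field names are illustrative, not a real provider's API.

BASELINE = {
    "public_access": False,       # nothing world-readable by default
    "encryption_at_rest": True,   # storage must be encrypted
    "logging_enabled": True,      # audit logging must be on
}

def check_resource(resource: dict) -> list:
    """Return a list of baseline violations for one resource config."""
    return [
        f"{resource['name']}: {setting} should be {required}"
        for setting, required in BASELINE.items()
        if resource.get(setting) != required
    ]

def scan(resources: list) -> list:
    """Collect violations across the whole inventory."""
    return [v for r in resources for v in check_resource(r)]

# Example: one server backup accidentally made public, one compliant bucket.
inventory = [
    {"name": "backup-1", "public_access": True,
     "encryption_at_rest": True, "logging_enabled": True},
    {"name": "data-2", "public_access": False,
     "encryption_at_rest": True, "logging_enabled": True},
]
findings = scan(inventory)  # flags only backup-1
```

Because the whole environment is queryable, a check like this can run continuously and even remediate automatically, which has no real on-prem equivalent.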
Erik: Got you.
(break)
If you're still listening, then you must take technology seriously. I'd like to introduce you to our case study database. Think of it as a roadmap that can help you understand which use cases and technologies are being deployed in the world today. We have catalogued more than 9,000 case studies and are adding 500 per month with detailed tagging by industry and function. Our goal is to help you make better investment decisions that are backed by data. Check it out at the link below, and please share your thoughts. We'd love to hear how we can improve it. Thank you. And back to the show.
(interview)
Erik: And then you have this other element which is edge devices, right, where the data might be migrated to the cloud. It might be migrated to a server, but it's also, to some extent, going to sit on a device somewhere in the world that has relatively little compute. So how do you think about the alternatives for how to secure those devices?
Jeremy: Look. Edge is actually one of the most challenging things, in the sense that, usually, it's not a full-blown cloud platform with all of the software controls that we just talked about. And to your point, it's also usually a thing that kind of needs to be somewhat open in order to be useful. What that means is that the burden of security shifts to the design of the service that you're running at the edge. Usually, an edge service is a piece of software. There may be some hybrid hardware-software solutions for specific use cases like, let's say, 5G telecommunications, where there's some specialized 5G hardware in play. But 9 times out of 10, what you do have control over is the software stack. And so what you really have to look at from an edge perspective is: how secure is the design of the software service that I'm running in the edge location? That's, again, something that a lot of organizations with more traditional mindsets around security aren't going to fully understand or appreciate. What are the subtleties? What are the specific risks around the design of the software stack there?
Erik: Got you. So let's look at this from a different perspective, which is the perspective of who your adversary is. My impulse is that there are three types, right? There's the cyber criminal who's trying to make money. There's the enterprise that's trying to steal some kind of IP or know-how. And then there's the state, which might be trying to steal IP, or, if it's North Korea, might be trying to make money or might have some other strategic objective. Does it make sense for an organization to put much thought into trying to anticipate who its potential adversaries are, and then to take different defensive stances against them? Or do you basically just take a similar set of strategies and assume that's going to be the best fit across whatever type might be looking in your direction?
Jeremy: Yeah, it's a great question. I'm going to give you two answers. One is that we talked a little while ago about how, 20 years ago, only 10% of organizations needed to worry about cybersecurity. I would say this is a case where maybe 10% of organizations need to worry about, let's say, enterprise espionage or nation-state attacks. Those 10% are really the top 10%. That doesn't necessarily mean the top 10% in terms of size, but probably the top 10% in terms of the value of the sensitive data that they hold. That could be intellectual property. Certainly, an organization like OpenAI would fall into that top 10%. They don't have the scale of, let's say, an Amazon.com or the valuable data of a healthcare provider. But they've got this core piece of intellectual property that nation states all around the world would love to get their hands on, right? So if you find yourself falling into that top 10%, where you've either got super valuable data, or super valuable intellectual property, or just the scale of holding mass amounts of records, that's where you maybe need to think about protecting yourself from an enterprise attack or from a nation-state attack.
The other one is — this is an analogy I'm going to borrow from a good friend of mine, Anthony Johnson, who actually shared it with me on the Modern Cyber podcast. So a little bit of a plug there. If you want the full detail in Anthony's telling, which is going to be better than mine, check out that episode. He talks about four personas. So you have Joey, Joe, Joseph, and Yosef, right? They are four versions of the same name. So who's Joey? Joey is the classic 13-year-old kid hanging out, playing video games in his mom's basement. He gets access to some open source scripting stuff. He's just poking around for fun. Then there's Joe, who is the early-20s version of Joey. He's an early-career professional working at an organization. He has access now to a little bit more sophisticated tooling. He starts to have a basic understanding of how organizations work, and he knows a little bit more about the specifics of how to attack organizations. Then you have Joseph. Joseph is a 30-something or 40-something mid-career professional with 15 to 20 years' experience, working for a nation state. Then you have Yosef, who is the classic top-end hacker, employed by a nation state or the criminal underground.
As an organization, if you're thinking about defending yourself, you'd better be ready for Joey, right? There's no excuse to get popped by a Joey anymore. And in fact, even against a Joe, who's got a basic understanding: if you're not covering your basics, you're probably an irresponsible organization. Where it gets more nuanced is in those top two personas, the Joseph and the Yosef. If it's the Joseph, who's mid-career, not super, super skilled but one degree more sophisticated than Joe, then you need to ask yourself: what type of organization are we? How serious do we need to be? And by the way, are we capable of running it ourselves, or are we better off outsourcing this to a third-party provider? This is actually one really positive development on the cybersecurity side: there has been a real rise in what are called managed security services providers (MSSPs) or managed detection and response (MDR) providers. Many of them are very good at what they do. And so if you're worried about Joseph, you have to ask yourself, "Do we feel like we're covered, or do we feel like we should outsource it?" And if you're worried about Yosef, maybe you're never going to be able to defend against Yosef. Or maybe you're in that top 10% where you really need to worry about it. And in that case, you'd better have a very healthy budget and a very healthy, well-managed team.
Erik: Got you. Okay. So that Joseph one would potentially be like a hospital, right? I think a lot of hospitals, probably 10 years ago, weren't too worried about it. They had their IT team, who's competent. But now people look at them and see a nice, fat target. Maybe they're thinking, I could probably get $10 million out of this hospital if I just shut down some of their critical services. Then I guess at the top level, it's more like risk mitigation. If they really want to get in, they can probably get in. But it's kind of like, how quickly can you discover them? Can you limit access to certain data and so forth?
Jeremy: The Yosefs of the world are the ones who are funded and have resources behind them. They have time, and they have specific targets, right? These are the ones who are really going after the top, top targets. They're going to go after them again and again and again. And so, to your point, if they really, really want to get in, and they're dedicated and have the funding and the resources to back them, they're going to find a way in. You look at all of the very, very sophisticated attacks, and you can see that they have resources behind them.
Erik: Let's then turn to the topic of what's coming in the future. Because I think this is a bit of a unique technology domain where most technologies, if you look at robotics or whatever, a company can look at it and say, you know, maybe we should invest in robotics. We could get an operational efficiency. But you know what? If we don't invest right now, if we invest in 5 years or 10 years, we're probably not going to go bankrupt, right? We might lose a bit of competitiveness. But it's a decision you can make. But in this area, you have a set of companies that are going to be investing in the next generation, right? The cyber criminals will be investing. And so your hand is kind of forced. At least, you know that the environment is going to change around you. And so, if we look forward, maybe again 5, 10 years into the future, what are the big changes in terms of the environment that you're expecting to see?
Jeremy: Well, I think one of them is exactly what you said: going forward for the next five years, kind of everybody is forced to make some investment in cybersecurity. But I will say again, for many organizations, the investment they should make isn't going to be buying a lot of tools or hiring a lot of people. It's actually going to be outsourcing the problem. Being realistic about that and understanding your organization will let you make that decision in a rational manner and make the right one.
I will say one thing on the mindset of organizations as they think about making that decision. I very often hear about the so-called cybersecurity tax. I really hate that term, and I'll share with you why. Taxes are things that you have to pay but you hate. And you very often feel like your best approach is to try to minimize the amount that you pay. We've seen organizations again and again use "creative accounting" to get around paying the right amount of tax. They play all these games, and they move things offshore. They look for tax shelters and havens and all of these types of things. You do it because you have to, but you're always trying to cheat the game. That's not going to be a great approach in cybersecurity. Eventually, given the nature of the tools that the adversaries have in terms of cloud and scanning and automation and open source and AI, that will catch up to you. So please don't think of cybersecurity as a tax. All right. So those are just a couple of things. Your hand is going to be forced. You need to do something. The right thing might be to outsource, and please don't think of it as a tax.
But now let's think about the specific threat domains that are coming. Look. I think there are a couple of things coming. AI is coming, and it is coming hard and fast, I'd say way faster than most people expected. AI is actually an interesting decision point for a lot of organizations, because now they are faced with a decision similar to one they might have faced over the last five or ten years: "Hey, are we going to stay on-prem, or are we going to move to the cloud?" Now they have to think about that same thing with AI. "Are we going to embrace it? Are we going to embrace it now? If we do it, are we going to try to do it in-house, or are we going to work with a third-party provider?" They're going to have to go through that set of decisions again.
I think the right decision for most organizations — there are very few organizations that have the scale, the volume of data, the infrastructure resources, and the sophistication to run this in-house. And so the right decision for most organizations is: you have to do it, but you're going to want to do it through a third-party provider. You're going to want to do it by partnering with an OpenAI, or with an Amazon, or with a Google, or a Microsoft, or whoever it is that you end up deciding to work with: Anthropic, Claude, Cohere, whatever. When you go through that decision, you have to figure out how you're going to collaborate with that provider. And 9 times out of 10, it's going to be APIs. It's going to be writing code on your side that communicates with that AI provider, and figuring out how you send relevant data to them, how you structure your queries, and how you get relevant data back and then utilize it. So that's going to open up a wave of API utilization, both creation and consumption. That is something that I'm really, really concerned about, because most organizations don't understand what the risks are.
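That integration pattern, structuring a query, sending data out, and validating what comes back, can be sketched minimally. This is an illustrative shape only: the payload fields ("prompt", "max_tokens", "completion") are invented, and a real provider's SDK defines its own schema and transport.

```python
# Illustrative sketch of integrating with a third-party AI provider over an
# API. Payload field names are invented; real providers define their own.
import json

def build_query(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a request payload; send only the fields you intend to share."""
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens})

def parse_response(raw: str) -> str:
    """Validate the response shape before trusting it downstream."""
    data = json.loads(raw)
    if "completion" not in data:
        raise ValueError("unexpected response shape")
    return data["completion"]

# Example round trip with a faked provider reply (no network involved).
request = build_query("Summarize last week's error logs.")
reply = parse_response('{"completion": "3 recurring errors found."}')
```

Even in a sketch this small, both directions matter for security: deciding what data leaves your side, and validating what comes back before it flows into your systems.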
Similarly, you brought up robotics. While specific robots may not be the right solution for many things, any organization that has anything to do with any kind of hardware device (IoT, the Internet of Things, is the way I think about it) is in the same position. That thing could be anything from a connected car, to a home security camera, to the doorbell at my front door that shows who's there. All of those things are also talking over APIs. So we just see this flood of APIs coming over the next several years. The risks around them are poorly understood, and most organizations aren't looking at them yet.
I think there are real, tangible, near-term risks and threats to the success of these programs and their implementations that people should be aware of. Number one, on the AI side: one of the main things you have to do is build up models and continue to feed them. There is an attack vector called data poisoning. What that means is, if I find an open access point on your network where you are ingesting data for the purposes of training an AI model, and I find that I can feed that data, then, first of all, it was very easy for me to find that access point through scanning. Second, it's usually very easy for me to understand the structure of what the API expects and what kind of data I can send there. I can send all kinds of data that will completely throw off your models and make your AI efforts way less successful. Now, that takes time and effort. And if you're monitoring for it, great. But if you're not, that's a real risk. The second risk is that, time and again, we see organizations create APIs designed for one purpose, maybe receiving data, that accidentally expose other functions, like extracting data as well. Typically, when we see that, we also see that they don't have good protections or permission systems on them. So that's another very tangible threat that I'm worried about for organizations over the next few years.
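That missing permission system usually comes down to an object-level authorization check: authentication tells the API who the caller is, but the API must still decide what that caller may touch. A minimal, purely illustrative sketch (all user and record names invented):

```python
# Illustrative object-level authorization check. Authentication identifies
# the caller; this function decides what that caller may actually access.

RECORDS = {
    "rec-1": {"owner": "alice", "data": "telemetry batch A"},
    "rec-2": {"owner": "bob", "data": "telemetry batch B"},
}

def handle_read(user: str, record_id: str) -> str:
    """Return record data only if the authenticated user owns the record."""
    record = RECORDS.get(record_id)
    if record is None:
        return "404 not found"
    # The often-missing step: verify the caller owns the object, rather than
    # returning any record whose ID the caller happens to guess.
    if record["owner"] != user:
        return "403 forbidden"
    return record["data"]
```

Without the ownership check, any authenticated caller could enumerate record IDs and extract the whole dataset, which is exactly the "receive-only API that accidentally exposes extraction" pattern described above.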
Erik: Okay. I like the way you frame this. I mean, it's very simple. You've got AI exploding in terms of usage, and you also have IoT exploding in terms of the number of connected devices. You just walked us through some of the steps you can take to address AI. Is it a similar set of steps or activities that you would take to defend yourself if you're planning large IoT deployments, or are you going to need different defensive mechanisms there?
Jeremy: No, it's basically the same. And by the way, it always starts with visibility: understanding what APIs you have and what APIs you're consuming. Many organizations actually don't have that. We recently talked to an automobile manufacturer who will remain nameless. They're working on a software-driven car, basically self-driving vehicle technology, like many of the manufacturers are. In that conversation, we asked them to describe all of the communications that the vehicle has with the back-end cloud services they're using to feed it, train it, monitor it, et cetera. We just asked a simple question: "How many APIs are running on either side? Server side, how many APIs is the car talking to? Car side, how many APIs does the car actually have?" The car actually has a kind of server infrastructure of its own that is receiving data from the cloud as well. And they laughed at me. I said, what's funny? They're like, "We have no idea, just no idea, how many APIs are running on either side." And I said, well, why? Then we got into it, and it's the same pattern that we see again and again.
You have software developers who are told to make something happen. And very often, the security teams and the teams responsible for monitoring are out of that loop. The developers are told: get it done. They're very often given unrealistic and super tight deadlines, because it's a very competitive space, and it's a race to be the first to really bring this to mass scale globally, right? Because the opportunity is massive. And so you have these kinds of misaligned incentives between what the security team is responsible for and what the development team is responsible for. The development team is told to get it done. The security team is left to chase behind them and understand what's going on. And so whether it's AI, whether it's IoT, connected cars, what have you, you always have to start with that visibility. And it's the same approach on either side. If you have an understanding of what's there, then you can start the next step of the process, which is to assess all of those connection points and understand the specific vulnerabilities of each of them. When you have that understanding, then you can look at: what are we going to do? Are we going to try to lock it down with network security controls? Are we going to try to, I don't know, apply a stronger handshake protocol? Or do we need to go back to the developers and say, actually, you've got to fix the design of this thing, because there are flaws in it that really have to be addressed and can only be addressed on the software side? So you've got multiple options, but it starts with having that visibility picture and an assessment of what's out there, to know where the risks lie. And that's the same across either side, AI or IoT.
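The visibility-first sequence described above (inventory the APIs, assess each connection point, then pick a remediation path) can be sketched as a toy pipeline. Every endpoint and field name here is hypothetical, chosen only to show the shape of the workflow:

```python
# Toy visibility-first workflow: build an API inventory, flag risky
# endpoints, and route each finding to a remediation option. All endpoint
# paths and fields are hypothetical.

inventory = [
    {"path": "/telemetry/upload", "auth": "token", "exposed": True},
    {"path": "/debug/dump",       "auth": None,    "exposed": True},
    {"path": "/internal/health",  "auth": None,    "exposed": False},
]

def assess(api: dict):
    """Return a risk finding with a remediation path, or None if it looks fine."""
    if api["exposed"] and api["auth"] is None:
        return "no auth on public endpoint: fix the software design"
    if not api["exposed"] and api["auth"] is None:
        return "unauthenticated internal endpoint: lock down with network controls"
    return None

# Step 1 is the inventory itself; step 2 is the assessment over it.
findings = {api["path"]: risk
            for api in inventory
            if (risk := assess(api)) is not None}
```

The point of the sketch is the ordering: without the inventory list, neither assessment nor the choice between network controls and design fixes can even begin.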
Erik: Got you. Just one final question, in case there are any lawyers who made it to the end of the podcast here. I believe I heard — I might be misremembering whether it was Congress or an executive order — that there was some legislation that actually put legal liability around cybersecurity, and IoT companies have to have some kind of defense. Certainly, with connected vehicles, my understanding is that, for fully autonomous vehicles, there would be an expectation of liability for the OEM. But then you get into this situation where, let's say, there's a tier one supplier that's supplying components to the OEM, and they kind of built the API. Then there's the OEM. Then there's maybe Hertz, the rental car company which owns the fleet, and then there's the driver. So where does the liability lie in these circumstances?
Jeremy: It's such a good question, but it is one that is so hard to answer right now. You've got a number of things in play here. First, you've got a mandate that, at least here in the US, came out from the Cybersecurity and Infrastructure Security Agency (CISA): the Secure by Design mandate. It's about a year old right now, by the way. The problem is it didn't have any specific consequences to it, nor did it have specific assignments for who actually owned the security or the secure design. Second, you have the SEC, which is now requiring material breach event filings and notifications. And so if you look at it from the CISA side, you can say, well, okay, you've got the tier one manufacturer, and then you've got these OEM providers. I don't know who's responsible. It's not clear in the mandate yet. And by the way, there hasn't been any case law precedent to really nail that down just yet.
We're seeing our first prosecutions on cybersecurity liability right now, and all of those are taking way longer than expected and, arguably, by the way, are mistargeted. But that's a separate topic that we don't really have time for today. Then you've got the SEC mandate, which is also super vague. It's: you don't have to tell us about your design. You don't have to tell us how good you are at, let's say, vulnerability management. But you have to tell us when you've had a material breach. What defines material? And for a while, there was even uncertainty around the time period. I would say there's maybe even still uncertainty around the time period. I think it's 96 hours right now, 4 days from the time of breach. But is it from the time the breach event happened, or from the time that you learned about it? Is it from the time that there was, let's say, the first data for sale on the dark web? All of that is still unclear.
So if I'm a lawyer looking at the space, you kind of have to monitor both sides. You're going to have to monitor the case law and the precedent set by the courts a lot. It's still super unclear in my mind. By the way, I've talked to a lot of people off the record, and they are super scared. I mean, specifically, people in senior security leadership positions, particularly at publicly-traded companies, the ones who fall under the SEC regulations. They are confused, and they're scared. They don't know whether they're going to be personally liable for things going forward. A lot of people have talked about some of those filings as the Sarbanes-Oxley moment for cybersecurity. In the aftermath of the Enron financial scandal, Sarbanes-Oxley came into effect, and CEOs and CFOs of publicly-traded companies now have to sign off on financial statements. Well, is that coming for security professionals as well? Are they going to have to sign off on the security posture or the security programs that their organizations have? Nobody is sure. A ton of uncertainty, a lot of fear, a lot of confusion right now, unfortunately.
Erik: Yeah, okay. Well, hopefully, we work through this. I know here in China, the rumor is that a lot of the OEMs have kind of L3 automation capability built into their vehicles. But they don't activate it, because then they become legally liable when a car is in that state. So they call it something like L2.5. Consumers understand that it's L3, but the OEMs won't call it that because of the liability. But let's see. This will have to work its way out. Jeremy, this has been, for me, a fascinating conversation. Is there anything that we missed that's important for folks to know?
Jeremy: There's nothing major that we missed. I think we've covered a broad range of things around AI and IoT, and I hope it's been a useful conversation. The only thing I would say, for anybody who's interested in learning more, please do have a read of our State of API Security Report 2024. It's free. You can download it off our website. If you have any questions afterwards, please just reach out to us. We're more than happy to help.
Erik: That's FireTail.io. Jeremy, thanks for joining me on the call.
Jeremy: My pleasure, Erik. Thanks so much for having me.