AI/ML in Cybersecurity: Hype or a Revolution in Capability?

Jonathan Nguyen-Duy

Field CISO at Fortinet

Renee Tarun

Deputy CISO at Fortinet

Jim Richberg

Field CISO at Fortinet

Learning Objectives

Please join the Field CISOs and Deputy CISO of Fortinet, Jim Richberg, Renee Tarun, and Jonathan Nguyen-Duy, in this executive interview, where they discuss why CISOs should add AI/ML to their arsenals and what to look for in AI tools.


"When we think about AI and automation, two words come out: proactive and predictive. "

Jonathan Nguyen-Duy

Field CISO at Fortinet

Renee Tarun

Deputy CISO at Fortinet

Jim Richberg

Field CISO at Fortinet

Transcript

Britt Erler

Hello, everyone, and welcome to the CIO VISION Leadership Virtual Summit hosted on Quartz Network. My name is Britt Erler, QN Executive Correspondent. Thank you so much for joining us. In today's conversation, we will discuss AI and machine learning and their impact on cybersecurity operations. Sharing their insights, we have three fantastic executive speakers joining us today from Fortinet. I will have each of them go around, introduce themselves, and provide a quick background before we get started. Jim, would you like to start?


Jim Richberg

I'm Jim Richberg. I'm the Field CISO at Fortinet who focuses largely on the public sector. I joined the company a little more than two years ago, after 33 years in the federal government, where I was the National Intelligence Manager for Cyber, so basically the person coordinating cyber threat intelligence and trying to orchestrate what we were doing at the national level on a lot of those issues. I helped build a couple of the big national cybersecurity programs for Presidents Bush and Obama. I really come at this from a public sector focus, so that's my primary passion here at the company: helping people in the public sector figure out how they want to address their cybersecurity problems.


Britt Erler

Fantastic. Renee, would you like to go next?


Renee Tarun

Sure. I'm Renee Tarun, and I'm actually the Deputy CISO at Fortinet. Prior to joining Fortinet, I spent more than 20 years with the US federal government. Part of that time was with the US Secret Service, and the majority of the time was at the National Security Agency. My last assignment at NSA was serving as Special Assistant to the Director of NSA for Cyber, doing cross-agency coordination and collaboration on everything from cyber policy and strategy to operations at NSA, as well as working with Jim and others among our interagency partners and with the White House.


Britt Erler

Fantastic. Jonathan?


Jonathan Nguyen-Duy

My name is Jonathan Nguyen-Duy. I am the Field CISO for Strategic Services. I came to Fortinet five years ago to work on the Fortinet Security Fabric. My background is 25 years in cybersecurity, including 16 years at Verizon, where I was part of the executive leadership team that ran the MSS practice for both the public and private sectors. For my last two years there, I was a security CTO. We were the team that ran the Verizon Data Breach Investigations Report, so looking at root-cause analysis of over 12,500 data breaches led me to some fundamental conclusions about how to organize technology and use things like AI. That's what drove me to come to Fortinet: the chance to work on getting the technology right. Thanks for having me here today.


Britt Erler

Thank you all for being here. It's a pleasure, and I'm really excited to hear your insights on this topic. You clearly all have extensive experience and have seen what works and what doesn't in the IT and cybersecurity space. Let's kick it right off. AI and machine learning have become major themes in the IT space, especially within the last few years. Where do you come down on the issue of whether AI and machine learning are overhyped as solutions in the cybersecurity space, and why?


Jim Richberg

I don't see it as an issue of their being overhyped so much as their being misunderstood. You can simplify it to say there seems to be an assumption that AI is a powerful tool, AI basically affects everything that's digital, it therefore must be affecting cybersecurity, and therefore it must be helping. The reality is, in my experience, relatively few people, certainly at the executive level, really understand exactly how it works in cybersecurity, the breadth of places it's being applied, and how the impact is being felt. That's unfortunate, because I really do think it is a transformational technology. Coupled with a platform-based approach, these broad ecosystems of capability are a game changer; we finally have a one-two punch that may take the advantage away from the attackers and put it on the side of network defenders. I really wish there weren't so much misunderstanding about exactly what AI is doing for us in this field.


Renee Tarun

I agree with Jim. AI and machine learning really need to be in the arsenal of today's CISOs. The speed of digital innovation has completely transformed how organizations do business. Instant access to critical business tools and information through cloud-based applications lets workers reach any resources they need from any location on any device. However, that same innovation has also transformed cybercrime, raising the bar on both the speed and the severity of the attacks we're seeing, with successful data breaches now costing an average of almost $4 million. Part of the challenge is the speed at which attacks are occurring, and that is complicated by the fact that security tools, for a lot of organizations, are simply unable to react in time to prevent serious incidents. Previously, cyber attacks moved at human speed, with manual execution required at every step. That manual process once gave cyber defenders a viable chance to catch an exploit before it caused major damage. Now that cyber criminals are capitalizing on digital innovation, they're able to automate and apply AI to many of their tactics. This has enabled them to quickly create more sophisticated, multi-vector attacks that can be carried out at machine speed. For example, cyber criminals are now leveraging AI and automation to actively locate and exploit multiple vulnerabilities simultaneously while evading detection. Automation also makes those attacks much more prolific and more damaging, so CISOs find themselves constantly searching for new tools to add to their arsenal, often only to find that cyber criminals have developed even more advanced ways to circumvent the security controls in place. Traditional security approaches, from my perspective, really need to be complemented with alternative models such as AI and automation. These give CISOs the ability not only to mitigate the risks brought on by automated cyber attacks with faster response times, but also to gain broader visibility and simplified network management. As Jim said, we really need to get ahead of the cyber adversaries.


Britt Erler

Jonathan, anything to add?


Jonathan Nguyen-Duy

Great points from my colleagues. Part of leveraging AI is approaching it with the lessons we've learned over the last 25 years: we need to stop treating things as operating silos or specific point products and instead treat them as an integrated capability. Renee is absolutely spot on. AI is a great tool that leverages automation to enhance the accuracy of detection and accelerate the speed of mitigation. AI, in many ways, has been overhyped, and I think the biggest question about AI today is how it's going to be applied. Are you thinking about using machine learning? Are you thinking about using deep neural networks or artificial neural networks? Are you thinking about using unsupervised or supervised learning? How are you going to apply it? It is a tool, but the benefit of that tool is how it's applied. The need right now is to ensure, first, that you have visibility across the LAN, WAN, data center, and cloud edges; second, once you have visibility, you can leverage AI to have better contextual awareness about what is happening. If that AI is integrated, the way we've approached it at Fortinet, with your operations and the ability to automate your operational elements across networking and security, as well as monitoring the end user and that digital experience, then you begin to see the promise of AI. AI in itself is a great tool. It becomes a powerful element when it's complemented by an integrated fabric, so we avoid those past mistakes of creating standalone silos and standalone products that were never integrated and whose utility was therefore greatly diminished.


Jim Richberg

Britt, let me elaborate and put a little spin on what Jonathan said, because when you put the strength of AI, especially as the back end of a broad system, together with the fabric, this integrated platform approach, it turns what is often considered one of our greatest liabilities or vulnerabilities, the growing size and complexity of the attack surface, into a benefit. Imagine you take that breadth of the attack surface and you instrument it, and the instrumentation is sensors reporting back to a place with AI and machine learning powerful enough that you can make sense of what's happening on your network in real time. You now have a barometer or a thermometer that tells you what normal, healthy network activity looks like and what abnormal network activity looks like, and a deep neural network, that kind of machine learning, is really good at saying, "This is abnormal and bad," either bad based on what it can do or bad based on its own characteristics. Then, the same devices that are sensors are also controls, so once you've decided something is worth acting on, you can send out the command not only to stop it where it's happening, but to inoculate everybody else on the attack surface against it. Because you're doing this in real time, in some cases sub-second, you're turning the attack surface into a sensor network. That means as soon as the intruder pokes at one place, while they're still groping their way to success, you get ahead of them and divine what they're doing. It's as if patient zero in Wuhan were battling an infection, and the WHO had said, "Oh, look, a novel thing!" and started developing a vaccine even while that first patient was still fighting the infection. That's what I mean by transformational power, and enterprises don't have to build it themselves; it's being done with the back-end analytics and AI by the OEMs like Fortinet. It's happening invisibly, and that's why I say this is transformational.
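
To make the baseline-versus-anomaly idea concrete, here is a minimal sketch in Python. The telemetry fields, thresholds, and actions are hypothetical and chosen for illustration; this is not Fortinet's implementation, just the shape of the "attack surface as sensor network" pattern Jim describes.

```python
# Illustrative sketch: score sensor telemetry against a learned baseline and,
# when something is abnormal, act locally and "inoculate" every other edge.
# Field names, thresholds, and actions are hypothetical.
from statistics import mean, stdev

# Baseline built from telemetry reported during normal operation.
baseline_bytes_per_min = [1200, 1350, 1280, 1400, 1310, 1275, 1330]
mu, sigma = mean(baseline_bytes_per_min), stdev(baseline_bytes_per_min)

def score(sample_bytes_per_min: float) -> float:
    """How many standard deviations a new reading sits from 'normal'."""
    return abs(sample_bytes_per_min - mu) / sigma

def handle(sensor_id: str, sample: float, threshold: float = 3.0) -> str:
    """Quarantine an abnormal flow and push the indicator to all other edges."""
    if score(sample) > threshold:
        # In a real fabric this would be an automated quarantine plus a
        # signature/indicator push to every other sensor and control point.
        return f"ALERT: {sensor_id} abnormal ({sample} B/min); quarantine and notify all edges"
    return f"OK: {sensor_id} within normal range"

print(handle("branch-fw-01", 1295))   # within the learned baseline
print(handle("branch-fw-02", 9800))   # anomalous; triggers the block/inoculate path
```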


Renee Tarun

I would agree with Jim. It's almost like flipping the script on the adversaries. The adversary has been using speed and scale against us, so now we're taking that same speed and scale inside our own systems and using it as a strategy. In a lot of cases we find organizations taking a reactive approach, and a reactive approach in today's environment simply means additional downtime, additional outages, and increased costs any time you have a cyber incident. Speed is essential. What I mean by flipping the script is taking the approach of more proactive threat management strategies. Again, cyber criminals are taking advantage of every second they can; their techniques are evolving and the attack surface is expanding, and it can really start to overwhelm and outnumber your security teams. By leveraging solutions that incorporate machine learning and AI, CISOs can proactively tackle some of these automated attacks we're seeing and stay ahead.


Jonathan Nguyen-Duy

That's a great word, "proactive," Renee. When we think about AI and automation, two words come out: proactive and predictive. It's the ability to leverage data to do more proactive and predictive things, and not only on the security side, in terms of vulnerability management, system configuration, and misconfigurations, which are the bane of so much of our existence, and detecting sophisticated and not-so-sophisticated threats and anomalous behavior. It's also leveraging the integration with the networking side, so that you can improve the responsiveness of the network: SD-WAN that is highly adaptive, moving traffic from MPLS and broadband to 5G or 4G on demand based on the criticality of the application, the workload, and the underlying business process. Aligning that with security and your compute in the cloud, and leveraging AI, begins to show how you not only improve security but also improve network performance and make application performance more responsive. That leads to better outcomes and better customer experiences. AI, in conjunction with networking and security, which is where we converge at Fortinet, is, as Jim says, so transformational, because now the CIO and the CISO can say, "Hey, not only are we providing network performance and assurance and computing and security, but we're enabling new, better ways for customers and stakeholders and partners and employees to interact with our brand," creating new business processes, the foundation of digital transformation itself. Then you go from Doctor No to Doctor Know, and you become an enabler of the business. That's transformational, and that's why I'm hearing so many CISOs say, "Hey, I'm being asked to measure the ROI on digital experience. I'm being asked how security enables new revenue and accelerates time to market." I think AI begins to unlock a lot of things that we hadn't even thought about in the past.
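
As an illustration of the adaptive, criticality-aware path selection Jonathan describes, here is a minimal sketch. The application names, link metrics, and thresholds are hypothetical; it is not a description of any specific SD-WAN product, just the decision logic in miniature.

```python
# Illustrative sketch: choose a WAN path per application based on criticality
# and current link health. Names, metrics, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float

def pick_path(app_criticality: str, links: list[Link]) -> Link:
    """Critical apps get the healthiest link; best-effort traffic takes the
    last link that still meets a looser acceptability bar."""
    healthy = sorted(links, key=lambda l: (l.loss_pct, l.latency_ms))
    if app_criticality == "critical":
        return healthy[0]
    acceptable = [l for l in links if l.loss_pct < 2.0 and l.latency_ms < 150]
    return acceptable[-1] if acceptable else healthy[0]

links = [Link("mpls", 35, 0.1), Link("broadband", 60, 0.8), Link("5g", 45, 0.3)]
print(pick_path("critical", links).name)     # mpls (lowest loss and latency)
print(pick_path("best-effort", links).name)  # 5g (last acceptable link in the list)
```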


Jim Richberg

Jonathan raised the issue of ROI and measurement. From my perspective, having been involved in measuring cyber performance for a long time, that is arguably the weakest part of cybersecurity. It's not the technology, it's not even the workforce and skills gap, it's the fact that we can't reliably, predictably, and transparently measure cause and effect: if you give me this amount of money, here's how I can affect our security, here's how I can affect the threat against us. Sometimes we even argue over what counts as the investment. The people who do close support in your organization are the ones doing inventory and password management. They're doing things that arguably affect your cybersecurity, but they're probably not counted against the CISO's budget. So I ask, "How do you calculate ROI when you don't even know the number in the denominator of that equation?" Now, it's up to you: am I missing decimal dust, a rounding error, or am I missing significant things? That, I think, is organization specific, but the point is, measurement is what's really hard. Let's make lemonade out of lemons: one of the things that came out of the pivot to remote telework during COVID is that it really put IT and security in the spotlight. Most organizations had said, yes, they're mission enablers, they're part of the team, but absent those two things, when we went into lockdown, organizations would have gone into shutdown if the employees had just had to go home and binge watch. The fact that we were able to give them connectivity, and to provide some level of assurance, allowed our organizations to keep their heads above water. We were, to build on Jonathan's point, the heroes of the hour in terms of showing that networking and security are key. Now we need to find ways to build on that success going forward, saying we can not only give you that digital transformation, but security is intrinsic to the solution. You're getting an answer, and yes, it's secure, or else we wouldn't be proposing it as an option. It's baked into the DNA.


Renee Tarun

It really comes down to the way digital innovation and business are evolving. Jonathan and Jim both hit on it: it's about the security and IT teams working together, and about enabling better decision making, because one of the biggest challenges a lot of IT leaders face is the monumental task of making critical business decisions on the fly. By leveraging things like AI and machine learning, you're able to more quickly gather, analyze, and prioritize data, at the click of a button, to enhance threat and incident management processes. It helps create more efficiencies by including a lot of automation, and it also reduces the chance of human error. A lot of cyber incidents these days happen at the hands of well-intentioned but sometimes overworked humans. Even some of the most skilled IT professionals can make mistakes, and those mistakes can be very costly. Ultimately, a lot of organizations are now faced with decreasing budgets and limited resources, so it also comes down to increasing efficiency by adding machine learning and AI, adding automation to streamline workflows and create a more uniform and efficient environment. Not only does that make the organization stronger in terms of security, it also makes it more cost effective across the board.


Jonathan Nguyen-Duy

Take a look at one use case, our Fortinet Virtual Security Analyst. In the 52 weeks of development, at week 34 we were able to demonstrate that it was outpacing the average human SOC analyst, tier one, tier two, tier three. At week 52, we were able to demonstrate that not only was it outpacing humans, but because it's self-learning, it will keep improving. Humans are variable, good, bad, or indifferent, right? We hope they're good, but the practical reality is that at the end of the 52 weeks, we demonstrated it was the equivalent of five average human SOC analysts. If you take the fully loaded cost of a SOC analyst in any of the major metropolitan areas in the US, it's about $225,000 to $250,000. That one solution, our Fortinet Virtual Security Analyst, was equivalent to some $850,000 to a million dollars in cost savings. That's a very easy way to demonstrate the types of productivity enhancements that AI can generate for an organization, as Renee said. We simply cannot process the sheer volume, variety, and velocity of data that's coming at us. I remember SLAs that were measured in hours when I first started in the industry. Then the delays went down to minutes, and now, with 5G, we're talking about sub-five-millisecond SLAs. There is no way you're going to triage an event and decide how to mitigate it by swivel-chairing across multiple management consoles, not that I've ever done that, but the practical reality is that that didn't work 10 years ago, and it's really not going to work today. We need to use the tools that are available, and to do it in an integrated way. You really get better performance and cost savings that way.
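
As a back-of-the-envelope check on that ROI argument, here is a minimal calculation using the figures quoted above; the exact savings in any organization will depend on how many analyst-equivalents you credit and on local salary levels.

```python
# Rough, illustrative math behind the cost-savings claim above.
# Inputs are the figures quoted in the discussion; your numbers will differ.
fully_loaded_low, fully_loaded_high = 225_000, 250_000  # per SOC analyst, per year
analyst_equivalents = 5  # performance equivalent cited at week 52

savings_low = analyst_equivalents * fully_loaded_low
savings_high = analyst_equivalents * fully_loaded_high
print(f"${savings_low:,} - ${savings_high:,} per year")
# Straight multiplication gives roughly $1.1M-$1.25M per year; the discussion
# quotes a somewhat more conservative $850K-$1M range.
```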


Britt Erler

You have all made extremely compelling arguments for why integrating AI and machine learning is so crucial for an organization. Yet there are still companies that are reluctant to do so. Why is that? Are there certain obstacles we're not seeing that make them hesitate?


Jim Richberg

Part of it is the education issue, where I started by saying it's not overhype, it's misunderstanding. I did a lot of procurement when I was in government, and I joke that procurement usually comes down to a combination of the two P's: price and performance. People typically don't recognize that these platforms, these ecosystems, which at this point in the evolution of cyber technology are AI-powered and AI-connected, even exist. The best data source I point people to is the Data Breach Prevention study done by an organization called NSS Labs in 2019, at the request of their customers, who said, "Look, the OEMs are telling us these integrated technology suites are the greatest thing since sliced bread. Can you test that?" This organization was like Consumer Reports for cybersecurity: independent, rigorous testing. They took products from the leading OEMs, Fortinet and its peers, and said, "Okay, we'll take a test network, we'll take live threat data, and we'll replay it against all of these networks defended by these different products. We'll also engineer our own samples to be essentially zero-day exploits with as many as 13 layers of obfuscation." That's a high bar, because the reality is, nobody is perfect against a threat they've never seen before that's actively trying to hide from them. They found that firewalls were 90-plus percent effective as a class. They varied a bit, but they were in that cluster. They found that endpoint products as a class were a bit more effective. The interesting thing, Britt, is that when they allowed the same products running those two tests to be knit together by AI and ML, they rose across the board in effectiveness, and they became more cost effective as well. That's the study I always point executives to, to say, "Look, here's an actual study that says this approach works." It's the best antidote to people who want to look at all of their procurements from the magic quadrant perspective and say, "I'm going to start by looking at best of breed." If you do that, you lose sight of the fact that a solution that's integrated into a platform is going to outperform the best of breed.


Renee Tarun

I think some of it has to do with education, and some of it has to do with cultural change. For some organizations, it's the fear of loss of control. It's a perceived loss of control, when the reality is that the right tool can actually provide greater visibility and enhanced oversight of your cybersecurity processes. There's also some distrust of the technology. Sometimes you have highly skilled analysts who feel that they are more capable of handling things like incident response than a machine could be. Last, there's the fear of change. We see it in a lot of industries, such as manufacturing: when you bring in more technology, there's concern that machines are going to replace the human element. While AI systems do provide a lot of capability, they simply can't operate in a fully autonomous mode. In fact, a lot of AI implementations are being done such that the AI systems provide augmentation, adding intelligence and supporting the humans at what they do best, not fully replacing them. The fact is, these AI and machine learning technologies and capabilities are certainly going to change the way people work, but I think they're also going to create more opportunities for people, not necessarily eliminate them.


Jim Richberg

Renee and Jonathan, I'd be curious what your experience has been. I have not yet encountered an organization that said, "I brought in AI and I reduced the number of SOC analysts I had." You're right, Renee, it's augmentation. It's, "Look, my people are so much more effective. They're focusing on the tier three alerts, because I'm able to automate certainly the tier ones." I have yet to find an organization that said, "I downsized my workforce consciously." Maybe they had vacancies they couldn't fill, but I have yet to find an instance where they said they replaced the humans with the machine.


Jonathan Nguyen-Duy

No, I've never seen that either. The volume of requirements right now and the workload of SOCs are increasing, so I've never seen anyone actually downsize. I think another lens is complexity: a lot of people think it's overwhelming. That's one aspect. The other aspect is that the average tenure of a CISO today is, what, 14 to 18 months, so in many cases leadership is constantly changing. The practical reality is that AI is a lot more approachable than people think. Maybe it's not about deploying an AI platform in your own organization, but about thinking through the products and solutions you utilize that have AI baked into them, so that you're not making the investment in managing a platform, hiring data scientists, and doing the training yourself. Think about using something like the Fortinet Security Fabric, where all the elements are enhanced by AI: you take very proven, mature technologies in prevention, detection, and response, you marry them with AI, and you get the next iterative level of performance. That is seamless to the end user, so that's one way we manage complexity. It goes back to the original notion that people misunderstand how readily approachable and available this really is to any enterprise today.


Jim Richberg

Most of it is not a standalone product, as Jonathan said; it's integral to the actual products. And to the point Renee made, you're not ceding control to the machines. This is not bringing Terminator-style Skynet online and saying, "Now the machine is managing my network defense." The human sets the rules of engagement, the human sets the policy parameters: "If you see this, let it go. If you see that, alert on it. If you see this, block and quarantine." You set the rules within which the machine operates, so it's integral, and you're not turning control of the network over to the machine.
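
As a sketch of what "the human sets the rules of engagement" can look like in practice, here is a minimal, hypothetical policy table. The verdict names and actions are invented for illustration and are not any product's configuration syntax.

```python
# Illustrative only: human-authored rules of engagement that an automated
# system then enforces. Verdicts, actions, and the default are hypothetical.
POLICY = {
    "known_benign":    "allow",
    "suspicious":      "alert",       # notify a human; take no automated action
    "known_malicious": "quarantine",  # block the flow and isolate the endpoint
}

def enforce(verdict: str) -> str:
    """The machine applies the action the human pre-approved for this verdict."""
    return POLICY.get(verdict, "alert")  # unknown cases default to human review

print(enforce("known_malicious"))    # quarantine
print(enforce("never_seen_before"))  # alert, escalated to an analyst
```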


Britt Erler

Now, Renee mentioned earlier in the conversation malicious actors and their use of AI and machine learning. Let's talk a little more about that: what we're seeing in the industry, how they are impacting it, and how executives and organizations can be better prepared.


Jim Richberg

Here's the one that scares me, because of course we're all staring down the barrel of ransomware as a growing threat. The reality is, the overwhelming majority of ransomware gets introduced to an organization through successful spear phishing, so it's compromising the user via email. There are technical measures, but we also need the human firewall: users have to get smarter about not clicking. The area of AI and machine learning enabling the attacker that scares me the most, because it is right here in front of us, is the ability of AI and ML to help with content generation for spear phishing. There are open-source packages you can download from GitHub that can generate text from a small sample that even an AI expert can't tell didn't come from the original author. As a matter of fact, there was a well-known case last year where a member of one of these AI discussion forums decided to unleash a bot, one of these AI engines, on it, literally just scraping content that had been posted. Within a month, this bot-based author was one of the most popular new posters on the site; it even fooled the AI experts. Imagine now that the malicious actor not only takes your credentials and the address book from your email, they take your inbox and your outbox. Now the things they spam your address list with are customized automatically in terms of content and cadence, the way you talk to each of those people. This is not going to look like the boilerplate bank-scam email where something feels wrong; it doesn't feel wrong.


Renee Tarun

I agree with Jim. We're definitely seeing more ransomware attacks, because they're cheap, they're easy, and they're effective, and so attackers will continue to use them. I also think that in today's environment, adversaries are exploiting the fact that we have more remote workers than ever before. It really comes down to ensuring that you know who and what is accessing your network. With AI-driven solutions like network access control, cyber professionals can achieve clear visibility into every device accessing the network at any time. It also simplifies network management, because we're all struggling to manage the complexity of these growing, sprawling environments. Being able to handle those alerts in an automated process is ultimately going to make the IT and security teams better at making decisions, more proactive, and, ultimately, more cost effective.


Jonathan Nguyen-Duy

I was wondering why I wasn't getting those emails from that Nigerian prince any longer, so I'm glad to see that that's improving. Likewise, when I talk to our friends over at FortiGuard Labs, they tell me how the adversaries are utilizing AI to deconstruct software, to understand the vulnerabilities, and to develop attacks against those vulnerabilities. The rule of thumb is that for every 25 lines of code there is a bug, and software development today is highly disaggregated, so think about all the vulnerabilities in the software we're using today. Think about an adversary community that uses the dark web for formal and informal teaming arrangements, if you will, to create a business, malware as a service, as it were, and then uses that to deconstruct the software we all rely upon, the big brands that are embedded in all the infrastructure around enterprises and public sector agencies, discern the vulnerabilities within it, and then generate new tactics, techniques, procedures, and weapons to exploit them. That's what we're really facing: a dark web marketplace that's incredibly efficient at bringing together threat actors and resources to launch highly targeted attacks. We've gone from spear phishing to big game hunting, and to targeted campaigns of attacks. The thing I think about a lot right now is deepfakes, and Jim alluded to that: the ability not only to say, "Hey, that's how Jonathan writes his emails, that's his syntax, those are the pronouns he likes to use," but also to use video imagery, like this very call, to create all types of communications. I think that's where we're going, and you're going to need AI to discern those minute differences and detect what's anomalous and what's malicious.


Renee Tarun

What Jonathan said, I think he nailed it. From my perspective, there is more connectivity between the adversaries than ever before. There's a low barrier to doing malicious activity, because a lot of the tools and techniques they're using are bought and sold on the dark web; it's now a commodity. You don't necessarily need sophisticated programming skills, you can buy a lot of these attacks. They're developing platforms where you can essentially select whatever malware technique you want to use and automate a lot of the attacks. From my perspective, traditional organized crime used to be segmented by geographic location. Well, in the digital cyber world, there are few if any boundaries, and that allows different organizations to collaborate and, unfortunately, share their malicious techniques and tactics. That's why it's now more important than ever; I always say, "You never want to take a knife to a gunfight." If the adversaries are using these technological advancements against us, we need to incorporate them into our tool bags as well.


Jim Richberg

I'll give you one more example of adversarial use. Jonathan talked about how they're using it to find the vulnerabilities in our code. They're also using it to optimize the effectiveness of their own code. As Renee mentioned, they've basically got consortia, these broad platforms, so they can look at their existing inventory and ask, "Which of these has a good batting average for penetrating a target? Which of these has a good ability to move laterally? Which of these has a really elegant, clandestine way to exfiltrate?" They can literally take the high-performing modules from different pieces of malware and recompile them into a single piece of code. I say it's as if Dr. Frankenstein had said, "You know, I want to design an athletic monster, so I'm going to the graveyard of Olympians to dig up the legs from a sprinter and the lungs from a marathoner." That's what AI is enabling the attackers to do to improve their own tradecraft and code.


Britt Erler

As we wrap up this conversation today, having talked about all the benefits of implementing this in an organization: there are many companies that are just in the beginning stages and thinking, "Okay, I need to make this move, I need to make this a top priority," and some organizations are stuck in the middle, thinking, "Okay, how do I implement this and move the business forward?" Any final pieces of advice on what they should look for when implementing AI tools?


Renee Tarun

From my perspective, first, the tools really need to be able to give you broad visibility. They also have to be able to integrate; they need to work and play well within your environment. And you've got to be able to demonstrate a return on investment, like the example Jonathan gave with our Fortinet Virtual Security Analyst, where companies can see a dramatic decrease in man-hours over that 52-week period. So from my perspective, it's got to be broad, it's got to integrate, and it's got to show a return on the investment.


Jonathan Nguyen-Duy

Not all AI is created the same, so take a look and see whether it's supervised or unsupervised learning and whether that's right for your application or use case. Also take a look at how long it's been in existence. How long has it been in development? How proven is it? How many billions of nodes does it have? Is it above 10, 12, 13 billion nodes in that artificial neural network? Remember, the thing about AI is that it's only as good as its training. Not only are you looking for that pre-packaged experience and training, you're also looking for something that, when it gets into your environment, can leverage self-learning capabilities that are unique to you, and for a vendor that's continually updating that pool of knowledge, if you will. That's how you begin to differentiate. The other important thing is what Renee said: how readily does it adapt and integrate into your current solution set? Great intelligence and great insight are wonderful, but if they can't be implemented, they're kind of useless.


Jim Richberg

I suspect, if you're an organization like you described, Britt, that's just looking at getting into this, you're not going to be buying a standalone AI product. There are actually relatively few of those; there's no ACME Universal Cyber Integrator. What you're going to be doing is looking at AI as a dimension of some capability you're planning to introduce or upgrade in your infrastructure now. Then you start asking the questions Jonathan was teeing up: this has AI enriching it, so what is the training data? How many nodes? The reality is, it's a bit like saying you want to buy a car, and then you start asking questions about the engine. You don't say, "Hey, I want to get an engine, and I'll figure out where to put it around my house." An organization is probably not going to start by looking for a standalone, turnkey AI solution. These are all good questions about how they can enrich some existing part of their infrastructure, their security fabric approach, with AI, and then differentiate between the various vendors in the market. The reality is, this is transformational and everybody is doing it now, but it's not an equal race: they didn't all start from the same place, and frankly, they're not all spending the same level of resources on it.


Britt Erler

Any final pieces of advice or insights for the executives watching today who are looking to implement this in their organizations?


Jonathan Nguyen-Duy

One final bit. In the aftermath of any major disruption or breach, there are four questions that get asked: What did you know? When did you know it? What did you do about it? And was it reasonable? I think at the C-suite and board level, we know how dangerous the operating environment, the threat landscape, really is. We understand that legacy tools and techniques haven't been able to address that. We now know that AI is critical to getting things right. I think it's becoming part of reasonable care, so just about every security professional needs to think about it: Do I have it, and if I don't, when will I implement it? I think it's basic to delivering a reasonable level of care and security.


Jim Richberg

Again, it's going to be integrated into the things you're buying in your security ecosystem. From my perspective, it's a matter of saying, "There's AI capability in what I'm bringing online; how can I leverage that? How can I use it in my SOC? Can it help me as an executive?" Sometimes the same capability that gives you better defense gives you better measurement. Again, I said metrics are the Achilles' heel. We measure for three reasons: How am I doing? Can I prove it? How can I do better? AI, by being able to correlate this vast sea of data, can actually give you a starting point, a leg up, on all three of those questions.


Renee Tarun

I'll round out what Jim and Jonathan said. I think AI has got to be a part of your security strategy going forward. It's the only way you're going to evolve, especially if you want to be proactive rather than reactive. The former director of the FBI, Director Mueller, said there are two types of companies: those that have been hacked and those that are going to be hacked. From my perspective, if you want to take that proactive approach, then you need to be leveraging tools like AI in your environment. That means making sure you've got the right strategy and looking at the capabilities, but also making sure you're selecting the right vendors and partners to work with to help you on that AI journey.


Britt Erler

Jim, Renee, Jonathan, it has been a pleasure. Thank you so much for providing your insights today for our audience. Thank you to everyone who has joined us as well. I'm sure you will all have further questions for our executive speakers; not to worry, we will have a discussion forum underneath this presentation where you can ask questions and comment as much as you like. Thank you again for being here, and enjoy the rest of our CIO VISION Leadership Virtual Summit.


Renee Tarun

Thank you.

