Stopping cybercriminals is difficult enough before you start adding AI to the equation. But AI is increasingly common in cybersecurity practice, and it will only become more prevalent with time. Still, these challenges stand between now and a time when AI is synonymous with cybersecurity, at least more so than it already is.
These cybersecurity professionals shared their view of the greatest challenges to AI adoption. Here's what they said:
1. Omar Yaacoubi, co-founder and CEO of Barac
“Although AI in cybersecurity is still in its early years, as more and more AI solutions materialize, IT departments will face the challenge of convincing their superiors that, firstly, it’s worth investing in AI solutions and, secondly, that they’re investing in the right one.
Because AI in cybersecurity is a relatively new thing, many organizations don’t yet have a designated budget or slot for this type of solution, having instead laid out their plans for traditional security protocols like anti-virus and firewall protection. As new threats emerge every day – including cybercriminals manipulating AI for their own benefit – organizations will also need to look to new solutions. IT departments will have to convince those in charge of the purse strings to loosen them a little to incorporate new AI-based solutions.”
2. Eyal Benishti, founder and CEO of IRONSCALES
“In a constantly shifting threat landscape, there isn’t a silver bullet solution. However, AI will be part of the equation to address endpoint, email, network and web security. As organizations adopt AI, CISOs and SOC managers will demand that AI security solutions provide validation and rationales for the analysis generated by the algorithms. Just as it will be some time before we see widespread adoption of autonomous cars, it will take time to realize an autonomous vision for AI-powered mail security.”
3. Mike MacIntyre, Chief Scientist, Panaseer
“The algorithms embedded in many modern security products could, at best, be called narrow (or weak) AI. They perform highly specialised tasks in a single (narrow) field and have been trained on large volumes of data specific to a single domain. This is a far cry from general (or strong) AI, which is a system that can perform any generalised task and answer questions across multiple domains. Who knows how far away such a system is (there is much debate, ranging from the next decade to never), but no CISO should be factoring such a tool into their three-to-five-year strategy.
Another key hurdle that is hindering the effectiveness of AI is the problem of data integrity. There is no point deploying an AI product if you can't get access to the relevant data feeds or aren't willing to install something on your network. The future for security is data-driven, but we are a long way from AI products following through on the promises of their marketing hype.”
4. Stacy Stubblefield, co-founder and Chief Innovation Officer at TeleSign
“AI is not a one-size-fits-all technology. Each client has its own particular need for AI, and each provider has its own specific technology and series of APIs, unique to its business. Additionally, cybersecurity threats continue to evolve, forcing companies to keep updating their technology to protect against each threat. While AI is being utilized by bad actors, it’s also helping data scientists and developers improve user experience, increase revenue, grow user bases and—in the security space—improve fraud detection and prevention products and solutions.”
5. Einaras von Gravrock, CEO of Cujo AI
“Even though AI is not a new field, it can be genuinely challenging to find the right experts that can create, train, and manage AI algorithms. Cybersecurity faces a shortage of experienced professionals, and AI expertise makes this issue even more complex. Many top universities and leading companies have already started closing this gap by creating new training programs. However, the adoption of AI today is slowed because these new experts will need to gain practical experience.”
6. J.J. Guy, COO of JASK
“Changing processes that have already been in place. While the change is for the better, adding AI into the enterprise defense mix forces an organization to rethink how they use and deploy people, processes and technology to protect their assets. With automation taking care of more mundane work, security teams need to shift their focus to higher level duties that add value elsewhere – which can be a big adjustment.”
7. Aby Varghese, Chief Technology Officer at UIB
“While AI models are still evolving (they’re not yet completely “done”), the #1 challenge to AI adoption in cybersecurity is the limited number of people with the needed AI skills. You need to be at the top of your game to work with cybersecurity.”
8. Dr. Murat Kantarcioglu, Professor of Computer Science at The University of Texas at Dallas
“A high false-positive rate (i.e., too many alarms raised by the AI system) is the number one technical challenge for AI adoption. When a system raises too many alerts, analysts start to ignore them, which in turn delays the detection of attacks. Beyond the technical challenges, the new mindset required for humans and AI to work together effectively also makes widespread adoption harder.”
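Kantarcioglu’s point is, at bottom, the base-rate problem: even a detector with a low false-positive rate buries analysts in false alarms when real attacks are rare. A back-of-the-envelope sketch (the numbers below are illustrative, not from the quote):

```python
# Illustrative sketch: why a "low" false-positive rate can still mean
# that almost every alert an analyst sees is a false alarm.
def alert_precision(fpr: float, tpr: float, attack_rate: float) -> float:
    """Fraction of raised alerts that correspond to real attacks (Bayes' rule)."""
    true_alerts = tpr * attack_rate          # attacks correctly flagged
    false_alerts = fpr * (1 - attack_rate)   # benign events incorrectly flagged
    return true_alerts / (true_alerts + false_alerts)

# A detector that catches 99% of attacks with only a 1% false-positive
# rate, applied to traffic where 1 in 10,000 events is malicious:
precision = alert_precision(fpr=0.01, tpr=0.99, attack_rate=0.0001)
print(f"{precision:.2%} of alerts are real attacks")  # prints "0.98% of alerts are real attacks"
```

With those assumed rates, roughly 99 out of 100 alerts are false alarms, which is exactly the alert-fatigue dynamic described above.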
9. Anuj Goel, CEO and co-founder of Cyware
“-Cost of implementation
-Time taken to train a system
-Level of complexity to effectively implement
-As new threats emerge, security solutions that use artificial intelligence have to be re-trained in order to keep up.”
10. Raul Popa, CEO of TypingDNA
“We've learned lately that very accurate models can be fooled quite easily, and this is true pretty much everywhere AI is employed. Unfortunately, in cybersecurity we have to be more careful. For example, I'm sure everyone has heard about “adversarial glasses”: engineered paper glasses that make you look like somebody else. Imagine this technology being used by criminals.
This is just one example, but in general, if you have access to the algorithms and internal ML models, you can use brute force to create adversarial samples that will help you fool the system later on through so-called “adversarial attacks”. This is just one of the reasons why AI and cybersecurity are being kept at a reasonable distance even by experts. General fear of AI is also a consistent problem: people fearing face recognition, behavior analysis, and other such AI systems slows the pace of adoption, and in some cases we may even see steps back.”
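The brute-force attack Popa describes can be sketched in a few lines. Everything below is invented for illustration: a toy linear “threat score” model stands in for a real detector, and a random search perturbs an input within a small budget until the model’s verdict flips.

```python
import random

# Toy detector (weights and threshold are invented for illustration).
WEIGHTS = [0.8, -0.2, 0.5, 0.3]
THRESHOLD = 1.0

def is_flagged(features):
    """Linear 'threat score' classifier: flag if the score crosses the threshold."""
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return score >= THRESHOLD

def find_adversarial(features, budget=0.3, tries=10_000, seed=0):
    """Brute-force adversarial search: randomly nudge each feature by at most
    `budget` until the classifier's decision flips, or give up."""
    rng = random.Random(seed)
    original = is_flagged(features)
    for _ in range(tries):
        candidate = [x + rng.uniform(-budget, budget) for x in features]
        if is_flagged(candidate) != original:
            return candidate   # an "adversarial sample" that evades detection
    return None

sample = [1.0, 0.5, 0.6, 0.4]          # score 1.12 -> flagged as malicious
evasion = find_adversarial(sample)
print("evasion found:", evasion is not None)
```

Real attacks use far more efficient gradient-based methods, but the principle is the one in the quote: with access to the model, small engineered perturbations can defeat an otherwise accurate classifier.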
11. Joshua Crumbaugh, Chief Hacker/CEO at PeopleSec
“New technology requires new budgets and companies don't like to spend money on tools that they either underbudgeted for or didn't budget for at all. We need cybersecurity leaders in our enterprises to be less risk-averse in their spending in order to cultivate and nourish these innovative new AI cybersecurity startups.
One thing I know for sure is that what we're doing isn't working. Small improvements on the old technologies aren't going to solve the cybersecurity crisis. The foundation of cybersecurity was based on the laws of war and traditional laws of warfare don't always apply in cybersecurity. For example, building a bigger and better wall has never stopped a hacker. These fundamental misunderstandings that shape the construct of the cybersecurity industry are the reason we are spending around 10% more each year and getting around 10% worse.”
12. Kevin Landt, VP of Product Management at Cygilant
“Investigation and remediation are still challenges for AI to handle. AI is great at recognizing suspicious activity, but right now we still need good security analysts to work forward and backward through the forensic evidence to find the entry point and track down the attacker at different points in the network. AI provides a great starting point for threat hunting, but it can't fully automate the process yet.”
13. Emma Maconick, Partner in the Intellectual Property Transactions Group at Shearman & Sterling
“In the early phases of system implementation, AI in cybersecurity can produce false-positive alerts, which can deter teams from fuller integration and adoption of AI. AI’s effectiveness in cybersecurity is only as good as the data used to train the system. Especially in the implementation and start-up phases, security teams need to closely monitor and manage AI cybersecurity systems to identify those false positives and fine-tune the system’s output.”
14. Chris Day, Chief Cybersecurity Officer for Cyxtera
“The #1 challenge is a lack of credibility and proven results. Enterprises are hesitant to turn over critical detection problems to unproven technologies. Another challenge is that AI is often seen as a black box: how it works is opaque to the user or operator, which makes trusting it difficult for many.”
15. David Chavez, Vice President of Avaya Incubator at Avaya
“AI is only as good as its training data. Novel attack methods or first-of-their-kind exploits will easily be missed by an AI. Threats where malware lingers in the system waiting for a trigger may elude detection until the damage starts in earnest. AI training sets can be biased based on the data selected or collected; this is usually corrected with the involvement of data scientists, but those scientists need to be domain experts. The additional skill sets required mean that experts are scarce and expensive, which can limit the breadth of adoption. A true test will be whether early adopters judiciously invest in employing and training future AI and data experts in R&D while there is little immediate or short-term return on investment.”
16. Carl Hasselskog, co-founder and CEO of Degoo
“The challenge with AI lies in its implementation. Many companies understand the need to integrate the technology to benefit users, but too often they do so from their own vantage point and neglect their customers’ priorities in the process. The technology in Apple’s voice assistant Siri, for example, is underused compared to Amazon’s Alexa because users feel awkward talking to themselves in public and would rather use voice technology in the privacy of their own homes. Companies need to empathize with their users when implementing AI and ask, ‘Will this AI component make their lives easier?’”
17. Rodrigo Orph, co-founder of CVEDIA
“Legacy compatibility is a huge challenge to AI adoption in cybersecurity. Developing AI that meets the security needs of one system is complex enough, but then you need to factor in things like Linux systems, old Windows versions, and customized systems. It's extremely difficult to gather enough testing data for every possible configuration, so there are potential security holes.”
18. Chris Bates, VP of Security Strategy, SentinelOne
“Many people aren’t sure how to evaluate AI solutions, so they ask the wrong questions. For example, many people want to know if a solution uses a particular algorithm or uses deep learning, but the truth is that the optimal algorithm, and how to tune it, can be determined using a simple brute-force search. In other words, it's not hard to try many different combinations and see what works best. While machine learning expertise is certainly valuable, and you can get a lot of benefit from knowing how to explore and manipulate your data, the real “secret sauce” is the data itself — both the raw files and the extracted features. You need to put a lot of effort into collecting a large, realistic, varied, and unbiased dataset, as well as into engineering insightful features that help AI algorithms fit better models.”
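The “simple brute force search” Bates mentions is just a grid search over candidate algorithms and hyperparameters. A minimal sketch, with an invented scoring function standing in for real train-and-validate runs:

```python
from itertools import product

def evaluate(algorithm, params):
    """Stand-in for training + validation; returns a mock accuracy.
    In practice this would fit a model and score it on held-out data."""
    base = {"tree": 0.90, "linear": 0.85, "knn": 0.88}[algorithm]
    return base - 0.01 * abs(params["depth"] - 5)   # toy penalty term

# Candidate algorithms and hyperparameter values (invented for illustration).
search_space = {
    "algorithm": ["tree", "linear", "knn"],
    "depth": [3, 5, 7, 9],
}

# Brute force: enumerate every combination, keep whichever scores best.
best = max(
    (dict(zip(search_space, combo)) for combo in product(*search_space.values())),
    key=lambda cfg: evaluate(cfg["algorithm"], {"depth": cfg["depth"]}),
)
print(best)   # the highest-scoring algorithm/hyperparameter combination
```

The enumeration itself is trivial, which is the quote’s point: the differentiator is the quality of the dataset and features being searched over, not the choice of algorithm.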
Have expert insights to add to this article?
Share your feedback and we'll consider adding it to the piece!