Generative AI Brings Cybersecurity Boost For Majority Of Experts
A new report by CrowdStrike has found that 39% of security professionals believe the security benefits of generative artificial intelligence (AI) outweigh the harms. Adoption, however, remains cautious and early-stage: 64% of respondents have purchased or researched generative AI tools, but only 6% are actively using them.

Cybersecurity experts are increasingly turning to generative AI to improve their ability to respond to and defend against cyber threats. The technology can be used for brainstorming, research, or analysis, and can pull data from disparate sources into one window in various formats, shortening the time it takes to research an incident.
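As a rough illustration of pulling disparate incident data "into one window," the sketch below normalizes records from three hypothetical sources (an EDR event, a firewall log line, a ticket note) into a single prompt an analyst could hand to a generative AI assistant. All formats, field names, and the prompt wording are assumptions for this sketch, not any vendor's real schema or API.

```python
from datetime import datetime, timezone

# Hypothetical incident records, each in its source's native format (assumed for the sketch).
edr_event = {"ts": 1714406400, "host": "web-01", "action": "process_start", "proc": "powershell.exe"}
firewall_line = "2024-04-29T16:01:12Z DENY tcp 10.0.0.5:49213 -> 203.0.113.7:443"
ticket_note = "User reported a suspicious email attachment around 16:00 UTC."

def normalize(source: str, record) -> str:
    """Reduce each source's native format to one plain-text line."""
    if source == "edr":
        ts = datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat()
        return f"[EDR {ts}] {record['host']}: {record['action']} {record['proc']}"
    if source == "firewall":
        return f"[FW] {record}"
    return f"[TICKET] {record}"

def build_incident_prompt(records) -> str:
    """Merge normalized events into one prompt for a generative AI assistant."""
    lines = [normalize(src, rec) for src, rec in records]
    return "Summarize the likely attack chain from these events:\n" + "\n".join(lines)

prompt = build_incident_prompt([
    ("edr", edr_event),
    ("firewall", firewall_line),
    ("ticket", ticket_note),
])
print(prompt)
```

The point of the sketch is the consolidation step: once events share one normalized text form, a single query can cover what would otherwise be three separate console searches.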

However, security professionals are also concerned about the potential risks of generative AI, including exposure of data to the large language models (LLMs) behind AI products, attacks launched against generative AI tools, a lack of guardrails or controls in those tools, AI hallucinations, and insufficient public policy regulating generative AI use.

To mitigate these risks, organizations must consider safety and privacy controls as part of any generative AI purchase. This can help protect sensitive data, comply with regulations, and prevent financial, legal, and reputational damage.

The top economic concern among respondents was quantifying the return on investment (ROI) of adopting generative AI products, followed by the cost of licensing AI tools and unpredictable or confusing pricing models.

Security teams want to deploy generative AI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding, and eliminate the complexity of integrating new point solutions.

To defend against cyber threats, organizations can apply generative AI to threat detection and analysis, automated incident response, phishing detection, enhanced security analytics, and synthetic data generation for training. Without proper safeguards, however, AI tools can expose vulnerabilities, generate harmful outputs, or violate privacy laws, leading to significant consequences.
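One of the uses listed above, phishing detection, can be sketched as a simple heuristic pre-filter that a team might run before (or alongside) an LLM-based classifier. Every signal, weight, and threshold here is an illustrative assumption for the sketch, not a real product's detection logic.

```python
import re

# Assumed suspicious signals and weights; a real deployment would tune these
# against labeled mail data rather than hard-code them.
SUSPICIOUS_PATTERNS = [
    (r"verify your account", 2),
    (r"urgent", 1),
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),  # links pointing at a raw IP address
    (r"password", 1),
]

def phishing_score(message: str) -> int:
    """Sum the weights of suspicious signals found in the message body."""
    text = message.lower()
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS if re.search(pattern, text))

def triage(message: str, threshold: int = 3) -> str:
    """Route high-scoring mail to an analyst; send the rest to an LLM for a second look."""
    return "escalate" if phishing_score(message) >= threshold else "llm_review"

print(triage("URGENT: verify your account at http://203.0.113.7/login"))  # escalate
print(triage("Team lunch at noon on Friday"))                             # llm_review
```

Splitting the pipeline this way keeps the (cheap, auditable) heuristic in front, so only ambiguous messages consume LLM capacity, which also speaks to the ROI and pricing concerns noted above.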

The report’s findings highlight the need for organizations to carefully consider the benefits and risks of generative AI before making a decision. By understanding these factors, security professionals can make informed choices about how to deploy this technology effectively and responsibly.