Today at Google Cloud Next, the company announced several new generative AI enhancements to its security product line, designed to make it easier to surface relevant information from massive stores of security data by asking plain-language questions.
These new capabilities are designed to help security teams do more with less, according to Google Cloud security UX head Steph Hay. “We’re really trying to supercharge security with generative AI to mitigate threats, and in particular prevent downstream impacts that our practitioners face today, to reduce the toil that the security teams deal with having to manage a growing attack surface, and really bridge the cyber talent gap,” Hay said at a press event last week.
“AI is enabling security teams to improve their security posture by generating AI summaries to describe threats, searching for patterns in security data to identify if teams or companies have been targeted, and recommending actions to take both in response to active threats and to proactively improve security posture,” she said.
First off, some context: last year, Google bought security intelligence firm Mandiant for $5.4 billion. It was a steep price, but the acquisition gives customers valuable threat data they can use to defend against attacks. The trouble is that it’s usually a lot of data, and even a skilled professional can struggle to find the nuggets that matter most to a given organization.
Duet AI in Mandiant Threat Intelligence helps security teams make sense of that mass of information by providing a relevant summary, so they can quickly grasp the nature of a threat. Its usefulness will depend on the depth and quality of those summaries, and on how well less skilled analysts can act on the information.
Duet AI for Chronicle Security Operations lets teams ask deeper questions, without knowing the tool’s query syntax, about whether a given threat actually endangers their company and, more importantly, how to respond to it. The answers will only be as useful as the questions practitioners ask and the summaries and recommendations the model produces.
Finally, Duet AI in Security Command Center allows less experienced security analysts to ask questions about security findings, potential attack paths, and proactive actions to understand the threat to the company’s operations.
All of these features use generative AI to help security teams, especially newcomers, understand the threats they face. Depending on the quality of the answers, they could make every analyst more effective.
The hallucination problem, where large language models make things up when they don’t know the answer, could be a serious issue in a security context. But Nenshad Bardoliwalla, AI/ML product leader for Vertex AI at Google Cloud, says the answer is to ground the models in a smaller, authoritative data set drawn from these security tools.
“We believe that a comprehensive set of grounding capabilities on authoritative sources is one way that we can control the hallucination problem and make these systems more trustworthy,” Bardoliwalla said.
The three security-related Duet AI products are in preview and will launch later this year.