Microsoft unveiled Security Copilot today as part of its ongoing effort to incorporate generative AI into all of its products. The new tool promises to “summarize” and “make sense of” threat intelligence.
In a vague announcement, Microsoft pitched Security Copilot as a way to correlate attack data and prioritize security incidents, something several existing tools already do. What sets it apart, Microsoft claims, are generative AI models from OpenAI, notably the recently released text-generating GPT-4, and the tool’s integration with the company’s existing lineup of security products.
According to Microsoft Security senior vice president Charlie Bell, “improving the state of security involves both people and technology – human inventiveness matched with the most cutting-edge tools that help apply human experience at speed and scale.” He added: “With Security Copilot, we are building a future where every defender is equipped with the tools and technologies needed to make the world a safer place.”
However, Microsoft didn’t specify exactly how GPT-4 is used in Security Copilot. Instead, it highlighted a custom-trained model, possibly based on GPT-4, that powers Security Copilot and “deploys skills and queries” relevant to cybersecurity.
Microsoft emphasized that the model is not trained on customer data, addressing a common criticism of services built on language models.
Microsoft believes this custom model can “capture what other approaches might miss” by offering security-related advice, recommending the best course of action, and summarizing events and processes. Still, it’s unclear how effective such a model will be in production, given the well-documented tendency of text-generating models to produce falsehoods.
Microsoft acknowledges that the custom model behind Security Copilot doesn’t always get things right, conceding that “AI-generated content can contain errors.” “We are changing its responses as we continue to learn from these encounters to generate more logical, relevant, and valuable answers,” the statement reads.