
Microsoft on Tuesday announced a chatbot designed to help cybersecurity professionals understand critical issues and find ways to fix them.
The company has been busy fortifying its software with artificial intelligence models from OpenAI after OpenAI’s ChatGPT bot captured the public imagination following its November debut.
The resulting generative AI software can be “usefully wrong” at times, as Microsoft pointed out earlier this month when talking about new features in Word and other productivity apps. But Microsoft is forging ahead nonetheless, as it seeks to grow a cybersecurity business expected to yield more than $20 billion in 2022 revenue.
Microsoft Security Copilot combines GPT-4, OpenAI's latest large language model, in which Microsoft has invested billions, with a security-specific model Microsoft built using the daily activity data it collects. The system also draws on a given customer's security environment, though that data will not be used to train the models.
In response to a text prompt typed in by a person, the chatbot can create PowerPoint slides summarizing security incidents, describe the risk of an active vulnerability, or specify the accounts involved in an exploit.
A user can press a button to confirm that an answer is correct or select an "off-target" button to flag a mistake. Vasu Jakkal, corporate vice president of security, compliance, identity, management and privacy at Microsoft, told CNBC in an interview that such input will help the service learn.
Engineers inside Microsoft have been using Security Copilot to do their jobs. "It can process 1,000 alerts and tell you the two critical events in seconds," Jakkal said. The tool also reverse-engineered a piece of malicious code for an analyst who didn't know how to do it, she said.
This type of assistance can make a difference for companies that struggle to hire experts and end up hiring employees inexperienced in certain areas. "There's a learning curve, and it takes time," Jakkal said. "And now the skills built into Security Copilot can augment yours. So it's going to help you do more with less."
Microsoft isn't saying how much Security Copilot will cost when it becomes more widely available.
Jakkal said she expects many employees at a company to use it, not just a few executives. That means over time Microsoft wants the tool to be able to converse in a wider variety of domains.
The service will work with Microsoft security products such as Sentinel to monitor threats. Jakkal said Microsoft will determine whether to add support for third-party tools like Splunk over the next few months based on input from early users.
Frank Dickson, group vice president of security and trust at technology industry researcher IDC, said that if Microsoft required customers to use Sentinel or other Microsoft products to turn on Security Copilot, that could significantly influence purchasing decisions.
“For me, I was like, ‘Wow, this might be the biggest announcement in security this calendar year,’” he said.
There's nothing to stop Microsoft's security rivals, like Palo Alto Networks, from releasing their own chatbots, but getting out first means Microsoft will have a head start, Dickson said.
Security Copilot will be available to a small group of Microsoft customers in a private preview ahead of a wider release at a later date.