Some of you know this. But I co-host the Microsoft Security Insights show each week. I also help maintain the production, post-production, marketing and the other miscellaneous stuff for the show.
Last week, we had a popular topic on using Azure OpenAI with Microsoft Sentinel. Our guest, Angelica Faber, is doing some amazing work with partners around this topic.
If you missed it, catch up here: Microsoft Security Insights Show Episode 167 - Angelica Faber
One glaring, open-ended question was raised in my mind during the episode: are there things a SOC needs to do before implementing Generative AI? And more to the point: How can a SOC prepare for Generative AI?
Generative AI has a lot of potential uses in a SOC. During the MSI Show episode, we focused primarily on incident and alert enrichment, and that's a quick, easy win for organizations that want to test the viability of AI use in the SOC. But are there things that must be done first to ensure that a) a SOC can take full advantage of Generative AI, and b) testing exposes the full capability so the review is well grounded?
Here are a few things that immediately came to mind - and really, these are all part of a standard framework for any SOC to ensure efficiency and effectiveness. But, alas, many security teams are busy enough with security events that focusing on SOC operations doesn't happen too often.
Here are my preparation thoughts:
Configure Incident tags to give ChatGPT more information.
Assign specific Incidents to specific analysts (based on skillsets or availability, etc.) through Automation Rules.
Practice gathering information to build better Generative AI prompts.
Configure and actually use TI.
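To make the tags and prompt-building items concrete, here's a minimal Python sketch of turning structured incident context into an enrichment prompt. The `build_enrichment_prompt` function and the incident dictionary are hypothetical stand-ins: in practice, you'd pull these fields from the Microsoft Sentinel API or a playbook, and send the resulting prompt to your Azure OpenAI deployment.

```python
# Sketch: building a Generative AI enrichment prompt from incident context.
# The incident fields (title, severity, tags, entities) mirror common
# Microsoft Sentinel incident properties, but this dict is a hypothetical
# stand-in for data you'd fetch from the Sentinel API or an automation rule.

def build_enrichment_prompt(incident: dict) -> str:
    """Assemble structured incident details into a focused analyst prompt."""
    entities = ", ".join(incident.get("entities", [])) or "none recorded"
    tags = ", ".join(incident.get("tags", [])) or "none"
    return (
        "You are assisting a SOC analyst.\n"
        f"Incident: {incident['title']} (severity: {incident['severity']})\n"
        f"Tags: {tags}\n"
        f"Entities involved: {entities}\n"
        "Summarize the likely attack scenario, suggest the next three "
        "triage steps, and list any related MITRE ATT&CK techniques."
    )

# Example usage with a hypothetical incident:
incident = {
    "title": "Suspicious sign-in from anonymous IP",
    "severity": "High",
    "tags": ["identity", "impossible-travel"],
    "entities": ["user: jdoe", "ip: 203.0.113.7"],
}
print(build_enrichment_prompt(incident))
```

Notice how the incident tags and entities do most of the work here: the richer the context a SOC records up front, the better the prompt, which is exactly why the preparation steps above matter.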
As you can probably tell, these are all couched in Microsoft Sentinel terms (Incidents, Automation Rules, etc.), but the concepts apply to any comparable security tool.
This list will definitely grow. Have more? Let me know in the comments here or on X (@rodtrent).
We didn’t get a chance to cover this topic during the MSI Show, but in an upcoming episode of the After the Blog podcast, Angelica and I will be going deeper into this discussion. Stay tuned for that.
Additionally, there’s the other side of the coin, i.e., how to monitor and secure AI. That is a constantly expanding discussion. But you can get started with the Must Learn AI Security series (https://aka.ms/MustLearnAISecurity).
[Want to discuss this further? Hit me up on Twitter or LinkedIn]
[Subscribe to the RSS feed for this blog]
[Subscribe to the Weekly Microsoft Sentinel Newsletter]
[Subscribe to the Weekly Microsoft Defender Newsletter]
[Subscribe to the Weekly Azure OpenAI Newsletter]
[Learn KQL with the Must Learn KQL series and book]
[Learn AI Security with the Must Learn AI Security series and book]