Using Azure OpenAI with Microsoft Sentinel Part 1 - Getting Keys and Endpoints
Making Calls, Taking Names
If you’re currently using the Logic App connector for OpenAI ChatGPT, you might consider moving to Azure OpenAI’s implementation of ChatGPT once you gain access. I don’t want to start an argument over which is better or which is more secure, but personally I feel more comfortable working within the confines of the Azure guardrails and working with data that I can shape to my own purposes.
So, I’ve started moving all of my Microsoft Sentinel AI activity toward Azure OpenAI models. As I get deeper into it, I’ll put together additional notes in this series and post them here.
I probably won’t dig deep into Azure OpenAI, how to deploy models, and the like (except where things need to be highlighted that aren’t clear in the Docs), so for that, check out the following Docs:
One immediate thing that needs to be highlighted, though, as you start creating your own deployments: deployments happen in the Azure console, not in Azure OpenAI Studio. For whatever reason (I believe it’s an identity access thing - I just haven’t figured it out yet), creating new deployments in Azure OpenAI Studio fails, while it works just fine in the Azure console. One more thing: patience is key here. It takes a bit after a new deployment before the API is accessible. I had an instance where it took 15 minutes or more and thought I was doing something wrong. After waiting a bit longer (insert Ace Ventura quote here), everything just started working.
But to get you started on your way utilizing the Azure OpenAI ChatGPT deployment, you’ll need the API key and Endpoint. Here’s where to find them.
Keys
The API keys are located in the Azure console, in the specific instance you created under Cognitive Services | Azure OpenAI. As you can see in the image below, you have two keys, but you only need one.
Endpoint
The full and proper Endpoint is found in Azure OpenAI Studio, which can be accessed directly from the link in the Azure console as shown in the image.
Once in Azure OpenAI Studio, open the GPT-3 Playground for the model you have deployed and click the View Code option. Switch the Sample Code to curl (it defaults to Python) and copy the URL (Endpoint) provided.
This URL is specific to your deployment: #1 in the image above is your OpenAI instance name (back in the Azure console), and #2 is your Deployment name.
So, the full Endpoint would look like the following…
https://<your_instance_name>.openai.azure.com/openai/deployments/<your_deployment_name>/completions?api-version=2022-12-01
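Before wiring the key and Endpoint into a Logic App, it can help to sanity-check them with a quick test call. Here’s a minimal sketch using Python’s requests library against the Endpoint format above; the instance name, deployment name, key, and prompt are placeholders you’d swap for your own values.

```python
import requests

# Placeholders - replace with your own instance name, deployment name, and key
instance_name = "<your_instance_name>"
deployment_name = "<your_deployment_name>"
api_key = "<your_api_key>"  # Key 1 or Key 2 from the Azure console

# Same Endpoint format shown above, including the api-version
url = (
    f"https://{instance_name}.openai.azure.com/openai/deployments/"
    f"{deployment_name}/completions?api-version=2022-12-01"
)

# Azure OpenAI expects the key in the 'api-key' header (not a Bearer token)
headers = {"api-key": api_key, "Content-Type": "application/json"}
body = {"prompt": "Summarize what Microsoft Sentinel does.", "max_tokens": 100}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```

If this returns a completion, the key and Endpoint are good to drop into the Logic App below.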
Logic App
In the Logic App/Playbook in Microsoft Sentinel, make sure to follow the best-practice method of using Parameters to store your API key and Endpoint.
Once stored in a Parameter, you can then call those parameters when generating variables as shown in the next image.
These variables can then be used to access the Azure OpenAI deployment throughout the rest of the Logic App.
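If you prefer the Logic App code view, the same pattern looks roughly like the sketch below - an Initialize variable action that reads its value from a parameter. The parameter, variable, and action names here are just examples, not the actual names from my Playbook; the Endpoint variable would be set up the same way.

```json
{
  "Initialize_variable_-_OpenAI_Key": {
    "type": "InitializeVariable",
    "inputs": {
      "variables": [
        {
          "name": "OpenAIKey",
          "type": "string",
          "value": "@parameters('AzureOpenAI-Key')"
        }
      ]
    },
    "runAfter": {}
  }
}
```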
P.S. Though GPT-4 has been released and I have access to it, I’m sticking with GPT-3 for now. All the above is based on a GPT-3 deployment. The reason I’m sticking with it is that there are some changes and nuances that I’ve not tested yet, and some existing GPT-3 calls failed on GPT-4. I’ll circle back eventually to talk about GPT-4 once I’ve done more testing and figured out the adjustments.
[Want to discuss this further? Hit me up on Twitter or LinkedIn]
[Subscribe to the RSS feed for this blog]
[Subscribe to the Weekly Microsoft Sentinel Newsletter]
[Subscribe to the Weekly Microsoft Defender Newsletter]
[Learn KQL with the Must Learn KQL series and book]