Wouldn’t that pose an unknown risk of sensitive data getting exposed? Let me put it this way: no database is unhackable. As far as I know, it’s all a matter of when and by whom. What’s your opinion?
In this case, you're only passing known content to ChatGPT anyway. You give it the table name (which is a common table) and what you'd like to see (e.g., bad logins). ChatGPT never touches the Sentinel instance; the data is just passed to the API through the Logic App connector.
You can further improve the security of any sensitive data, though, by using a ChatGPT deployment in Azure OpenAI instead of the public version.
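To make the data flow concrete, here is a minimal Python sketch of the kind of request the Logic App would send to an Azure OpenAI chat-completions endpoint. The endpoint URL, deployment name, and prompt wording are placeholders for illustration, not the actual Logic App configuration; the point is that the payload contains only the table name and the analyst's question, never any rows from the Sentinel workspace.

```python
import json

# Placeholders -- substitute your own Azure OpenAI resource and deployment.
AZURE_OPENAI_ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "gpt-35-turbo"
API_VERSION = "2023-05-15"


def build_request(table_name: str, intent: str) -> dict:
    """Build the chat-completions request the Logic App would POST.

    The payload holds only metadata (the table name) and the analyst's
    question in natural language -- no query results ever leave Sentinel.
    """
    prompt = (
        f"Write a KQL query against the {table_name} table "
        f"that shows {intent}."
    )
    return {
        "url": (
            f"{AZURE_OPENAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
            f"/chat/completions?api-version={API_VERSION}"
        ),
        # The api-key header would be added by the Logic App connector.
        "body": {"messages": [{"role": "user", "content": prompt}]},
    }


req = build_request("SigninLogs", "failed sign-in attempts")
print(json.dumps(req["body"], indent=2))
```

Because only this small, known prompt crosses the API boundary, the exposure surface is the prompt text itself rather than any workspace data.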