Advanced
AI is evolving so fast, that it's leaving a trail of security flaws everywhere. 💉 From prompt injection attacks, just by receiving an email, to OpenAI's ChatGPT Atlas AI powered browser, to SaaS enabled applications such as Copilot Studio ... what can possibly go wrong !?
⚠️The risks are high because companies are eager to roll-out new AI features and users are attracted to shifting their workloads to them.
☠️In the middle, security is -at best- based on "traditional" measures that don't assume a potentially active content that is triggered by an AI feature.
🔥The AI revolution is extending the attack surface, prioritizing the advantages to the end-users ... ordinary users and bad actors alike.
🐌But security, embedded in the AI feature itself, eg. to protect against malicious prompt injection, is lagging behind.
⚡When introducing such innovative AI solutions in the Enterprise, always consider the worst-case scenario during the Risk Assessment.
❓Then make the decision if the benefits of that solution outweighs the risks -present today- when making that AI feature available to all your employees.
Exploit of Microsoft Copilot Studio 👉 https://lnkd.in/e2u8DDp6
ChatGPT Atlas Browser Jailbroken to Disguise Malicious Prompt as URLs 👉 https://lnkd.in/eRc6yDg5

