GPTs have one major vulnerability. Anyone can do some prompt injection attacks and trick your GPT to reveal its inner working, logic and any other confidential details in your design.
Once people know them, it's super easy to replicate the exact behavior as your GPT.
In order to protect your GPT from such an attack, it's important to put some guard rails into your instructions and thoroughly test your GPT with various attacks.
To make this whole process easier for all GPT creators, I made a new GPT called "Cyber Sentinel", your expert AI prompt security testing companion.
You can ask Cyber Sentinel to "test this custom GPT..." and provide basic details of the GPT and its features.
Cyber Sentinel will give its test cases. In case your GPT fails the evaluation, you can ask for appropriate suggestions or instructions to mitigate the same.