
Changelog Update

In le Chat, we added a mitigation against an obfuscated prompt method that could lead to data exfiltration, reported by researchers Xiaohan Fu and Earlence Fernandes. The attack required users to willingly copy and paste adversarial prompts and provide personal data to the model. No users were impacted and no data was exfiltrated.

SECURITY