this post was submitted on 22 Oct 2024
6 points (100.0% liked)

Cybersecurity

1 readers
49 users here now

An umbrella community for all things cybersecurity / infosec. News, research, questions, are all welcome!

Rules

Community Rules

founded 2 years ago
MODERATORS
 

AI chatbots can be tricked by hackers into helping them stealing your private data.

Read more in my article on the Bitdefender blog: https://www.bitdefender.com/en-us/blog/hotforsecurity/ai-chatbots-can-be-tricked-by-hackers-into-stealing-your-data/

#cybersecurity #ai #llm

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 1 points 2 months ago

@[email protected] @[email protected] have you seen the work on using non printing characters to poison llm prompts and exfiltrate data from victims? Unicode is dangerous 🤪
https://jeredsutton.com/post/llm-unicode-prompt-injection/