I read a very good article this morning by Simon Willison about the implications of LLMs on security, which made me think quite a bit more about the implications of prompt injection on the usability of AI LLMs going into the future. Prompts === DataAs Simon says, the main issue that underlies all of this is the inability the current architecture of LLMs to separate the task being given (eg. âsumma
{{#tags}}- {{label}}
{{/tags}}