[BUG]: Vision models don't retain memory of images past one prompt #2585

@sheneman

How are you running AnythingLLM?

AnythingLLM desktop app

What happened?

When I upload an image file, I can use a vision model like llama3.2-vision:11b to describe it, but subsequent prompts have no memory of the image.

I would expect to be able to ask repeated questions about the image and have it remain in my current context until the context window is exhausted.
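
For illustration, here is a minimal sketch of the expected multi-turn behavior when talking to the model directly through Ollama's /api/chat endpoint (the file name photo.jpg and the follow-up question are placeholders, not anything from AnythingLLM's code): the base64-encoded image stays attached to its originating message in the chat history, so every follow-up request re-sends it and the model keeps it in context.

```python
import base64
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

def ask(history):
    """Send the full message history and append the assistant's reply to it."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.2-vision:11b", "messages": history, "stream": False},
    )
    resp.raise_for_status()
    reply = resp.json()["message"]
    history.append(reply)
    return reply["content"]

with open("photo.jpg", "rb") as f:  # hypothetical local image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# The image is attached to its originating user message and kept in the
# history, so each later call re-sends it along with the new question.
history = [{"role": "user", "content": "Describe this image.", "images": [image_b64]}]
print(ask(history))

history.append({"role": "user", "content": "What colors dominate the image?"})
print(ask(history))
```

The reported behavior suggests the attachment is only forwarded with the prompt it was uploaded in, rather than being carried in the rolling history like this.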

Are there known steps to reproduce?

No response
