feat: add @voltagent/redis memory storage adapter#1174
howwohmm wants to merge 1 commit into `VoltAgent:main`
Conversation
Implements `@voltagent/redis` — a new StorageAdapter backed by Redis
(via ioredis) for fast, in-memory persistence of agent conversations,
messages, working memory, and workflow state.
Key design decisions:
- Conversations stored as JSON strings keyed by `{prefix}:conv:{id}`
- Messages in sorted sets ordered by timestamp for efficient range queries
- Working memory as simple key-value (conversation or user scoped)
- Workflow state indexed by workflow ID + global sorted set
- Suspended workflows tracked via Redis sets for fast lookup
- All serialization uses safeStringify (never JSON.stringify)
- Pipeline batching for atomic multi-key operations
New files:
- packages/redis/src/memory-adapter.ts — full StorageAdapter implementation
- packages/redis/src/index.ts — exports
- packages/redis/src/memory-adapter.spec.ts — 17 unit tests
- packages/redis/package.json, tsup.config.ts, tsconfig.json, vitest.config.mts
Closes VoltAgent#18
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
📝 Walkthrough

Introduces a new `@voltagent/redis` package.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ Passed checks (5 passed)
7 issues found across 7 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/redis/src/memory-adapter.ts">
<violation number="1" location="packages/redis/src/memory-adapter.ts:84">
P2: Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing `ConversationAlreadyExistsError`.</violation>
<violation number="2" location="packages/redis/src/memory-adapter.ts:105">
P1: Multi-key Redis write batches ignore `pipeline.exec()` command results, so partial failures can silently leave records and indexes inconsistent.</violation>
<violation number="3" location="packages/redis/src/memory-adapter.ts:342">
P1: Working-memory Redis keys are built from optional IDs without validation, allowing `undefined`-scoped keys that can collide across callers.</violation>
<violation number="4" location="packages/redis/src/memory-adapter.ts:385">
P2: `queryWorkflowRuns` performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.</violation>
<violation number="5" location="packages/redis/src/memory-adapter.ts:418">
P1: `updateWorkflowState` can desynchronize Redis workflow indexes by allowing updates to index-driving fields (`workflowId`, `createdAt`) without reindexing.</violation>
</file>
<file name="packages/redis/src/memory-adapter.spec.ts">
<violation number="1" location="packages/redis/src/memory-adapter.spec.ts:93">
P2: Error-contract tests are too broad: `rejects.toThrow()` does not enforce the specific error type the test names claim to validate.</violation>
<violation number="2" location="packages/redis/src/memory-adapter.spec.ts:166">
P2: `deleteConversation` test does not assert `pipeline.exec()`, so it can pass even if queued Redis mutations are never executed.</violation>
</file>
```ts
const existing = await this.getWorkflowState(executionId);
if (!existing) return;

const updated = { ...existing, ...updates, updatedAt: new Date() };
```
P1: updateWorkflowState can desynchronize Redis workflow indexes by allowing updates to index-driving fields (workflowId, createdAt) without reindexing.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 418:
<comment>`updateWorkflowState` can desynchronize Redis workflow indexes by allowing updates to index-driving fields (`workflowId`, `createdAt`) without reindexing.</comment>
<file context>
@@ -0,0 +1,524 @@
+ const existing = await this.getWorkflowState(executionId);
+ if (!existing) return;
+
+ const updated = { ...existing, ...updates, updatedAt: new Date() };
+
+ const pipeline = this.client.pipeline();
</file context>
```ts
pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
await pipeline.exec();
```
P1: Multi-key Redis write batches ignore pipeline.exec() command results, so partial failures can silently leave records and indexes inconsistent.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 105:
<comment>Multi-key Redis write batches ignore `pipeline.exec()` command results, so partial failures can silently leave records and indexes inconsistent.</comment>
<file context>
@@ -0,0 +1,524 @@
+ pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
+ pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
+ pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
+ await pipeline.exec();
+
+ this.log("createConversation", { id: input.id });
</file context>
P1: Working-memory Redis keys are built from optional IDs without validation, allowing undefined-scoped keys that can collide across callers.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 342:
<comment>Working-memory Redis keys are built from optional IDs without validation, allowing `undefined`-scoped keys that can collide across callers.</comment>
<file context>
@@ -0,0 +1,524 @@
+ scope: WorkingMemoryScope;
+ }): Promise<string | null> {
+ const scopeKey =
+ params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
+ return this.client.get(this.key("wm", scopeKey));
+ }
</file context>
```ts
}

const results: WorkflowStateEntry[] = [];
for (const id of ids) {
```
P2: queryWorkflowRuns performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 385:
<comment>`queryWorkflowRuns` performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.</comment>
<file context>
@@ -0,0 +1,524 @@
+ }
+
+ const results: WorkflowStateEntry[] = [];
+ for (const id of ids) {
+ const state = await this.getWorkflowState(id);
+ if (!state) continue;
</file context>
```ts
// ── Conversation operations ──────────────────────────────────────────

async createConversation(input: CreateConversationInput): Promise<Conversation> {
  const existing = await this.client.exists(this.key("conv", input.id));
```
P2: Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing ConversationAlreadyExistsError.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 84:
<comment>Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing `ConversationAlreadyExistsError`.</comment>
<file context>
@@ -0,0 +1,524 @@
+ // ── Conversation operations ──────────────────────────────────────────
+
+ async createConversation(input: CreateConversationInput): Promise<Conversation> {
+ const existing = await this.client.exists(this.key("conv", input.id));
+ if (existing) {
+ throw new ConversationAlreadyExistsError(input.id);
</file context>
```ts
expect(mockPipeline.del).toHaveBeenCalledWith("test:msgs:conv-1");
expect(mockPipeline.del).toHaveBeenCalledWith("test:steps:conv-1");
expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:resource:agent-1", "conv-1");
expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:user:user-1", "conv-1");
```
P2: deleteConversation test does not assert pipeline.exec(), so it can pass even if queued Redis mutations are never executed.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.spec.ts, line 166:
<comment>`deleteConversation` test does not assert `pipeline.exec()`, so it can pass even if queued Redis mutations are never executed.</comment>
<file context>
@@ -0,0 +1,304 @@
+ expect(mockPipeline.del).toHaveBeenCalledWith("test:msgs:conv-1");
+ expect(mockPipeline.del).toHaveBeenCalledWith("test:steps:conv-1");
+ expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:resource:agent-1", "conv-1");
+ expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:user:user-1", "conv-1");
+ });
+ });
</file context>
```ts
    title: "Test",
    metadata: {},
  }),
).rejects.toThrow();
```
P2: Error-contract tests are too broad: rejects.toThrow() does not enforce the specific error type the test names claim to validate.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.spec.ts, line 93:
<comment>Error-contract tests are too broad: `rejects.toThrow()` does not enforce the specific error type the test names claim to validate.</comment>
<file context>
@@ -0,0 +1,304 @@
+ title: "Test",
+ metadata: {},
+ }),
+ ).rejects.toThrow();
+ });
+ });
</file context>
Actionable comments posted: 4
🧹 Nitpick comments (7)
packages/redis/src/memory-adapter.ts (5)
210-219: Use pipeline batching for `addMessages`.

The sequential `await` in the loop causes N round trips to Redis. Use a pipeline for better performance, consistent with other batch operations in this adapter.

♻️ Proposed batched implementation

```diff
   async addMessages(
     messages: UIMessage[],
     userId: string,
     conversationId: string,
-    context?: OperationContext,
+    _context?: OperationContext,
   ): Promise<void> {
-    for (const message of messages) {
-      await this.addMessage(message, userId, conversationId, context);
+    if (messages.length === 0) return;
+
+    const pipeline = this.client.pipeline();
+    for (const message of messages) {
+      const createdAt = (message as UIMessage & { createdAt?: Date }).createdAt ?? new Date();
+      const entry = safeStringify({
+        ...message,
+        userId,
+        conversationId,
+        createdAt: createdAt.toISOString(),
+      });
+      pipeline.zadd(this.key("msgs", conversationId), createdAt.getTime(), entry);
     }
+    await pipeline.exec();
+
+    this.log("addMessages", { count: messages.length, conversationId });
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/redis/src/memory-adapter.ts` around lines 210-219: the current addMessages method awaits addMessage in a loop, causing N Redis round trips. Change it to build and execute a single Redis pipeline: create a pipeline from the Redis client, and for each UIMessage enqueue the same Redis commands that addMessage performs (or factor the low-level command sequence into a helper used to populate the pipeline), including any score/member ZADD, HSET or EXPIRE operations and any conversation/message index updates, preserving the userId, conversationId, and context behavior; then execute pipeline.exec() and handle/report errors the same way addMessage does, so the batch is applied in one round trip.
53-67: Consider handling Redis connection events.

The adapter doesn't handle Redis connection lifecycle events (error, reconnect, close). For production use, consider exposing error events or adding connection status tracking.
Would you like me to suggest an implementation with connection event handling?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/src/memory-adapter.ts` around lines 53 - 67, Add Redis connection event handling to RedisMemoryAdapter: in the constructor after initializing this.client, attach listeners for 'error', 'reconnecting'/'connect'/'ready' and 'close' to update an internal connection status field (e.g., this.connected or this.connectionState) and to re-emit or surface errors via an EventEmitter stored on the adapter (e.g., this.events) so callers can subscribe; update the class signature to include the connection state field and an EventEmitter and ensure listeners use unique handler functions so they can be removed in a future close() or dispose() method on RedisMemoryAdapter (reference RedisMemoryAdapter, constructor, this.client, and add new this.events/this.connectionState and a close()/dispose() method).
375-398: N+1 query pattern in `queryWorkflowRuns`.

The loop fetches each workflow state individually (line 386), causing N Redis round trips after fetching the ID list. Consider batching the GET operations using a pipeline, similar to `getConversationsByIds`.

♻️ Proposed batched implementation

```diff
 async queryWorkflowRuns(query: WorkflowRunQuery): Promise<WorkflowStateEntry[]> {
   let ids: string[];
   if (query.workflowId) {
     ids = await this.client.zrevrange(this.key("wf:idx", query.workflowId), 0, -1);
   } else {
     ids = await this.client.zrevrange(this.key("wf:all"), 0, -1);
   }

+  if (ids.length === 0) return [];
+
+  // Batch fetch all workflow states
+  const pipeline = this.client.pipeline();
+  for (const id of ids) {
+    pipeline.get(this.key("wf", id));
+  }
+  const pipelineResults = await pipeline.exec();
+
   const results: WorkflowStateEntry[] = [];
-  for (const id of ids) {
-    const state = await this.getWorkflowState(id);
+  for (const [err, data] of pipelineResults ?? []) {
+    if (err || !data) continue;
+    const state = this.deserializeWorkflowState(JSON.parse(data as string));
     if (!state) continue;
     if (query.status && state.status !== query.status) continue;
     if (query.userId && state.userId !== query.userId) continue;
     if (query.from && state.createdAt < query.from) continue;
     if (query.to && state.createdAt > query.to) continue;
     results.push(state);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/src/memory-adapter.ts` around lines 375 - 398, queryWorkflowRuns does N+1 by calling getWorkflowState(id) inside the loop; replace the per-id round trips with a batched Redis pipeline/mget to fetch all workflow state entries in one call (similar to getConversationsByIds). Build a pipeline for each id using the same Redis key used by getWorkflowState (e.g. this.key("wf", id) or the underlying storage command used by getWorkflowState), execute the pipeline, parse/deserialise results into WorkflowStateEntry, then apply the existing status/userId/from/to filters and offset/limit slicing; keep getWorkflowState for single reads but implement the bulk fetch inside queryWorkflowRuns to avoid the N Redis round trips.
62-66: Simplify redundant connection handling.

Both branches create a `Redis` instance identically. The `Redis` constructor from ioredis accepts either a string URL or an options object, so the conditional is unnecessary.

♻️ Proposed simplification

```diff
-    if (typeof options.connection === "string") {
-      this.client = new Redis(options.connection);
-    } else {
-      this.client = new Redis(options.connection);
-    }
+    this.client = new Redis(options.connection);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/src/memory-adapter.ts` around lines 62 - 66, The conditional creating the Redis client is redundant: both branches call new Redis(options.connection). Replace the if/else with a single instantiation (assign this.client = new Redis(options.connection)); ensure the constructor or surrounding code accepts options.connection as either string or config object and remove the unused conditional branch (symbols: options.connection, this.client, Redis).
237-243: Consider defensive JSON parsing.

Multiple `JSON.parse` calls (lines 114, 238, 290, 321, 372, 469) lack try-catch. While data is written via `safeStringify`, corrupted or manually modified Redis data could cause unhandled exceptions. Consider wrapping in try-catch or using a safe parse utility for resilience.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/redis/src/memory-adapter.ts` around lines 237-243: wrap the JSON.parse call inside entries.map in a defensive try-catch (or replace it with a safeParse utility) so corrupted or manually modified Redis payloads don't throw; on parse failure, skip that entry (or return null) and filter out nulls afterward, and emit a warning via the module's logger (or console) including the raw entry to aid debugging; apply the same pattern to the other parse sites mentioned so the entries.map that builds messages and the UIMessage<{ createdAt: Date }> construction are resilient to bad data.

packages/redis/tsconfig.json (1)
4-4: Consider removing `dom` from `lib` for a server-side package.

The `dom` and `dom.iterable` libraries are typically unnecessary for a Redis adapter that runs exclusively in Node.js environments. Consider simplifying to just `["esnext"]`.

♻️ Suggested simplification

```diff
-    "lib": ["dom", "dom.iterable", "esnext"],
+    "lib": ["esnext"],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/tsconfig.json` at line 4: remove browser-specific libs from the TypeScript config by updating the "lib" array (currently containing "dom" and "dom.iterable") to only include server-appropriate libs such as "esnext" (or "es2020"/similar) so the Redis adapter doesn't pull in DOM typings.

packages/redis/package.json (1)

9-13: Consider adding `vitest` as an explicit devDependency.

The `vitest.config.mts` uses Vitest APIs, but `vitest` itself is not listed as a devDependency; only `@vitest/coverage-v8` is included. If the workspace provides Vitest at the root, this works, but adding it explicitly improves package portability and makes dependencies clearer.

♻️ Suggested addition

```diff
   "devDependencies": {
     "@vitest/coverage-v8": "^3.2.4",
     "@voltagent/core": "^2.4.4",
-    "ai": "^6.0.0"
+    "ai": "^6.0.0",
+    "vitest": "^3.2.4"
   },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/redis/package.json` around lines 9 - 13, Add "vitest" as an explicit devDependency in the package.json devDependencies block (alongside "@vitest/coverage-v8"), so the package doesn't rely solely on a workspace root for Vitest; pick a compatible version (e.g. "^1" or match the workspace root) and update the "devDependencies" object to include "vitest": "<version>" to ensure vitest.config.mts and any Vitest APIs resolve locally for this package.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/redis/src/memory-adapter.spec.ts`:
- Around line 173-186: The test uses the UIMessage type in the "adds a message
to the conversation sorted set" spec (the object passed to adapter.addMessage)
but UIMessage isn't imported; add a type import for UIMessage from the module
that exports it (use an import type { UIMessage } ... at the top of
memory-adapter.spec.ts) so the file compiles and the adapter.addMessage call
keeps its typed argument.
In `@packages/redis/src/memory-adapter.ts`:
- Around line 336-344: getWorkingMemory currently builds keys like
`conv:undefined` or `user:undefined` when required IDs are missing; add explicit
validation in getWorkingMemory (and mirror the same checks in setWorkingMemory
and deleteWorkingMemory) to ensure when params.scope === "conversation" that
params.conversationId is present (non-empty) and when params.scope === "user"
that params.userId is present; if the required id is missing, either throw a
clear Error (e.g., "Missing conversationId for conversation scope") or return
null and log appropriately, and only call this.client.get / set / del with a
valid key computed by this.key("wm", scopeKey) after validation.
- Around line 83-109: The createConversation method has a TOCTOU race between
the exists() check and pipeline.set; remove the pre-check and perform an atomic
"set if not exists" instead (use Redis SET with NX or SETNX via this.client or
pipeline) on this.key("conv", input.id) and, if the set fails (returns null/0),
throw ConversationAlreadyExistsError(input.id); only on successful atomic set
proceed to add the zadd entries (this.key("convs:resource", ...),
this.key("convs:user", ...), this.key("convs:all")) and exec the pipeline, keep
the same logging and return the conversation. Ensure you reference
createConversation, ConversationAlreadyExistsError, this.key("conv", ...), and
the zadd calls when making the change.
- Around line 411-416: The updateWorkflowState method currently returns silently
when getWorkflowState(executionId) returns null; change this to throw a
not-found error instead to match other adapters: in updateWorkflowState, after
const existing = await this.getWorkflowState(executionId); if (!existing) throw
a descriptive not-found error (e.g., reuse ConversationNotFoundError for
consistency with updateConversation or introduce/throw a
WorkflowStateNotFoundError) including the executionId in the message so callers
can handle missing workflow state the same way as other storage adapters.
---
Nitpick comments:
In `@packages/redis/package.json`:
- Around line 9-13: Add "vitest" as an explicit devDependency in the
package.json devDependencies block (alongside "@vitest/coverage-v8"), so the
package doesn't rely solely on a workspace root for Vitest; pick a compatible
version (e.g. "^1" or match the workspace root) and update the "devDependencies"
object to include "vitest": "<version>" to ensure vitest.config.mts and any
Vitest APIs resolve locally for this package.
In `@packages/redis/src/memory-adapter.ts`:
- Around line 210-219: The current addMessages method awaits addMessage in a
loop causing N Redis round-trips; change it to build and execute a single Redis
pipeline: create a pipeline from the Redis client, for each UIMessage enqueue
the same Redis commands that addMessage performs (or factor out the low-level
Redis command sequence into a helper and call it to populate the pipeline)
including any score/member ZADD, HSET or EXPIRE operations and any
conversation/message index updates, preserving the userId, conversationId and
context behavior; then execute pipeline.exec() and handle/report errors
similarly to addMessage so the batch is applied in one round trip.
- Around line 53-67: Add Redis connection event handling to RedisMemoryAdapter:
in the constructor after initializing this.client, attach listeners for 'error',
'reconnecting'/'connect'/'ready' and 'close' to update an internal connection
status field (e.g., this.connected or this.connectionState) and to re-emit or
surface errors via an EventEmitter stored on the adapter (e.g., this.events) so
callers can subscribe; update the class signature to include the connection
state field and an EventEmitter and ensure listeners use unique handler
functions so they can be removed in a future close() or dispose() method on
RedisMemoryAdapter (reference RedisMemoryAdapter, constructor, this.client, and
add new this.events/this.connectionState and a close()/dispose() method).
- Around line 375-398: queryWorkflowRuns does N+1 by calling
getWorkflowState(id) inside the loop; replace the per-id round trips with a
batched Redis pipeline/mget to fetch all workflow state entries in one call
(similar to getConversationsByIds). Build a pipeline for each id using the same
Redis key used by getWorkflowState (e.g. this.key("wf", id) or the underlying
storage command used by getWorkflowState), execute the pipeline,
parse/deserialise results into WorkflowStateEntry, then apply the existing
status/userId/from/to filters and offset/limit slicing; keep getWorkflowState
for single reads but implement the bulk fetch inside queryWorkflowRuns to avoid
the N Redis round trips.
- Around line 62-66: The conditional creating the Redis client is redundant:
both branches call new Redis(options.connection). Replace the if/else with a
single instantiation (assign this.client = new Redis(options.connection));
ensure the constructor or surrounding code accepts options.connection as either
string or config object and remove the unused conditional branch (symbols:
options.connection, this.client, Redis).
- Around line 237-243: Wrap the JSON.parse call inside the entries.map in a
defensive try-catch (or replace with a safeParse utility) so corrupted/manually
modified Redis payloads don't throw; on parse failure for an entry, skip that
entry (or return null) and filter out nulls afterward, and emit a warning via
the module's logger (or console) including the raw entry to aid debugging; apply
the same pattern to other parse sites (the other JSON.parse calls mentioned) so
functions like the entries.map that builds messages and the UIMessage<{
createdAt: Date }> construction are resilient to bad data.
In `@packages/redis/tsconfig.json`:
- Line 4: Remove browser-specific libs from the TypeScript config: update the
"lib" array in tsconfig.json (the "lib" property currently containing "dom" and
"dom.iterable") to only include server-appropriate libs such as "esnext" (or
"es2020"/similar) so the Redis adapter doesn't pull in DOM typings; edit the
"lib" entry accordingly in the tsconfig.json file where "lib":
["dom","dom.iterable","esnext"] is defined.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9afe0393-064a-4e5b-8c33-288d0884147f
📒 Files selected for processing (7)
- packages/redis/package.json
- packages/redis/src/index.ts
- packages/redis/src/memory-adapter.spec.ts
- packages/redis/src/memory-adapter.ts
- packages/redis/tsconfig.json
- packages/redis/tsup.config.ts
- packages/redis/vitest.config.mts
```ts
  it("adds a message to the conversation sorted set", async () => {
    await adapter.addMessage(
      { id: "msg-1", role: "user", parts: [{ type: "text", text: "hello" }] } as UIMessage,
      "user-1",
      "conv-1",
    );

    expect(mockRedis.zadd).toHaveBeenCalledWith(
      "test:msgs:conv-1",
      expect.any(Number),
      expect.stringContaining("msg-1"),
    );
  });
});
```
Missing UIMessage type import.
UIMessage is used on line 175 but is not imported. This will cause a TypeScript compilation error.
🔧 Proposed fix

```diff
 import { beforeEach, describe, expect, it, vi } from "vitest";
+import type { UIMessage } from "ai";
 import { RedisMemoryAdapter } from "./memory-adapter";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/redis/src/memory-adapter.spec.ts` around lines 173 - 186, The test
uses the UIMessage type in the "adds a message to the conversation sorted set"
spec (the object passed to adapter.addMessage) but UIMessage isn't imported; add
a type import for UIMessage from the module that exports it (use an import type
{ UIMessage } ... at the top of memory-adapter.spec.ts) so the file compiles and
the adapter.addMessage call keeps its typed argument.
```ts
async createConversation(input: CreateConversationInput): Promise<Conversation> {
  const existing = await this.client.exists(this.key("conv", input.id));
  if (existing) {
    throw new ConversationAlreadyExistsError(input.id);
  }

  const now = new Date().toISOString();
  const conversation: Conversation = {
    id: input.id,
    resourceId: input.resourceId,
    userId: input.userId,
    title: input.title,
    metadata: input.metadata,
    createdAt: now,
    updatedAt: now,
  };

  const pipeline = this.client.pipeline();
  pipeline.set(this.key("conv", input.id), safeStringify(conversation));
  pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
  pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
  pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
  await pipeline.exec();

  this.log("createConversation", { id: input.id });
  return conversation;
}
```
TOCTOU race condition in createConversation.
The exists check (line 84) and subsequent pipeline.set (line 101) are not atomic. In a distributed environment with multiple adapter instances, two concurrent calls with the same conversation ID could both pass the existence check and create duplicate entries.
Consider using SET ... NX (set-if-not-exists) to atomically check and create:
🔒 Proposed fix using atomic SET NX

```diff
 async createConversation(input: CreateConversationInput): Promise<Conversation> {
-  const existing = await this.client.exists(this.key("conv", input.id));
-  if (existing) {
-    throw new ConversationAlreadyExistsError(input.id);
-  }
-
   const now = new Date().toISOString();
   const conversation: Conversation = {
     id: input.id,
     resourceId: input.resourceId,
     userId: input.userId,
     title: input.title,
     metadata: input.metadata,
     createdAt: now,
     updatedAt: now,
   };

-  const pipeline = this.client.pipeline();
-  pipeline.set(this.key("conv", input.id), safeStringify(conversation));
+  // Atomically set only if key does not exist
+  const result = await this.client.set(
+    this.key("conv", input.id),
+    safeStringify(conversation),
+    "NX",
+  );
+
+  if (!result) {
+    throw new ConversationAlreadyExistsError(input.id);
+  }
+
+  // Index the conversation
+  const pipeline = this.client.pipeline();
   pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
   pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
   pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
   await pipeline.exec();
```
Verify each finding against the current code and only fix it if needed.
In `@packages/redis/src/memory-adapter.ts` around lines 83 - 109, The
createConversation method has a TOCTOU race between the exists() check and
pipeline.set; remove the pre-check and perform an atomic "set if not exists"
instead (use Redis SET with NX or SETNX via this.client or pipeline) on
this.key("conv", input.id) and, if the set fails (returns null/0), throw
ConversationAlreadyExistsError(input.id); only on successful atomic set proceed
to add the zadd entries (this.key("convs:resource", ...), this.key("convs:user",
...), this.key("convs:all")) and exec the pipeline, keep the same logging and
return the conversation. Ensure you reference createConversation,
ConversationAlreadyExistsError, this.key("conv", ...), and the zadd calls when
making the change.
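For illustration only, the `SET ... NX` contract the proposed fix relies on can be sketched against an in-memory stand-in (`FakeRedis` here is hypothetical, not part of the adapter or of `ioredis`; real `ioredis` returns `"OK"` on a successful write and `null` when `NX` blocks it):

```typescript
// Hypothetical in-memory stand-in that mimics the SET ... NX return
// convention: "OK" when the key was written, null when it already existed.
class FakeRedis {
  private store = new Map<string, string>();

  set(key: string, value: string, mode?: "NX"): "OK" | null {
    if (mode === "NX" && this.store.has(key)) return null;
    this.store.set(key, value);
    return "OK";
  }

  get(key: string): string | null {
    return this.store.get(key) ?? null;
  }
}

const client = new FakeRedis();
// First writer wins; a second concurrent create detects the conflict from
// the null return instead of a separate (racy) EXISTS round trip.
const first = client.set("voltagent:conv:abc", '{"id":"abc"}', "NX");
const second = client.set("voltagent:conv:abc", '{"id":"dup"}', "NX");
console.log(first, second); // OK null
```

The point of the check-and-write collapsing into one command is that no second client can slip in between the existence test and the write.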
```ts
  async getWorkingMemory(params: {
    conversationId?: string;
    userId?: string;
    scope: WorkingMemoryScope;
  }): Promise<string | null> {
    const scopeKey =
      params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
    return this.client.get(this.key("wm", scopeKey));
  }
```
Add validation for working memory scope parameters.
When scope is "conversation" but conversationId is undefined (or "user" with undefined userId), the key becomes wm:conv:undefined or wm:user:undefined, which could cause data collisions or retrieval failures.
🛡️ Proposed validation
```diff
 async getWorkingMemory(params: {
   conversationId?: string;
   userId?: string;
   scope: WorkingMemoryScope;
 }): Promise<string | null> {
+  if (params.scope === "conversation" && !params.conversationId) {
+    throw new Error("conversationId is required for conversation-scoped working memory");
+  }
+  if (params.scope === "user" && !params.userId) {
+    throw new Error("userId is required for user-scoped working memory");
+  }
   const scopeKey =
     params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
   return this.client.get(this.key("wm", scopeKey));
 }
```

Apply similar validation to setWorkingMemory and deleteWorkingMemory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/redis/src/memory-adapter.ts` around lines 336 - 344,
getWorkingMemory currently builds keys like `conv:undefined` or `user:undefined`
when required IDs are missing; add explicit validation in getWorkingMemory (and
mirror the same checks in setWorkingMemory and deleteWorkingMemory) to ensure
when params.scope === "conversation" that params.conversationId is present
(non-empty) and when params.scope === "user" that params.userId is present; if
the required id is missing, either throw a clear Error (e.g., "Missing
conversationId for conversation scope") or return null and log appropriately,
and only call this.client.get / set / del with a valid key computed by
this.key("wm", scopeKey) after validation.
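As a standalone sketch of the guard (the `scopeKeyFor` helper is hypothetical; the adapter would inline equivalent checks in each working-memory method):

```typescript
type WorkingMemoryScope = "conversation" | "user";

// Hypothetical helper: derive the scoped key segment, failing fast when the
// id required by the scope is missing instead of emitting "conv:undefined".
function scopeKeyFor(params: {
  conversationId?: string;
  userId?: string;
  scope: WorkingMemoryScope;
}): string {
  if (params.scope === "conversation") {
    if (!params.conversationId) {
      throw new Error("conversationId is required for conversation-scoped working memory");
    }
    return `conv:${params.conversationId}`;
  }
  if (!params.userId) {
    throw new Error("userId is required for user-scoped working memory");
  }
  return `user:${params.userId}`;
}

console.log(scopeKeyFor({ scope: "conversation", conversationId: "c1" })); // conv:c1
```

Centralizing the check keeps the three working-memory methods from drifting apart in how they validate scope parameters.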
```ts
  async updateWorkflowState(
    executionId: string,
    updates: Partial<WorkflowStateEntry>,
  ): Promise<void> {
    const existing = await this.getWorkflowState(executionId);
    if (!existing) return;
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check how other adapters handle missing workflow state in updateWorkflowState
rg -n -A 5 "async updateWorkflowState" --type ts -g '!**/redis/**'
```

Repository: VoltAgent/voltagent

Length of output: 2724

🏁 Script executed:

```shell
# Get full implementation of updateWorkflowState in in-memory adapter
sed -n '640,660p' packages/core/src/memory/adapters/storage/in-memory.ts
```

Length of output: 566

🏁 Script executed:

```shell
# Get full implementation of updateWorkflowState in redis adapter
sed -n '411,431p' packages/redis/src/memory-adapter.ts
```

Length of output: 799

🏁 Script executed:

```shell
# Get full implementation of updateConversation in redis adapter for comparison
rg -n -A 10 "async updateConversation" packages/redis/src/memory-adapter.ts
```

Length of output: 438

🏁 Script executed:

```shell
# Check the StorageAdapter interface definition
rg -n -B 2 -A 8 "updateWorkflowState" packages/core/src/memory/adapters/storage/index.ts
```

Length of output: 153

🏁 Script executed:

```shell
# Get full implementations from other adapters to compare
sed -n '1446,1465p' packages/supabase/src/memory-adapter.ts
```

Length of output: 542

🏁 Script executed:

```shell
# Find the storage adapter interface definition
fd -type f -name "*.ts" | xargs grep -l "interface.*StorageAdapter\|class.*StorageAdapter" | head -5
```

Length of output: 233

🏁 Script executed:

```shell
# Check postgres adapter implementation
sed -n '1469,1489p' packages/postgres/src/memory-adapter.ts
```

Length of output: 547

🏁 Script executed:

```shell
# Check libsql adapter implementation
sed -n '1292,1312p' packages/libsql/src/memory-core.ts
```

Length of output: 630

🏁 Script executed:

```shell
# Check cloudflare-d1 adapter implementation
sed -n '1516,1536p' packages/cloudflare-d1/src/memory-adapter.ts
```

Length of output: 637
Fix inconsistent error handling in updateWorkflowState.
The Redis adapter returns silently when workflow state is not found, while all other adapters (in-memory, Supabase, Postgres, Libsql, Cloudflare-d1) throw an error. Additionally, the Redis adapter's updateConversation throws ConversationNotFoundError, making error handling inconsistent even within the same adapter. Throw an error when the workflow state is not found to match the pattern across all other storage adapters.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/redis/src/memory-adapter.ts` around lines 411 - 416, The
updateWorkflowState method currently returns silently when
getWorkflowState(executionId) returns null; change this to throw a not-found
error instead to match other adapters: in updateWorkflowState, after const
existing = await this.getWorkflowState(executionId); if (!existing) throw a
descriptive not-found error (e.g., reuse ConversationNotFoundError for
consistency with updateConversation or introduce/throw a
WorkflowStateNotFoundError) including the executionId in the message so callers
can handle missing workflow state the same way as other storage adapters.
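If a dedicated error type is preferred over reusing `ConversationNotFoundError`, a minimal sketch (the `WorkflowStateNotFoundError` class name is an assumption, not an existing export of the package):

```typescript
// Hypothetical error type mirroring the adapter's ConversationNotFoundError
// pattern, carrying the executionId so callers can react to missing state.
class WorkflowStateNotFoundError extends Error {
  constructor(public readonly executionId: string) {
    super(`Workflow state not found for execution: ${executionId}`);
    this.name = "WorkflowStateNotFoundError";
  }
}

const err = new WorkflowStateNotFoundError("exec-123");
console.log(err.name, err.executionId);
```

A distinct class lets callers distinguish a missing workflow run from a missing conversation with `instanceof` rather than string matching on messages.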
Closes #18
Summary
Adds `@voltagent/redis` — a new `StorageAdapter` backed by Redis (via `ioredis`) for fast, in-memory persistence of agent memory.

What it does

Implements the full `StorageAdapter` interface: conversations, messages, working memory, workflow state, and suspended-workflow tracking.

Design decisions

- `ioredis` over `redis` (better TypeScript support, pipeline API, cluster-ready)
- `safeStringify` for all serialization (per project convention)
- Configurable `keyPrefix` (default `voltagent`)

Changes

- `packages/redis/src/memory-adapter.ts` — full `StorageAdapter` implementation
- `packages/redis/src/index.ts` — exports
- `packages/redis/src/memory-adapter.spec.ts` — 17 unit tests
- `packages/redis/package.json`, `packages/redis/tsup.config.ts`, `packages/redis/tsconfig.json`, `packages/redis/vitest.config.mts`

Test plan

- `pnpm --filter @voltagent/redis lint` — clean
- `pnpm --filter @voltagent/redis build` — clean

Co-Authored-By: Claude Opus 4.6 (1M context) [email protected]
Summary by cubic
Adds `@voltagent/redis`, a Redis-backed memory storage adapter using `ioredis` for fast persistence of conversations, messages, working memory, and workflow state. Uses sorted sets and pipeline batching for quick queries and multi-key operations.

New Features

- `StorageAdapter` implementation: conversations with resource/user/global indexes, messages in timestamp-sorted sets (range queries), working memory (user or conversation), workflow runs with suspended tracking, and conversation steps.
- Configurable `connection`, `keyPrefix` (default `voltagent`), and `debug`; serialization via `safeStringify`; clean `disconnect()`.

Dependencies

- New package `@voltagent/redis` with `ioredis`; includes exports, build/test configs, and 17 unit tests.

Written for commit 2f6f6b1. Summary will update on new commits.
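The timestamp-sorted-set pattern the summary describes can be sketched with a tiny in-memory model (this `SortedSet` class only illustrates the `ZADD`/`ZRANGEBYSCORE` idea behind the message storage; it is not `ioredis` and not part of the adapter):

```typescript
// In-memory model of a Redis sorted set: members scored by timestamp,
// queried by inclusive score range like ZRANGEBYSCORE.
class SortedSet {
  private entries: { score: number; member: string }[] = [];

  zadd(score: number, member: string): void {
    // Re-adding a member replaces its score, as in Redis.
    this.entries = this.entries.filter((e) => e.member !== member);
    this.entries.push({ score, member });
    this.entries.sort((a, b) => a.score - b.score);
  }

  zrangebyscore(min: number, max: number): string[] {
    return this.entries
      .filter((e) => e.score >= min && e.score <= max)
      .map((e) => e.member);
  }
}

const messages = new SortedSet();
messages.zadd(1000, "msg-a");
messages.zadd(3000, "msg-c");
messages.zadd(2000, "msg-b");
// Range query returns members in timestamp order regardless of insert order.
console.log(messages.zrangebyscore(1000, 2000)); // [ 'msg-a', 'msg-b' ]
```

Scoring members by millisecond timestamps is what makes "fetch messages between two points in time" a single range query instead of a scan.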
Summary by CodeRabbit
New Features
Tests