feat: add @voltagent/redis memory storage adapter#1174

Open
howwohmm wants to merge 1 commit into VoltAgent:main from howwohmm:feat/redis-memory-adapter

Conversation


@howwohmm howwohmm commented Mar 22, 2026

Closes #18

Summary

Adds @voltagent/redis — a new StorageAdapter backed by Redis (via ioredis) for fast, in-memory persistence of agent memory.

What it does

import { RedisMemoryAdapter } from "@voltagent/redis";

const memory = new RedisMemoryAdapter({
  connection: "redis://localhost:6379",
  keyPrefix: "voltagent",
  debug: false,
});

Implements the full StorageAdapter interface:

  • Conversations — CRUD with sorted set indexes by resource, user, and global
  • Messages — stored in sorted sets ordered by timestamp (efficient range queries)
  • Working memory — simple key-value, scoped to conversation or user
  • Workflow state — full lifecycle with suspended state tracking via Redis sets
  • Conversation steps — ordered by timestamp in sorted sets
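The sorted-set layout described above can be sketched without a live server; the stand-in below mimics just ZADD and ZRANGEBYSCORE so the idea is runnable. Key names and message shape are illustrative, not the adapter's actual schema.

```typescript
// Minimal in-memory stand-in for the two Redis commands the message
// storage relies on: ZADD (insert with a numeric score) and
// ZRANGEBYSCORE (range query over scores).
type ScoredMember = { score: number; member: string };

class FakeSortedSets {
  private sets = new Map<string, ScoredMember[]>();

  zadd(key: string, score: number, member: string): void {
    const set = this.sets.get(key) ?? [];
    set.push({ score, member });
    set.sort((a, b) => a.score - b.score); // keep time order, like Redis
    this.sets.set(key, set);
  }

  zrangebyscore(key: string, min: number, max: number): string[] {
    return (this.sets.get(key) ?? [])
      .filter((e) => e.score >= min && e.score <= max)
      .map((e) => e.member);
  }
}

const store = new FakeSortedSets();
const convKey = "voltagent:msgs:conv-1"; // illustrative key

// Each message is serialized and scored by its timestamp.
store.zadd(convKey, 1000, JSON.stringify({ role: "user", text: "hi" }));
store.zadd(convKey, 2000, JSON.stringify({ role: "assistant", text: "hello" }));

// Range query: everything from t=1500 onward.
const recent = store.zrangebyscore(convKey, 1500, Number.MAX_SAFE_INTEGER);
console.log(recent.length); // 1
```

With real ioredis the equivalent calls are `client.zadd(key, score, member)` and `client.zrangebyscore(key, min, max)`; timestamps as scores are what make "messages since t" a single range query.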

Design decisions

  • ioredis over redis (better TypeScript support, pipeline API, cluster-ready)
  • Sorted sets for messages/steps (natural time ordering, range queries)
  • Pipeline batching for atomic multi-key operations
  • safeStringify for all serialization (per project convention)
  • Key prefix configurable (default: voltagent)

Changes

  • packages/redis/src/memory-adapter.ts: full StorageAdapter implementation
  • packages/redis/src/index.ts: package exports
  • packages/redis/src/memory-adapter.spec.ts: 17 unit tests
  • packages/redis/package.json: package config, ioredis dependency
  • packages/redis/tsup.config.ts: build config
  • packages/redis/tsconfig.json: TypeScript config
  • packages/redis/vitest.config.mts: test config

Test plan

  • 17 unit tests pass (mocked Redis client)
  • pnpm --filter @voltagent/redis lint — clean
  • pnpm --filter @voltagent/redis build — clean
  • Integration test with a real Redis instance (a docker-compose setup could be added, as in the postgres package)

Co-Authored-By: Claude Opus 4.6 (1M context) [email protected]


Summary by cubic

Adds @voltagent/redis, a Redis-backed memory storage adapter using ioredis for fast persistence of conversations, messages, working memory, and workflow state. Uses sorted sets and pipeline batching for quick queries and multi-key operations.

  • New Features

    • Full StorageAdapter: conversations with resource/user/global indexes, messages in timestamp-sorted sets (range queries), working memory (user or conversation), workflow runs with suspended tracking, and conversation steps.
    • Configurable connection, keyPrefix (default voltagent), and debug; serialization via safeStringify; clean disconnect().
  • Dependencies

    • New package @voltagent/redis with ioredis; includes exports, build/test configs, and 17 unit tests.

Written for commit 2f6f6b1. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Introduced a new Redis storage adapter for VoltAgent, enabling Redis-backed persistence for conversations, messages, workflow states, and working memory with support for filtering, sorting, pagination, and full CRUD operations.
  • Tests

    • Added comprehensive test suite validating conversation lifecycle, message management, workflow states, and memory operations.

Implements `@voltagent/redis` — a new StorageAdapter backed by Redis
(via ioredis) for fast, in-memory persistence of agent conversations,
messages, working memory, and workflow state.

Key design decisions:
- Conversations stored as JSON strings keyed by `{prefix}:conv:{id}`
- Messages in sorted sets ordered by timestamp for efficient range queries
- Working memory as simple key-value (conversation or user scoped)
- Workflow state indexed by workflow ID + global sorted set
- Suspended workflows tracked via Redis sets for fast lookup
- All serialization uses safeStringify (never JSON.stringify)
- Pipeline batching for atomic multi-key operations
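The key convention above ({prefix}:conv:{id} and friends) implies a small key builder; the review snippets show it called as this.key("conv", id). The exact implementation is an assumption, but it amounts to a colon join:

```typescript
// Hypothetical sketch of the adapter's key builder: joins a configurable
// prefix with key segments, producing keys like "voltagent:conv:conv-1".
function key(prefix: string, ...parts: string[]): string {
  return [prefix, ...parts].join(":");
}

console.log(key("voltagent", "conv", "conv-1"));        // "voltagent:conv:conv-1"
console.log(key("voltagent", "convs", "user", "user-1")); // "voltagent:convs:user:user-1"
```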

New files:
- packages/redis/src/memory-adapter.ts — full StorageAdapter implementation
- packages/redis/src/index.ts — exports
- packages/redis/src/memory-adapter.spec.ts — 17 unit tests
- packages/redis/package.json, tsup.config.ts, tsconfig.json, vitest.config.mts

Closes VoltAgent#18

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

changeset-bot bot commented Mar 22, 2026

⚠️ No Changeset found

Latest commit: 2f6f6b1

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types



coderabbitai bot commented Mar 22, 2026

📝 Walkthrough


Introduces a new @voltagent/redis package implementing a Redis-backed storage adapter for VoltAgent memory operations. The adapter supports CRUD operations for conversations, messages, conversation steps, working memory, and workflow state, with full TypeScript support, build configuration, and comprehensive test coverage.

Changes

  • Package Configuration (packages/redis/package.json, packages/redis/tsconfig.json, packages/redis/tsup.config.ts, packages/redis/vitest.config.mts): Build, test, and package configuration files for the Redis adapter package with ESM/CJS dual exports, TypeScript strict mode, and Vitest coverage setup.
  • Adapter Implementation (packages/redis/src/memory-adapter.ts): Redis-backed StorageAdapter implementation with configuration options (RedisMemoryOptions) and comprehensive methods for managing conversations, messages, steps, working memory, and workflow state using Redis sorted sets, hashes, and string keys.
  • Module Exports (packages/redis/src/index.ts): Re-exports RedisMemoryAdapter and RedisMemoryOptions as the package's public API surface.
  • Test Suite (packages/redis/src/memory-adapter.spec.ts): Vitest-based unit tests covering conversation lifecycle, message operations, working memory management, workflow state handling, and client disconnection with mocked Redis client and pipeline methods.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A Redis cache to speed the way,
Where memories dance and agents play,
Conversations bloom in sorted sets so bright,
Working memory flows at lightning's flight,
Fast retrieval, smooth and clean—
The quickest storage ever seen! 🚀

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title clearly and concisely summarizes the main change: adding a Redis-backed memory storage adapter for VoltAgent.
  • Description check ✅ Passed: The description includes the related issue link, a comprehensive summary, a detailed implementation overview, design decisions, file changes, and test plan status.
  • Linked Issues check ✅ Passed: The PR fully implements issue #18 requirements: a RedisMemoryAdapter class using ioredis, the StorageAdapter interface with CRUD for conversations/messages/workflow/steps, configurable connection/keyPrefix, serialization via safeStringify, and 17 unit tests.
  • Out of Scope Changes check ✅ Passed: All changes are scoped to the new @voltagent/redis package and directly support the Redis adapter implementation objective; no unrelated modifications detected.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.




@cubic-dev-ai cubic-dev-ai bot left a comment

7 issues found across 7 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/redis/src/memory-adapter.ts">

<violation number="1" location="packages/redis/src/memory-adapter.ts:84">
P2: Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing `ConversationAlreadyExistsError`.</violation>

<violation number="2" location="packages/redis/src/memory-adapter.ts:105">
P1: Multi-key Redis write batches ignore `pipeline.exec()` command results, so partial failures can silently leave records and indexes inconsistent.</violation>

<violation number="3" location="packages/redis/src/memory-adapter.ts:342">
P1: Working-memory Redis keys are built from optional IDs without validation, allowing `undefined`-scoped keys that can collide across callers.</violation>

<violation number="4" location="packages/redis/src/memory-adapter.ts:385">
P2: `queryWorkflowRuns` performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.</violation>

<violation number="5" location="packages/redis/src/memory-adapter.ts:418">
P1: `updateWorkflowState` can desynchronize Redis workflow indexes by allowing updates to index-driving fields (`workflowId`, `createdAt`) without reindexing.</violation>
</file>

<file name="packages/redis/src/memory-adapter.spec.ts">

<violation number="1" location="packages/redis/src/memory-adapter.spec.ts:93">
P2: Error-contract tests are too broad: `rejects.toThrow()` does not enforce the specific error type the test names claim to validate.</violation>

<violation number="2" location="packages/redis/src/memory-adapter.spec.ts:166">
P2: `deleteConversation` test does not assert `pipeline.exec()`, so it can pass even if queued Redis mutations are never executed.</violation>
</file>
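For violation 2 above: ioredis resolves pipeline.exec() to an array of [error, result] pairs, or null if the pipeline was discarded, so ignoring the return value hides partial failures. A hypothetical guard (not part of the PR) would look like:

```typescript
// Shape of ioredis's pipeline.exec() resolution: one [error, result]
// pair per queued command, or null for a discarded pipeline.
type ExecResult = [Error | null, unknown][] | null;

// Throws on the first failed command instead of silently continuing,
// so callers learn when a multi-key batch only partially applied.
function assertPipelineOk(results: ExecResult): void {
  if (results === null) throw new Error("pipeline discarded before exec");
  for (const [err] of results) {
    if (err) throw err;
  }
}

// In the adapter this would wrap each batch:
//   assertPipelineOk(await pipeline.exec());

// Simulated results, as ioredis would return them:
assertPipelineOk([[null, "OK"], [null, 1]]); // all commands succeeded

let failed = false;
try {
  assertPipelineOk([[null, "OK"], [new Error("WRONGTYPE"), null]]);
} catch {
  failed = true;
}
console.log(failed); // true
```

Note this surfaces inconsistency rather than preventing it; Redis pipelines are not transactions, so a MULTI/EXEC block would be needed for true atomicity.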


const existing = await this.getWorkflowState(executionId);
if (!existing) return;

const updated = { ...existing, ...updates, updatedAt: new Date() };
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P1: updateWorkflowState can desynchronize Redis workflow indexes by allowing updates to index-driving fields (workflowId, createdAt) without reindexing.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 418:

<comment>`updateWorkflowState` can desynchronize Redis workflow indexes by allowing updates to index-driving fields (`workflowId`, `createdAt`) without reindexing.</comment>

<file context>
@@ -0,0 +1,524 @@
+    const existing = await this.getWorkflowState(executionId);
+    if (!existing) return;
+
+    const updated = { ...existing, ...updates, updatedAt: new Date() };
+
+    const pipeline = this.client.pipeline();
</file context>

pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
await pipeline.exec();
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P1: Multi-key Redis write batches ignore pipeline.exec() command results, so partial failures can silently leave records and indexes inconsistent.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 105:

<comment>Multi-key Redis write batches ignore `pipeline.exec()` command results, so partial failures can silently leave records and indexes inconsistent.</comment>

<file context>
@@ -0,0 +1,524 @@
+    pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
+    pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
+    pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
+    await pipeline.exec();
+
+    this.log("createConversation", { id: input.id });
</file context>

@@ -0,0 +1,524 @@
/**
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P1: Working-memory Redis keys are built from optional IDs without validation, allowing undefined-scoped keys that can collide across callers.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 342:

<comment>Working-memory Redis keys are built from optional IDs without validation, allowing `undefined`-scoped keys that can collide across callers.</comment>

<file context>
@@ -0,0 +1,524 @@
+    scope: WorkingMemoryScope;
+  }): Promise<string | null> {
+    const scopeKey =
+      params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
+    return this.client.get(this.key("wm", scopeKey));
+  }
</file context>
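One way to address this finding is to validate the scope ID before the key is built, so a missing ID fails loudly instead of producing a "conv:undefined" key. A hypothetical sketch of that helper:

```typescript
type WorkingMemoryScope = "conversation" | "user";

// Builds the working-memory scope segment ("conv:<id>" or "user:<id>"),
// throwing when the ID required by the scope is absent.
function scopeKey(
  scope: WorkingMemoryScope,
  conversationId?: string,
  userId?: string,
): string {
  if (scope === "conversation") {
    if (!conversationId) throw new Error("Missing conversationId for conversation scope");
    return `conv:${conversationId}`;
  }
  if (!userId) throw new Error("Missing userId for user scope");
  return `user:${userId}`;
}

// The adapter would then call, roughly:
//   this.client.get(this.key("wm", scopeKey(params.scope, params.conversationId, params.userId)))
console.log(scopeKey("conversation", "conv-1")); // "conv:conv-1"
```

The same check would be mirrored in setWorkingMemory and deleteWorkingMemory so all three paths reject undefined-scoped keys consistently.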

}

const results: WorkflowStateEntry[] = [];
for (const id of ids) {
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P2: queryWorkflowRuns performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 385:

<comment>`queryWorkflowRuns` performs an unbounded full scan with sequential per-ID Redis reads, applying filters/pagination only after fetching everything, which can degrade badly at scale.</comment>

<file context>
@@ -0,0 +1,524 @@
+    }
+
+    const results: WorkflowStateEntry[] = [];
+    for (const id of ids) {
+      const state = await this.getWorkflowState(id);
+      if (!state) continue;
</file context>

// ── Conversation operations ──────────────────────────────────────────

async createConversation(input: CreateConversationInput): Promise<Conversation> {
const existing = await this.client.exists(this.key("conv", input.id));
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P2: Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing ConversationAlreadyExistsError.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.ts, line 84:

<comment>Conversation creation uses non-atomic EXISTS-then-SET logic, so duplicate concurrent creates for the same id can both succeed instead of one throwing `ConversationAlreadyExistsError`.</comment>

<file context>
@@ -0,0 +1,524 @@
+  // ── Conversation operations ──────────────────────────────────────────
+
+  async createConversation(input: CreateConversationInput): Promise<Conversation> {
+    const existing = await this.client.exists(this.key("conv", input.id));
+    if (existing) {
+      throw new ConversationAlreadyExistsError(input.id);
</file context>
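A common fix for this race is to replace EXISTS-then-SET with a single SET ... NX, which ioredis exposes as client.set(key, value, "NX") and which resolves to "OK" when the key was created or null when it already existed. A sketch (the adapter names come from the review snippets; the helper is hypothetical):

```typescript
// ioredis SET ... NX reply: "OK" if the key was newly created,
// null if it already existed.
type SetNxReply = "OK" | null;

function wasCreated(reply: SetNxReply): boolean {
  return reply === "OK";
}

// In the adapter this would replace the exists() pre-check, roughly:
//   const reply = await this.client.set(this.key("conv", input.id), payload, "NX");
//   if (!wasCreated(reply)) throw new ConversationAlreadyExistsError(input.id);
// Only after a successful atomic create would the index zadd calls run.

console.log(wasCreated("OK")); // true
console.log(wasCreated(null)); // false
```

Because the existence check and the write happen in one Redis command, two concurrent creates for the same id can no longer both succeed.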

expect(mockPipeline.del).toHaveBeenCalledWith("test:msgs:conv-1");
expect(mockPipeline.del).toHaveBeenCalledWith("test:steps:conv-1");
expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:resource:agent-1", "conv-1");
expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:user:user-1", "conv-1");
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P2: deleteConversation test does not assert pipeline.exec(), so it can pass even if queued Redis mutations are never executed.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.spec.ts, line 166:

<comment>`deleteConversation` test does not assert `pipeline.exec()`, so it can pass even if queued Redis mutations are never executed.</comment>

<file context>
@@ -0,0 +1,304 @@
+      expect(mockPipeline.del).toHaveBeenCalledWith("test:msgs:conv-1");
+      expect(mockPipeline.del).toHaveBeenCalledWith("test:steps:conv-1");
+      expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:resource:agent-1", "conv-1");
+      expect(mockPipeline.zrem).toHaveBeenCalledWith("test:convs:user:user-1", "conv-1");
+    });
+  });
</file context>

title: "Test",
metadata: {},
}),
).rejects.toThrow();
@cubic-dev-ai cubic-dev-ai bot Mar 22, 2026

P2: Error-contract tests are too broad: rejects.toThrow() does not enforce the specific error type the test names claim to validate.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/redis/src/memory-adapter.spec.ts, line 93:

<comment>Error-contract tests are too broad: `rejects.toThrow()` does not enforce the specific error type the test names claim to validate.</comment>

<file context>
@@ -0,0 +1,304 @@
+          title: "Test",
+          metadata: {},
+        }),
+      ).rejects.toThrow();
+    });
+  });
</file context>
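Vitest's toThrow matcher accepts an error class, so the assertion can pin the specific type, e.g. rejects.toThrow(ConversationAlreadyExistsError). A minimal synchronous illustration of why the distinction matters (ConversationAlreadyExistsError is the adapter's error from this PR; the stand-in function is hypothetical):

```typescript
class ConversationAlreadyExistsError extends Error {}

// Stand-in for the adapter path that detects an existing conversation.
function createDuplicate(): never {
  throw new ConversationAlreadyExistsError("conversation conv-1 already exists");
}

let caught: unknown;
try {
  createDuplicate();
} catch (e) {
  caught = e;
}
// A bare "did it throw?" check passes for ANY error; checking the class
// pins the contract the test name claims to validate.
console.log(caught instanceof ConversationAlreadyExistsError); // true

// In the vitest spec the same contract would be pinned with:
//   await expect(adapter.createConversation(input))
//     .rejects.toThrow(ConversationAlreadyExistsError);
```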

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (7)
packages/redis/src/memory-adapter.ts (5)

210-219: Use pipeline batching for addMessages.

The sequential await in the loop causes N round trips to Redis. Use a pipeline for better performance, consistent with other batch operations in this adapter.

♻️ Proposed batched implementation
   async addMessages(
     messages: UIMessage[],
     userId: string,
     conversationId: string,
-    context?: OperationContext,
+    _context?: OperationContext,
   ): Promise<void> {
-    for (const message of messages) {
-      await this.addMessage(message, userId, conversationId, context);
+    if (messages.length === 0) return;
+
+    const pipeline = this.client.pipeline();
+    for (const message of messages) {
+      const createdAt = (message as UIMessage & { createdAt?: Date }).createdAt ?? new Date();
+      const entry = safeStringify({
+        ...message,
+        userId,
+        conversationId,
+        createdAt: createdAt.toISOString(),
+      });
+      pipeline.zadd(this.key("msgs", conversationId), createdAt.getTime(), entry);
     }
+    await pipeline.exec();
+
+    this.log("addMessages", { count: messages.length, conversationId });
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 210 - 219, The current
addMessages method awaits addMessage in a loop causing N Redis round-trips;
change it to build and execute a single Redis pipeline: create a pipeline from
the Redis client, for each UIMessage enqueue the same Redis commands that
addMessage performs (or factor out the low-level Redis command sequence into a
helper and call it to populate the pipeline) including any score/member ZADD,
HSET or EXPIRE operations and any conversation/message index updates, preserving
the userId, conversationId and context behavior; then execute pipeline.exec()
and handle/report errors similarly to addMessage so the batch is applied in one
round trip.

53-67: Consider handling Redis connection events.

The adapter doesn't handle Redis connection lifecycle events (error, reconnect, close). For production use, consider exposing error events or adding connection status tracking.

Would you like me to suggest an implementation with connection event handling?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 53 - 67, Add Redis
connection event handling to RedisMemoryAdapter: in the constructor after
initializing this.client, attach listeners for 'error',
'reconnecting'/'connect'/'ready' and 'close' to update an internal connection
status field (e.g., this.connected or this.connectionState) and to re-emit or
surface errors via an EventEmitter stored on the adapter (e.g., this.events) so
callers can subscribe; update the class signature to include the connection
state field and an EventEmitter and ensure listeners use unique handler
functions so they can be removed in a future close() or dispose() method on
RedisMemoryAdapter (reference RedisMemoryAdapter, constructor, this.client, and
add new this.events/this.connectionState and a close()/dispose() method).

375-398: N+1 query pattern in queryWorkflowRuns.

The loop fetches each workflow state individually (line 386), causing N Redis round trips after fetching the ID list. Consider batching the GET operations using a pipeline, similar to getConversationsByIds.

♻️ Proposed batched implementation
   async queryWorkflowRuns(query: WorkflowRunQuery): Promise<WorkflowStateEntry[]> {
     let ids: string[];
     if (query.workflowId) {
       ids = await this.client.zrevrange(this.key("wf:idx", query.workflowId), 0, -1);
     } else {
       ids = await this.client.zrevrange(this.key("wf:all"), 0, -1);
     }
 
+    if (ids.length === 0) return [];
+
+    // Batch fetch all workflow states
+    const pipeline = this.client.pipeline();
+    for (const id of ids) {
+      pipeline.get(this.key("wf", id));
+    }
+    const pipelineResults = await pipeline.exec();
+
     const results: WorkflowStateEntry[] = [];
-    for (const id of ids) {
-      const state = await this.getWorkflowState(id);
+    for (const [err, data] of pipelineResults ?? []) {
+      if (err || !data) continue;
+      const state = this.deserializeWorkflowState(JSON.parse(data as string));
       if (!state) continue;
       if (query.status && state.status !== query.status) continue;
       if (query.userId && state.userId !== query.userId) continue;
       if (query.from && state.createdAt < query.from) continue;
       if (query.to && state.createdAt > query.to) continue;
       results.push(state);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 375 - 398,
queryWorkflowRuns does N+1 by calling getWorkflowState(id) inside the loop;
replace the per-id round trips with a batched Redis pipeline/mget to fetch all
workflow state entries in one call (similar to getConversationsByIds). Build a
pipeline for each id using the same Redis key used by getWorkflowState (e.g.
this.key("wf", id) or the underlying storage command used by getWorkflowState),
execute the pipeline, parse/deserialise results into WorkflowStateEntry, then
apply the existing status/userId/from/to filters and offset/limit slicing; keep
getWorkflowState for single reads but implement the bulk fetch inside
queryWorkflowRuns to avoid the N Redis round trips.

62-66: Simplify redundant connection handling.

Both branches create a Redis instance identically. The Redis constructor from ioredis accepts either a string URL or an options object, so the conditional is unnecessary.

♻️ Proposed simplification
-    if (typeof options.connection === "string") {
-      this.client = new Redis(options.connection);
-    } else {
-      this.client = new Redis(options.connection);
-    }
+    this.client = new Redis(options.connection);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 62 - 66, The conditional
creating the Redis client is redundant: both branches call new
Redis(options.connection). Replace the if/else with a single instantiation
(assign this.client = new Redis(options.connection)); ensure the constructor or
surrounding code accepts options.connection as either string or config object
and remove the unused conditional branch (symbols: options.connection,
this.client, Redis).

237-243: Consider defensive JSON parsing.

Multiple JSON.parse calls (lines 114, 238, 290, 321, 372, 469) lack try-catch. While data is written via safeStringify, corrupted or manually modified Redis data could cause unhandled exceptions. Consider wrapping in try-catch or using a safe parse utility for resilience.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 237 - 243, Wrap the
JSON.parse call inside the entries.map in a defensive try-catch (or replace with
a safeParse utility) so corrupted/manually modified Redis payloads don't throw;
on parse failure for an entry, skip that entry (or return null) and filter out
nulls afterward, and emit a warning via the module's logger (or console)
including the raw entry to aid debugging; apply the same pattern to other parse
sites (the other JSON.parse calls mentioned) so functions like the entries.map
that builds messages and the UIMessage<{ createdAt: Date }> construction are
resilient to bad data.
packages/redis/tsconfig.json (1)

4-4: Consider removing dom from lib for a server-side package.

The dom and dom.iterable libraries are typically unnecessary for a Redis adapter that runs exclusively in Node.js environments. Consider simplifying to just ["esnext"].

♻️ Suggested simplification
-    "lib": ["dom", "dom.iterable", "esnext"],
+    "lib": ["esnext"],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/tsconfig.json` at line 4, Remove browser-specific libs from
the TypeScript config: update the "lib" array in tsconfig.json (the "lib"
property currently containing "dom" and "dom.iterable") to only include
server-appropriate libs such as "esnext" (or "es2020"/similar) so the Redis
adapter doesn't pull in DOM typings; edit the "lib" entry accordingly in the
tsconfig.json file where "lib": ["dom","dom.iterable","esnext"] is defined.
packages/redis/package.json (1)

9-13: Consider adding vitest as an explicit devDependency.

The vitest.config.mts uses Vitest APIs, but vitest itself is not listed as a devDependency—only @vitest/coverage-v8 is included. If the workspace provides Vitest at the root, this works, but adding it explicitly improves package portability and makes dependencies clearer.

♻️ Suggested addition
   "devDependencies": {
     "@vitest/coverage-v8": "^3.2.4",
     "@voltagent/core": "^2.4.4",
-    "ai": "^6.0.0"
+    "ai": "^6.0.0",
+    "vitest": "^3.2.4"
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/package.json` around lines 9 - 13, Add "vitest" as an explicit
devDependency in the package.json devDependencies block (alongside
"@vitest/coverage-v8"), so the package doesn't rely solely on a workspace root
for Vitest; pick a compatible version (e.g. "^1" or match the workspace root)
and update the "devDependencies" object to include "vitest": "<version>" to
ensure vitest.config.mts and any Vitest APIs resolve locally for this package.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/redis/src/memory-adapter.spec.ts`:
- Around line 173-186: The test uses the UIMessage type in the "adds a message
to the conversation sorted set" spec (the object passed to adapter.addMessage)
but UIMessage isn't imported; add a type import for UIMessage from the module
that exports it (use an import type { UIMessage } ... at the top of
memory-adapter.spec.ts) so the file compiles and the adapter.addMessage call
keeps its typed argument.

In `@packages/redis/src/memory-adapter.ts`:
- Around line 336-344: getWorkingMemory currently builds keys like
`conv:undefined` or `user:undefined` when required IDs are missing; add explicit
validation in getWorkingMemory (and mirror the same checks in setWorkingMemory
and deleteWorkingMemory) to ensure when params.scope === "conversation" that
params.conversationId is present (non-empty) and when params.scope === "user"
that params.userId is present; if the required id is missing, either throw a
clear Error (e.g., "Missing conversationId for conversation scope") or return
null and log appropriately, and only call this.client.get / set / del with a
valid key computed by this.key("wm", scopeKey) after validation.
- Around line 83-109: The createConversation method has a TOCTOU race between
the exists() check and pipeline.set; remove the pre-check and perform an atomic
"set if not exists" instead (use Redis SET with NX or SETNX via this.client or
pipeline) on this.key("conv", input.id) and, if the set fails (returns null/0),
throw ConversationAlreadyExistsError(input.id); only on successful atomic set
proceed to add the zadd entries (this.key("convs:resource", ...),
this.key("convs:user", ...), this.key("convs:all")) and exec the pipeline, keep
the same logging and return the conversation. Ensure you reference
createConversation, ConversationAlreadyExistsError, this.key("conv", ...), and
the zadd calls when making the change.
- Around line 411-416: The updateWorkflowState method currently returns silently
when getWorkflowState(executionId) returns null; change this to throw a
not-found error instead to match other adapters: in updateWorkflowState, after
const existing = await this.getWorkflowState(executionId); if (!existing) throw
a descriptive not-found error (e.g., reuse ConversationNotFoundError for
consistency with updateConversation or introduce/throw a
WorkflowStateNotFoundError) including the executionId in the message so callers
can handle missing workflow state the same way as other storage adapters.

---

Nitpick comments:
In `@packages/redis/package.json`:
- Around line 9-13: Add "vitest" as an explicit entry in the package.json
devDependencies block (alongside "@vitest/coverage-v8") so the package doesn't
rely solely on the workspace root for Vitest. Pick a compatible version (e.g.
"^1", or match the workspace root) so that vitest.config.mts and any Vitest
APIs resolve locally for this package.
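
Concretely, the devDependencies block might look like this (both version ranges are illustrative — match whatever the workspace root pins):

```json
{
  "devDependencies": {
    "@vitest/coverage-v8": "^1.6.0",
    "vitest": "^1.6.0"
  }
}
```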

In `@packages/redis/src/memory-adapter.ts`:
- Around line 210-219: The current addMessages method awaits addMessage in a
loop causing N Redis round-trips; change it to build and execute a single Redis
pipeline: create a pipeline from the Redis client, for each UIMessage enqueue
the same Redis commands that addMessage performs (or factor out the low-level
Redis command sequence into a helper and call it to populate the pipeline)
including any score/member ZADD, HSET or EXPIRE operations and any
conversation/message index updates, preserving the userId, conversationId and
context behavior; then execute pipeline.exec() and handle/report errors
similarly to addMessage so the batch is applied in one round trip.
- Around line 53-67: Add Redis connection event handling to RedisMemoryAdapter:
in the constructor after initializing this.client, attach listeners for 'error',
'reconnecting'/'connect'/'ready' and 'close' to update an internal connection
status field (e.g., this.connected or this.connectionState) and to re-emit or
surface errors via an EventEmitter stored on the adapter (e.g., this.events) so
callers can subscribe; update the class signature to include the connection
state field and an EventEmitter and ensure listeners use unique handler
functions so they can be removed in a future close() or dispose() method on
RedisMemoryAdapter (reference RedisMemoryAdapter, constructor, this.client, and
add new this.events/this.connectionState and a close()/dispose() method).
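
One possible shape, sketched against a plain `EventEmitter` standing in for the ioredis client (which emits these event names); the `ConnectionMonitor` class name and state values are hypothetical:

```typescript
import { EventEmitter } from "node:events";

type ConnectionState = "connecting" | "ready" | "reconnecting" | "closed" | "error";

class ConnectionMonitor {
  readonly events = new EventEmitter();
  connectionState: ConnectionState = "connecting";
  private handlers: Array<[string, (...args: unknown[]) => void]> = [];

  constructor(private client: EventEmitter) {
    this.listen("ready", () => (this.connectionState = "ready"));
    this.listen("reconnecting", () => (this.connectionState = "reconnecting"));
    this.listen("close", () => (this.connectionState = "closed"));
    this.listen("error", (err) => {
      this.connectionState = "error";
      this.events.emit("error", err); // surface instead of swallowing
    });
  }

  // Each listener is kept as a unique function reference so close() can remove it.
  private listen(event: string, handler: (...args: unknown[]) => void) {
    this.handlers.push([event, handler]);
    this.client.on(event, handler);
  }

  close() {
    for (const [event, handler] of this.handlers) this.client.off(event, handler);
  }
}
```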
- Around line 375-398: queryWorkflowRuns does N+1 by calling
getWorkflowState(id) inside the loop; replace the per-id round trips with a
batched Redis pipeline/mget to fetch all workflow state entries in one call
(similar to getConversationsByIds). Build a pipeline for each id using the same
Redis key used by getWorkflowState (e.g. this.key("wf", id) or the underlying
storage command used by getWorkflowState), execute the pipeline,
parse/deserialise results into WorkflowStateEntry, then apply the existing
status/userId/from/to filters and offset/limit slicing; keep getWorkflowState
for single reads but implement the bulk fetch inside queryWorkflowRuns to avoid
the N Redis round trips.
- Around line 62-66: The conditional creating the Redis client is redundant:
both branches call new Redis(options.connection). Replace the if/else with a
single instantiation (assign this.client = new Redis(options.connection));
ensure the constructor or surrounding code accepts options.connection as either
string or config object and remove the unused conditional branch (symbols:
options.connection, this.client, Redis).
- Around line 237-243: Wrap the JSON.parse call inside the entries.map in a
defensive try-catch (or replace with a safeParse utility) so corrupted/manually
modified Redis payloads don't throw; on parse failure for an entry, skip that
entry (or return null) and filter out nulls afterward, and emit a warning via
the module's logger (or console) including the raw entry to aid debugging; apply
the same pattern to other parse sites (the other JSON.parse calls mentioned) so
functions like the entries.map that builds messages and the UIMessage<{
createdAt: Date }> construction are resilient to bad data.

In `@packages/redis/tsconfig.json`:
- Line 4: Remove browser-specific libs from the TypeScript config: update the
"lib" array in tsconfig.json (the "lib" property currently containing "dom" and
"dom.iterable") to only include server-appropriate libs such as "esnext" (or
"es2020"/similar) so the Redis adapter doesn't pull in DOM typings; edit the
"lib" entry accordingly in the tsconfig.json file where "lib":
["dom","dom.iterable","esnext"] is defined.
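
The resulting `lib` entry would look something like this (assuming no other compilerOptions change):

```json
{
  "compilerOptions": {
    "lib": ["esnext"]
  }
}
```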

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9afe0393-064a-4e5b-8c33-288d0884147f

📥 Commits

Reviewing files that changed from the base of the PR and between 98c6649 and 2f6f6b1.

📒 Files selected for processing (7)
  • packages/redis/package.json
  • packages/redis/src/index.ts
  • packages/redis/src/memory-adapter.spec.ts
  • packages/redis/src/memory-adapter.ts
  • packages/redis/tsconfig.json
  • packages/redis/tsup.config.ts
  • packages/redis/vitest.config.mts

Comment on lines +173 to +186
it("adds a message to the conversation sorted set", async () => {
  await adapter.addMessage(
    { id: "msg-1", role: "user", parts: [{ type: "text", text: "hello" }] } as UIMessage,
    "user-1",
    "conv-1",
  );

  expect(mockRedis.zadd).toHaveBeenCalledWith(
    "test:msgs:conv-1",
    expect.any(Number),
    expect.stringContaining("msg-1"),
  );
});
});

⚠️ Potential issue | 🟡 Minor

Missing UIMessage type import.

UIMessage is used on line 175 but is not imported. This will cause a TypeScript compilation error.

🔧 Proposed fix
 import { beforeEach, describe, expect, it, vi } from "vitest";
+import type { UIMessage } from "ai";
 import { RedisMemoryAdapter } from "./memory-adapter";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.spec.ts` around lines 173 - 186, The test
uses the UIMessage type in the "adds a message to the conversation sorted set"
spec (the object passed to adapter.addMessage) but UIMessage isn't imported; add
a type import for UIMessage from the module that exports it (use an import type
{ UIMessage } ... at the top of memory-adapter.spec.ts) so the file compiles and
the adapter.addMessage call keeps its typed argument.

Comment on lines +83 to +109
async createConversation(input: CreateConversationInput): Promise<Conversation> {
  const existing = await this.client.exists(this.key("conv", input.id));
  if (existing) {
    throw new ConversationAlreadyExistsError(input.id);
  }

  const now = new Date().toISOString();
  const conversation: Conversation = {
    id: input.id,
    resourceId: input.resourceId,
    userId: input.userId,
    title: input.title,
    metadata: input.metadata,
    createdAt: now,
    updatedAt: now,
  };

  const pipeline = this.client.pipeline();
  pipeline.set(this.key("conv", input.id), safeStringify(conversation));
  pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
  pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
  pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
  await pipeline.exec();

  this.log("createConversation", { id: input.id });
  return conversation;
}

⚠️ Potential issue | 🟠 Major

TOCTOU race condition in createConversation.

The exists check (line 84) and subsequent pipeline.set (line 101) are not atomic. In a distributed environment with multiple adapter instances, two concurrent calls with the same conversation ID could both pass the existence check and create duplicate entries.

Consider using SET ... NX (set-if-not-exists) to atomically check and create:

🔒 Proposed fix using atomic SET NX
   async createConversation(input: CreateConversationInput): Promise<Conversation> {
-    const existing = await this.client.exists(this.key("conv", input.id));
-    if (existing) {
-      throw new ConversationAlreadyExistsError(input.id);
-    }
-
     const now = new Date().toISOString();
     const conversation: Conversation = {
       id: input.id,
       resourceId: input.resourceId,
       userId: input.userId,
       title: input.title,
       metadata: input.metadata,
       createdAt: now,
       updatedAt: now,
     };
 
-    const pipeline = this.client.pipeline();
-    pipeline.set(this.key("conv", input.id), safeStringify(conversation));
+    // Atomically set only if key does not exist
+    const result = await this.client.set(
+      this.key("conv", input.id),
+      safeStringify(conversation),
+      "NX",
+    );
+
+    if (!result) {
+      throw new ConversationAlreadyExistsError(input.id);
+    }
+
+    // Index the conversation
+    const pipeline = this.client.pipeline();
     pipeline.zadd(this.key("convs:resource", input.resourceId), Date.now(), input.id);
     pipeline.zadd(this.key("convs:user", input.userId), Date.now(), input.id);
     pipeline.zadd(this.key("convs:all"), Date.now(), input.id);
     await pipeline.exec();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 83 - 109, The
createConversation method has a TOCTOU race between the exists() check and
pipeline.set; remove the pre-check and perform an atomic "set if not exists"
instead (use Redis SET with NX or SETNX via this.client or pipeline) on
this.key("conv", input.id) and, if the set fails (returns null/0), throw
ConversationAlreadyExistsError(input.id); only on successful atomic set proceed
to add the zadd entries (this.key("convs:resource", ...), this.key("convs:user",
...), this.key("convs:all")) and exec the pipeline, keep the same logging and
return the conversation. Ensure you reference createConversation,
ConversationAlreadyExistsError, this.key("conv", ...), and the zadd calls when
making the change.

Comment on lines +336 to +344
async getWorkingMemory(params: {
  conversationId?: string;
  userId?: string;
  scope: WorkingMemoryScope;
}): Promise<string | null> {
  const scopeKey =
    params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
  return this.client.get(this.key("wm", scopeKey));
}

⚠️ Potential issue | 🟡 Minor

Add validation for working memory scope parameters.

When scope is "conversation" but conversationId is undefined (or "user" with undefined userId), the key becomes wm:conv:undefined or wm:user:undefined, which could cause data collisions or retrieval failures.

🛡️ Proposed validation
   async getWorkingMemory(params: {
     conversationId?: string;
     userId?: string;
     scope: WorkingMemoryScope;
   }): Promise<string | null> {
+    if (params.scope === "conversation" && !params.conversationId) {
+      throw new Error("conversationId is required for conversation-scoped working memory");
+    }
+    if (params.scope === "user" && !params.userId) {
+      throw new Error("userId is required for user-scoped working memory");
+    }
     const scopeKey =
       params.scope === "conversation" ? `conv:${params.conversationId}` : `user:${params.userId}`;
     return this.client.get(this.key("wm", scopeKey));
   }

Apply similar validation to setWorkingMemory and deleteWorkingMemory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 336 - 344,
getWorkingMemory currently builds keys like `conv:undefined` or `user:undefined`
when required IDs are missing; add explicit validation in getWorkingMemory (and
mirror the same checks in setWorkingMemory and deleteWorkingMemory) to ensure
when params.scope === "conversation" that params.conversationId is present
(non-empty) and when params.scope === "user" that params.userId is present; if
the required id is missing, either throw a clear Error (e.g., "Missing
conversationId for conversation scope") or return null and log appropriately,
and only call this.client.get / set / del with a valid key computed by
this.key("wm", scopeKey) after validation.

Comment on lines +411 to +416
async updateWorkflowState(
  executionId: string,
  updates: Partial<WorkflowStateEntry>,
): Promise<void> {
  const existing = await this.getWorkflowState(executionId);
  if (!existing) return;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check how other adapters handle missing workflow state in updateWorkflowState
rg -n -A 5 "async updateWorkflowState" --type ts -g '!**/redis/**'

Repository: VoltAgent/voltagent

Length of output: 2724


🏁 Script executed:

# Get full implementation of updateWorkflowState in in-memory adapter
sed -n '640,660p' packages/core/src/memory/adapters/storage/in-memory.ts

Repository: VoltAgent/voltagent

Length of output: 566


🏁 Script executed:

# Get full implementation of updateWorkflowState in redis adapter
sed -n '411,431p' packages/redis/src/memory-adapter.ts

Repository: VoltAgent/voltagent

Length of output: 799


🏁 Script executed:

# Get full implementation of updateConversation in redis adapter for comparison
rg -n -A 10 "async updateConversation" packages/redis/src/memory-adapter.ts

Repository: VoltAgent/voltagent

Length of output: 438


🏁 Script executed:

# Check the StorageAdapter interface definition
rg -n -B 2 -A 8 "updateWorkflowState" packages/core/src/memory/adapters/storage/index.ts

Repository: VoltAgent/voltagent

Length of output: 153


🏁 Script executed:

# Get full implementations from other adapters to compare
sed -n '1446,1465p' packages/supabase/src/memory-adapter.ts

Repository: VoltAgent/voltagent

Length of output: 542


🏁 Script executed:

# Find the storage adapter interface definition
fd -type f -name "*.ts" | xargs grep -l "interface.*StorageAdapter\|class.*StorageAdapter" | head -5

Repository: VoltAgent/voltagent

Length of output: 233


🏁 Script executed:

# Check postgres adapter implementation
sed -n '1469,1489p' packages/postgres/src/memory-adapter.ts

Repository: VoltAgent/voltagent

Length of output: 547


🏁 Script executed:

# Check libsql adapter implementation
sed -n '1292,1312p' packages/libsql/src/memory-core.ts

Repository: VoltAgent/voltagent

Length of output: 630


🏁 Script executed:

# Check cloudflare-d1 adapter implementation
sed -n '1516,1536p' packages/cloudflare-d1/src/memory-adapter.ts

Repository: VoltAgent/voltagent

Length of output: 637


Fix inconsistent error handling in updateWorkflowState.

The Redis adapter returns silently when workflow state is not found, while all other adapters (in-memory, Supabase, Postgres, Libsql, Cloudflare-d1) throw an error. Additionally, the Redis adapter's updateConversation throws ConversationNotFoundError, making error handling inconsistent even within the same adapter. Throw an error when the workflow state is not found to match the pattern across all other storage adapters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/redis/src/memory-adapter.ts` around lines 411 - 416, The
updateWorkflowState method currently returns silently when
getWorkflowState(executionId) returns null; change this to throw a not-found
error instead to match other adapters: in updateWorkflowState, after const
existing = await this.getWorkflowState(executionId); if (!existing) throw a
descriptive not-found error (e.g., reuse ConversationNotFoundError for
consistency with updateConversation or introduce/throw a
WorkflowStateNotFoundError) including the executionId in the message so callers
can handle missing workflow state the same way as other storage adapters.

Development

Successfully merging this pull request may close these issues.

Redis Persistence/Caching for Agent Memory
