Managing keys inside Valkey/Redis is crucial, especially when we need to test in a different environment or restore a partial/specific key-value dataset for a migration or production cutover. Although a full RDB snapshot plus AOF gives us the complete dataset, that is not always practical when we only need a limited subset of keys.

In this blog post, we will explore how we can use some internal options to meet the above-mentioned requirements.

First, let’s see how we can take a backup/dump of a specific key. Below is the key/dataset that we will dump and then restore.
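As a minimal setup, we can create the key like this (only the key name "mykey" comes from the walkthrough; the value is an assumed example):

```shell
# Create a sample string key; the value "Hello" is an assumed example
valkey-cli set mykey "Hello"
```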

To take the dump, we will use the [--raw dump] option, which returns the data in its binary serialized format. Here we are taking a backup of the above-created key ("mykey").
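A sketch of what that command could look like (the output file name is an assumption; since the serialized value contains non-printable bytes, redirecting it to a file is a common way to keep it intact):

```shell
# Dump "mykey" in raw (binary) mode and save the serialized value to a file.
# Note: valkey-cli appends a trailing newline to the output, which may need
# stripping before the value can be fed back to RESTORE.
valkey-cli --raw dump mykey > mykey.dump
```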

Output:

Note – The same command should work in the original Redis flavor as well. All we need to do is just replace the “valkey-cli” with the “redis-cli” in this and the further content we are going to discuss.

E.g.,
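The same dump taken with the original Redis CLI would look like this (again assuming the file name):

```shell
# Identical command, only the client binary changes
redis-cli --raw dump mykey > mykey.dump
```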

Now, let’s see how we can restore the key.

Here, we are using the [restore] option to restore the dumped key under a different key name ["mynewkey"] in the implicit database, with no TTL/expiry set ("0").
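A sketch of the restore step; "&lt;serialized-value&gt;" stands in for the binary payload produced earlier by DUMP (it is not reproduced here, as it contains raw bytes):

```shell
# Restore the serialized value under a new key name, with TTL 0 (no expiry).
# "<serialized-value>" is a placeholder for the binary output of DUMP.
valkey-cli restore mynewkey 0 "<serialized-value>"
```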

We can verify the restored data now as below.
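For example, assuming the sample string value used above, a simple GET confirms the restore:

```shell
# Read back the restored key
valkey-cli get mynewkey
```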

Now, if you are looking to match a pattern or fetch all of the keys in a specific database, the handy command-line snippet below can be useful.

Here, we are fetching all the keys from the default database [0] and copying them over to the target database [1] on the same instance. We can also copy the keys to a remote instance.
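Since the original snippet was shown as an image, here is one possible form of it, using the COPY command (available in Redis 6.2+ and Valkey) so the source keys are preserved; the exact loop shape is an assumption:

```shell
# Copy every key from DB 0 to DB 1 on the same instance (source keys are kept)
valkey-cli -n 0 keys '*' | while read -r key; do
  valkey-cli -n 0 copy "$key" "$key" db 1
done
```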

Output:

Database[0]:

Database[1]:

Alternatively, we can also use the "MIGRATE" command, which internally uses DUMP/RESTORE and comes with options like [COPY, REPLACE] to control the behavior while migrating. The important thing to remember is that if we don’t add COPY to the command, the keys will be deleted from the source instance once they are migrated.

Here we are migrating the existing keys from the default database [0] to database [2] on the same instance, with a timeout of 10 milliseconds.
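One way that could look on the command line (the host/port are assumed local defaults; the original snippet was a screenshot):

```shell
# Migrate each key from DB 0 to DB 2 on the same instance.
# MIGRATE args: host port key destination-db timeout-ms,
# plus COPY so the keys are NOT deleted from the source.
valkey-cli -n 0 keys '*' | while read -r key; do
  valkey-cli -n 0 migrate 127.0.0.1 6379 "$key" 2 10 copy
done
```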

Database[0]:

Database[2]:

For larger datasets, using KEYS '*' can be inefficient and may block the server under heavy key operations. In that case, consider using SCAN instead for better performance and lower overall impact.

Basically, SCAN iterates through the keys incrementally without blocking the server while returning a small number of keys per call.

With a single command-line script, we copy the keys from the default database [0] to database [5] on the same instance. We can also run DUMP/RESTORE separately, depending on the requirement.
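One way to sketch that script is with the client's --scan helper (which iterates with SCAN under the hood) combined with COPY; the original snippet was an image, so this is an assumed equivalent:

```shell
# Iterate keys in DB 0 incrementally (non-blocking) and copy each into DB 5
valkey-cli -n 0 --scan | while read -r key; do
  valkey-cli -n 0 copy "$key" "$key" db 5
done
```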

Output:

Under database[5], we can see the imported keys.

While using SCAN, we can also use the MATCH filter to check for a specific pattern.
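For example (the pattern "my*" is an assumption, chosen to match the sample keys used earlier):

```shell
# Start a SCAN at cursor 0, returning only keys matching the pattern.
# COUNT is a hint for how many keys to examine per iteration.
valkey-cli scan 0 match 'my*' count 100
```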

If we want to get insight into a complex key type like a SET or HASH, we can use options like [HSCAN, SSCAN, ZSCAN].

E.g.,

Below, we have a HASH-type key, whose elements can be read further with HSCAN.
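A small sketch (the hash name "myhash" and its field/value pairs are assumed examples):

```shell
# Create a sample hash, then iterate its fields and values with HSCAN
valkey-cli hset myhash field1 "a" field2 "b"
valkey-cli hscan myhash 0
```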

Apart from the native commands, there is a very useful parser/tool for Redis, [rdbtools], which offers RDB dump file parsing and memory reports of the keys. However, running it against Valkey may not be compatible, and you can face errors like "Invalid RDB version number."

E.g.,

a) Using the JSON option in the rdb command line, we can get the KEYS information in JSON format.
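For example (the dump file path is an assumed location):

```shell
# Parse an RDB snapshot with rdbtools and print its contents as JSON
rdb --command json /var/lib/valkey/dump.rdb
```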

Output:

b) Using the memory option, we can get more insight into the size and elements of the keys.
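For example (again assuming the dump file path, and an arbitrary output file name):

```shell
# Generate a per-key memory report (CSV) from the RDB dump file
rdb --command memory /var/lib/valkey/dump.rdb > memory_report.csv
```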

Output:

Wrap-up:

Both Redis and its open source alternative, Valkey, support the discussed options to simplify the key copy/migrate process. We also highlighted how SCAN usage can be particularly helpful, especially in busy environments or when insights into advanced key types are needed.
