Managing keys inside Valkey/Redis is crucial, especially when we need to test in a different environment or restore a partial/specific key-value dataset for a migration or production move. Although we can use a full RDB snapshot + AOF to capture the complete dataset, that is not always practical when only a limited set of keys is needed.
In this blog post, we will explore how we can use some built-in options to meet these requirements.
First, let’s see how we can take a backup/dump of a specific key. Below is the key (our dataset) that we will dump and then restore.
127.0.0.1:6379> set mykey myvalue
OK
To take the dump, we will use the [--raw dump] option, which outputs the key’s value in its serialized binary format. Here we are taking a backup of the above-created key (“mykey”). The trailing “head -c -1” strips the newline that the CLI appends to the raw output, so the file contains only the binary payload.
shell> valkey-cli -p 6379 -h 127.0.0.1 --raw dump mykey | head -c -1 > mykey
shell> hexdump -C mykey
Output:
00000000  00 07 6d 79 76 61 6c 75  65 0b 00 77 6c ac 3b 18  |..myvalue..wl.;.|
00000010  35 61 78                                           |5ax|
00000013
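For reference, the DUMP payload is not a plain copy of the value: per the DUMP command documentation, it contains the value serialized in RDB format, followed by a two-byte RDB version and an eight-byte CRC64 checksum. A rough, informal breakdown of the hexdump above (annotation only, not produced by any tool):

# 00                        -> RDB object type (0 = string)
# 07 6d 79 76 61 6c 75 65   -> length-prefixed value "myvalue" (length 7)
# 0b 00                     -> RDB format version (2 bytes, little-endian)
# 77 6c ac 3b 18 35 61 78   -> CRC64 checksum of the preceding bytes (8 bytes)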
Note – The same commands work with the original Redis flavor as well; simply replace “valkey-cli” with “redis-cli” here and in the rest of the examples we discuss.
E.g.,
shell> redis-cli -p 6379 -h 127.0.0.1 --raw dump mykey | head -c -1 > mykey
Now, let’s see how we can restore the key.
Here, we are using the [restore] command to restore the dumped key under a different key name [“mynewkey”] in the currently selected database (the default, 0), with no TTL/expiry set (the “0” argument).
shell> cat mykey | valkey-cli -p 6379 -h 127.0.0.1 -x restore mynewkey 0
We can verify the restored data now as below.
shell> valkey-cli -p 6379 -h 127.0.0.1
…
127.0.0.1:6379> get mynewkey
"myvalue"
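RESTORE also accepts a non-zero TTL (in milliseconds) as its second argument, and it supports a REPLACE flag to overwrite an existing key of the same name. A minimal sketch reusing the dump file from above, with an illustrative key name “mykeyttl” and a 60000 ms TTL:

shell> cat mykey | valkey-cli -p 6379 -h 127.0.0.1 -x restore mykeyttl 60000
shell> valkey-cli -p 6379 -h 127.0.0.1 pttl mykeyttl

Keep in mind that with -x the payload is appended as the final argument of the command, so the TTL must stay immediately after the key name as shown here.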
If you are looking to match a pattern or fetch all of the keys in a specific database, the handy command-line snippet below can be useful.
Here, we are fetching all the keys from the default database[0] and copying them over to the target database[1] in the same instance. We can also copy the keys to a remote instance, as shown further below.
valkey-cli -h 127.0.0.1 -p 6379 -n 0 keys '*' | while read key; do
    echo "Copying $key"
    valkey-cli -h 127.0.0.1 -p 6379 --raw -n 0 DUMP "$key" | head -c -1 | valkey-cli -x -h 127.0.0.1 -p 6379 -n 1 RESTORE "$key" 0
done
Output:
Copying mynewkey
OK
Copying mykey
OK
Database[0]:
1 2 3 | 127.0.0.1:6379> keys * 1) "mynewkey" 2) "mykey" |
Database[1]:
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "mynewkey"
2) "mykey"
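The same loop can target a remote instance by changing the connection options of the RESTORE side. A sketch under the assumption of a hypothetical target host 192.0.2.10 listening on port 6380 (adjust host, port, and database to your environment):

valkey-cli -h 127.0.0.1 -p 6379 -n 0 keys '*' | while read key; do
    echo "Copying $key"
    # DUMP from the local source, RESTORE into the remote target instance
    valkey-cli -h 127.0.0.1 -p 6379 --raw -n 0 DUMP "$key" | head -c -1 | valkey-cli -x -h 192.0.2.10 -p 6380 -n 0 RESTORE "$key" 0
done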
Alternatively, we can use the “MIGRATE” command, which internally uses DUMP/RESTORE and comes with options like [COPY, REPLACE] to control the behavior while migrating. The important thing to remember is that if we don’t add COPY to the command, the keys are deleted from the source instance once they are migrated.
Here we are migrating the existing keys from the default database[0] to database[2] in the same instance, while setting an idle communication timeout of 10 milliseconds.
valkey-cli -h 127.0.0.1 -p 6379 --raw KEYS '*' | while read key; do
    echo "Migrating $key"
    valkey-cli -h 127.0.0.1 -p 6379 MIGRATE 127.0.0.1 6379 "$key" 2 10 COPY
done
Database[0]:
1 2 3 | 127.0.0.1:6379> keys * 1) "mykey" 2) "mykey2" |
Database[2]:
127.0.0.1:6379> select 2
OK
127.0.0.1:6379[2]> keys *
1) "mykey"
2) "mykey2"
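MIGRATE can also move several keys in a single call: pass an empty string as the key argument and list the keys after the KEYS option. A minimal sketch reusing the two keys shown above:

valkey-cli -h 127.0.0.1 -p 6379 MIGRATE 127.0.0.1 6379 "" 2 10 COPY KEYS mykey mykey2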
For larger datasets, using KEYS '*' can be inefficient and can block the workload under heavy key operations. In such cases, consider using SCAN instead for better performance and less overall impact.
Basically, SCAN iterates through the keys incrementally without blocking the server while returning a small number of keys per call.
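To illustrate the cursor behavior, here is a minimal sketch that drives SCAN manually from the shell, repeating the call with the returned cursor until it comes back as 0 (the COUNT of 100 is only a batch-size hint; host, port, and database match the local examples above):

cursor=0
while :; do
    reply=$(valkey-cli -h 127.0.0.1 -p 6379 -n 0 scan "$cursor" count 100)
    cursor=$(echo "$reply" | head -n 1)   # first line of the reply is the next cursor
    echo "$reply" | tail -n +2            # remaining lines are the keys in this batch
    [ "$cursor" = "0" ] && break          # cursor 0 means the iteration is complete
done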
With a single command-line script, we are copying the keys from the default database[0] to database[5] in the same instance. We can also run the DUMP/RESTORE steps separately, depending on the requirement.
shell> valkey-cli -h 127.0.0.1 -p 6379 -n 0 --scan | while read key; do
    echo "Copying $key";
    valkey-cli --raw -h 127.0.0.1 -p 6379 -n 0 DUMP "$key" | head -c -1 | valkey-cli -x -h 127.0.0.1 -p 6379 -n 5 RESTORE "$key" 0;
done
Output:
Copying key1
OK
Copying key
OK
Under database[5], we can see the imported keys.
127.0.0.1:6379> select 5
OK
127.0.0.1:6379[5]> keys *
1) "key1"
2) "key"
While using SCAN, we can also use the MATCH filter to check for a specific pattern.
127.0.0.1:6379> scan 0 MATCH mp*
1) "0"
2) 1) "mp"
   2) "mpj"
   3) "mp:hello"
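The same filtering is available from the command line: the --scan option accepts a --pattern argument, which is convenient when feeding a copy loop like the ones above. A small sketch using the same "mp*" pattern:

shell> valkey-cli -h 127.0.0.1 -p 6379 -n 0 --scan --pattern 'mp*'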
If we want to get insight into a more complex key type like a SET or HASH, we can use the corresponding commands [HSCAN, SSCAN, ZSCAN].
E.g.,
Below, we have a HASH type key, which can be iterated further with HSCAN to read its fields.
127.0.0.1:6379> type user:102
hash
…
127.0.0.1:6379> HSCAN user:102 0
1) "0"
2) 1) "name"
   2) "pj"
   3) "address"
   4) "India"
   5) "age"
   6) "10"
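SSCAN and ZSCAN work the same way for sets and sorted sets, and both also accept a MATCH filter. A small self-contained sketch using hypothetical keys “myset” and “myzset” (created here only for illustration):

shell> valkey-cli -h 127.0.0.1 -p 6379 SADD myset a b c
shell> valkey-cli -h 127.0.0.1 -p 6379 SSCAN myset 0
shell> valkey-cli -h 127.0.0.1 -p 6379 ZADD myzset 1 a 2 b
shell> valkey-cli -h 127.0.0.1 -p 6379 ZSCAN myzset 0 MATCH a*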
Apart from the native commands, Redis has a very useful parser/tool [rdbtools] that can parse RDB dump files and generate memory reports of the keys. However, running the same tool against a Valkey dump might not be compatible, and you can face errors like “Invalid RDB version number.”
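If the tool is not already available, it is typically installed from PyPI (a sketch, assuming a Python environment with pip; the optional python-lzf package speeds up parsing):

shell> pip install rdbtools python-lzf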
E.g.,
a) Using the JSON option of the rdb command line, we can get the keys’ information in JSON format.
shell> rdb --command json /var/lib/redis/dump.rdb
Output:
1 2 | "user:124":{"name":"tj"}, "user:123":{"name":"aj"}, |
b) Using the memory option, we can get more insight into the size and elements of the keys.
shell> rdb -c memory /var/lib/redis/dump.rdb
Output:
database,type,key,size_in_bytes,encoding,num_elements,len_largest_element,expiry
0,hash,user:124,77,ziplist,1,4,
0,hash,user:123,77,ziplist,1,4,
0,string,a,56,string,1,1,
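Both modes can also be narrowed down: the rdb command accepts filters such as --db and --key (a regular expression), which helps when only a subset of keys is of interest. A sketch reusing the example keys above:

shell> rdb --command json --key "user.*" /var/lib/redis/dump.rdb
shell> rdb -c memory --db 0 --key "user.*" /var/lib/redis/dump.rdb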
Wrap-up:
Both Redis and its open source alternative, Valkey, support the options discussed above to simplify the key copy/migrate process. We also highlighted how SCAN can be particularly helpful in busy environments, and how to gain insight into advanced key types when needed.