Redis rdb CLI: a CLI tool that can parse, filter, split and merge rdb files and analyze memory usage offline. It can also synchronize data between two Redis instances and lets users define their own sink service to migrate Redis data elsewhere.
Requires JDK 1.8+.
$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h
Requires JDK 1.8+ and Maven 3.3.1+.
$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h
# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V
# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V
$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V
Add /path/to/redis-rdb-cli/bin to the Path environment variable.
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof
$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json
$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv
$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50
$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff
$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0
# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r
# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r
# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb
# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"
# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r
Or simply use the following command without nodes-30001.conf:
$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3
$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string
$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0
$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -thash
$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof
More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf.
The rct, rdt and rmt commands support data filtering by type, db and key RegEx (Java style). The rst command only supports data filtering by db.
For example:
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0
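The key filter accepts Java-style regular expressions, i.e. java.util.regex syntax. A minimal sketch of how such a pattern matches keys; the pattern `user.*` and the key names here are illustrative, not part of the tool:

```java
import java.util.regex.Pattern;

public class KeyFilterSketch {
    public static void main(String[] args) {
        // A Java-style regex like the ones accepted by the key filter.
        Pattern pattern = Pattern.compile("user.*");
        // matches() requires the whole key to match the pattern.
        System.out.println(pattern.matcher("user:1").matches());
        System.out.println(pattern.matcher("order:1").matches());
    }
}
```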
# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s "redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd" -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, login to grafana using `admin`, `admin` and check the monitor result.
- When rmt starts, the source redis first does a BGSAVE and generates a snapshot rdb file. The rmt command migrates this snapshot file to the target redis; after that process is done, rmt terminates.
- rst migrates not only the snapshot rdb file but also incremental data from the source redis, so rst never terminates unless you type CTRL+C. rst only supports the db filter; for more details please refer to Limitation of migration.
Since v0.1.9, rct -f mem supports showing the result in a grafana dashboard like the following:
If you want to turn it on, you MUST install docker and docker-compose first; for installation please refer to docker.
Then run the following command:
$ cd /path/to/redis-rdb-cli/dashboard
# start
$ docker-compose up -d
# stop
$ docker-compose down
Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf and change the parameter metric_gateway from none to influxdb.
Open http://localhost:3000 to check the result of rct -f mem.
If you deploy this tool on multiple instances, you need to change the parameter metric_instance to make sure it is unique between instances.
- use openssl to generate keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
If the source redis and target redis use the same keystore, then configure the parameters source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12, and set source_keystore_pass and target_keystore_pass.
After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example:
$ rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
- use the following URI to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0
The user MUST have the +@all permission to handle commands.
The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to the remote:
migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1
The most important parameter is migrate_threads=4. This means we use the following threading model to migrate data:
single redis ----> single redis

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Target Redis |
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoint |-------------------|              |
+--------------+         +----------+                   +--------------+
single redis ----> redis cluster

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Redis cluster|
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoints|-------------------|              |
+--------------+         +----------+                   +--------------+
The difference between cluster migration and single migration is Endpoint versus Endpoints. In a cluster migration, Endpoints contains multiple Endpoint instances, one pointing to each master instance in the cluster. For example:
Given a redis cluster with 3 masters and 3 replicas, if migrate_threads=4 then we have 3 * 4 = 12 connections to the master instances.
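The threading model above can be sketched with a fixed thread pool, where each worker stands in for one Endpoint that owns its own connection. This is an illustrative sketch under those assumptions, not the tool's actual Endpoint implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadModelSketch {
    public static void main(String[] args) throws Exception {
        int migrateThreads = 4; // mirrors migrate_threads=4
        ExecutorService pool = Executors.newFixedThreadPool(migrateThreads);
        List<Future<String>> results = new ArrayList<>();
        for (int t = 0; t < migrateThreads; t++) {
            final int id = t;
            // each task stands in for one Endpoint thread replaying
            // its share of the commands over its own connection
            results.add(pool.submit(() -> "endpoint-" + id + " migrated batch"));
        }
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```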
The following 3 parameters affect migration performance:
migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
- migrate_batch_size: By default we use the redis pipeline to migrate data to the remote, and migrate_batch_size is the pipeline batch size. If migrate_batch_size=1, the pipeline degenerates into sending 1 single command and waiting for the response from the remote.
- migrate_retries: migrate_retries=1 means that if a socket error occurs, we recreate a new socket and retry sending the failed command to the target redis migrate_retries times.
- migrate_flush: migrate_flush=yes means we write every single command to the socket and then invoke SocketOutputStream.flush() immediately. With migrate_flush=no we invoke SocketOutputStream.flush() only after every 64KB written to the socket. Notice that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes.
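The two flush strategies can be illustrated with java.io.BufferedOutputStream standing in for the buffered socket stream described above. This is an illustrative sketch, not the tool's actual I/O code:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stands in for the socket
        // A 64KB buffer, mirroring the threshold used when migrate_flush=no
        BufferedOutputStream out = new BufferedOutputStream(wire, 64 * 1024);

        out.write("SET key1 val1\r\n".getBytes()); // 15 bytes, stays in the buffer
        // migrate_flush=no: nothing reaches the "socket" until 64KB accumulates
        System.out.println("before flush: " + wire.size());

        // migrate_flush=yes: flush after every command, so a retry after a socket
        // error knows exactly which command failed
        out.flush();
        System.out.println("after flush: " + wire.size());
    }
}
```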
+---------------+              +-------------------+   restore    +---------------+
|               |              | redis dump format |------------->|               |
|               |              |-------------------|   restore    |               |
|               |   convert    | redis dump format |------------->|               |
|    Dump rdb   |------------->|-------------------|   restore    |  Target Redis |
|               |              | redis dump format |------------->|               |
|               |              |-------------------|   restore    |               |
|               |              | redis dump format |------------->|               |
+---------------+              +-------------------+              +---------------+
- We use the cluster's nodes.conf to migrate data to a cluster, because we do not handle MOVED and ASK redirection. So a limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no migrating or importing slots and no slave-to-master switch.
- If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH,SWAPDB,MOVE,FLUSHALL,FLUSHDB,MULTI,EXEC,SCRIPT FLUSH,SCRIPT LOAD,EVAL,EVALSHA. And the following commands RPOPLPUSH,SDIFFSTORE,SINTERSTORE,SMOVE,ZINTERSTORE,ZUNIONSTORE,DEL,UNLINK,RENAME,RENAMENX,PFMERGE,PFCOUNT,MSETNX,BRPOPLPUSH,BITOP,MSET,COPY,BLMOVE,LMOVE,ZDIFFSTORE,GEOSEARCHSTORE are ONLY SUPPORTED WHEN THE COMMAND'S KEYS ARE IN THE SAME SLOT (eg: del {user}:1 {user}:2).
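The same-slot restriction follows from redis cluster's key hashing: a key's slot is CRC16(key) mod 16384, and when the key contains a {...} hash tag, only the tag content is hashed. A minimal sketch (not the tool's code) of why del {user}:1 {user}:2 is allowed:

```java
public class SlotSketch {
    // CRC16-CCITT (XModem), the variant redis cluster uses for key hashing
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // redis cluster hashes only the content of the first non-empty {...} hash tag
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes()) % 16384;
    }

    public static void main(String[] args) {
        // {user}:1 and {user}:2 share the hash tag "user", so same slot
        System.out.println(slot("{user}:1") == slot("{user}:2"));
        // and both equal the slot of the bare tag content
        System.out.println(slot("user") == slot("{user}:1"));
    }
}
```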
The ret command allows users to define their own sink service, for example to sink redis data to mysql or mongodb. The ret command uses the Java SPI extension to do this job.
Users should follow the steps below to implement a sink service.
- create a java project using maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        <!--
        <dependency>
            other dependencies
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
- implement the SinkService interface
public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
- register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/

|-src
|____main
|    |____resources
|    |    |____META-INF
|    |    |    |____services
|    |    |    |    |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService

# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService
your.package.YourSinkService
- package and deploy
$ mvn clean install
$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
- debug your sink service
public static void main(String[] args) throws Exception {
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}
- create YourFormatterService extending AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
- register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/

|-src
|____main
|    |____resources
|    |    |____META-INF
|    |    |    |____services
|    |    |    |    |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService

# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService
your.package.YourFormatterService
- package and deploy
$ mvn clean install
$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json
- Baoyi Chen
- Jintao Zhang
- Maz Ahmadi
- Anish Karandikar
- Air
- Raghu Nandan B S
- Special thanks to Kater Technologies
Commercial support for redis-rdb-cli is available. The following services are currently offered:
- Onsite consulting. $10,000 per day
- Onsite training. $10,000 per day
You may also contact Baoyi Chen directly; mail to chen.bao.yi@gmail.com.
27 January 2023 was a sad day: I lost my mother 宁文君. She was always encouraging and supporting me in developing this tool. Every time a company used this tool, she got excited like a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P. and I hope God blesses her.
IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.