redis-rdb-cli

A tool that can parse, filter, split, and merge rdb files and analyze memory usage offline. It can also sync data between two redis instances and lets users define their own sink service to migrate redis data elsewhere.


Chat with author

Join the chat at https://gitter.im/leonchen83/redis-rdb-cli

Contact the author

chen.bao.yi@gmail.com

Binary release

binary releases

Runtime requirement

jdk 1.8+

Install

$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h

Compile requirement

jdk 1.8+
maven-3.3.1+

Compile & run

$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h

Run in docker

# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V

Build native image via graalvm in docker

$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V

Windows Environment Variables

Add /path/to/redis-rdb-cli/bin to the Path environment variable.
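For example, from a Command Prompt you could append it with setx (a sketch; C:\path\to\redis-rdb-cli is a placeholder for your actual install directory, and you can equally edit Path through the System Properties dialog):

setx PATH "%PATH%;C:\path\to\redis-rdb-cli\bin"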

Usage

Redis mass insertion

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe

Convert rdb to dump format

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof

Convert rdb to json format

$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json

Number of keys in rdb

$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv

Find top 50 largest keys

$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50

Diff rdb

$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff

Convert rdb to RESP

$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof

Sync with 2 redis

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

Sync single redis to redis cluster

$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0

Handle infinite loop in rst command

# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

Migrate rdb to remote redis

$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r

Downgrade migration

# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see the comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r

Handle big key in migration

# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in the redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r

Migrate rdb to remote redis cluster

$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r

or simply use the following command without nodes-30001.conf

$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r

Backup remote rdb

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb

Backup remote rdb and convert db to dest db

$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3

Filter rdb

$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string

Split rdb via cluster's nodes.conf

$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0

Merge multiple rdb files into one

$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash

Cut aof-use-rdb-preamble file to rdb file and aof file

$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof

Other parameter

More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf

Filter

  1. The rct, rdt and rmt commands support filtering data by type, db and key regular expressions (Java style).
  2. The rst command only supports filtering by db.

For example:

$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0

Monitor redis server

# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s "redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd" -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, log in to grafana with `admin`, `admin` and check the monitor result.

(screenshot: monitor dashboard)

Difference between rmt and rst

  1. When rmt starts, the source redis first performs BGSAVE and generates a snapshot rdb file. The rmt command migrates this snapshot file to the target redis and terminates after the process is done.
  2. rst migrates not only the snapshot rdb file but also incremental data from the source redis, so rst never terminates unless you press CTRL+C. rst only supports the db filter; for more details please refer to Limitation of migration.

Dashboard

Since v0.1.9, rct -f mem supports showing the result in a grafana dashboard like the following:
(screenshot: memory analysis dashboard)

If you want to turn it on, you MUST install docker and docker-compose first; for installation please refer to the docker documentation.
Then run the following commands:

$ cd /path/to/redis-rdb-cli/dashboard

# start
$ docker-compose up -d

# stop
$ docker-compose down

Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf and change the parameter metric_gateway from none to influxdb.

Open http://localhost:3000 to check the result of rct -f mem.

If you deploy this tool on multiple instances, you need to change the parameter metric_instance to make sure it is unique between instances.
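For example, two deployed instances might carry the following lines in their respective redis-rdb-cli.conf files (the metric_instance values here are arbitrary labels; they only need to differ between instances):

# instance A
metric_gateway=influxdb
metric_instance=instance_a

# instance B
metric_gateway=influxdb
metric_instance=instance_b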

Redis 6

Redis 6 SSL

  1. use openssl to generate keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
  2. If the source redis and target redis use the same keystore, then configure the following parameters (a conf sketch follows this list):
    set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12
    set source_keystore_pass and target_keystore_pass

  3. After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example: rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
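For example, assuming both sides share the redis.p12 generated above, the keystore-related lines in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf might look like this (the password is a placeholder; check the comments in the shipped conf file for the exact defaults):

source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=your-keystore-password
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=your-keystore-password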

Redis 6 ACL

  1. use the following URI to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0
  2. the user MUST have +@all permission to handle commands

Hack rmt

Rmt threading model

The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to the remote redis.

migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1

The most important parameter is migrate_threads=4. This means we use the following threading model to migrate data.

single redis ----> single redis

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Target Redis |
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoint |-------------------|              |
+--------------+         +----------+                   +--------------+

single redis ----> redis cluster

+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Redis cluster|
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoints|-------------------|              |
+--------------+         +----------+                   +--------------+

The difference between cluster migration and single-instance migration is Endpoint versus Endpoints. In cluster migration, Endpoints contains multiple Endpoint instances, one pointing to each master instance in the cluster. For example:

For a redis cluster with 3 masters and 3 replicas, if migrate_threads=4 then we have 3 * 4 = 12 connections to the master instances.

Migration performance

The following 3 parameters affect migration performance

migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
  1. migrate_batch_size: By default we use the redis pipeline to migrate data to the remote. migrate_batch_size is the pipeline batch size. If migrate_batch_size=1 the pipeline degenerates into sending a single command and waiting for its response from the remote.
  2. migrate_retries: migrate_retries=1 means that if a socket error occurs, we create a new socket and retry sending the failed command to the target redis up to migrate_retries times.
  3. migrate_flush: migrate_flush=yes means we write every single command to the socket and then invoke SocketOutputStream.flush() immediately. If migrate_flush=no we invoke SocketOutputStream.flush() only after every 64KB written to the socket. Notice that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes. (See the sketch after this list.)
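As an illustration of how these parameters interact, a hypothetical throughput-oriented tuning might enlarge the batch and disable per-command flushing; note that with migrate_flush=no the migrate_retries value no longer takes effect:

migrate_batch_size=8192
migrate_threads=4
migrate_flush=no
migrate_retries=1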

Migration principle

+---------------+             +-------------------+    restore      +---------------+
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |   convert   | redis dump format |---------------->|               |
|    Dump rdb   |------------>|-------------------|    restore      |  Target Redis |
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|    restore      |               |
|               |             | redis dump format |---------------->|               |
+---------------+             +-------------------+                 +---------------+

Limitation of migration

  1. We use the cluster's nodes.conf to migrate data to a cluster. Because we don't handle MOVED and ASK redirections, the limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no migrating or importing slots and no slave-to-master switches.
  2. If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH, SWAPDB, MOVE, FLUSHALL, FLUSHDB, MULTI, EXEC, SCRIPT FLUSH, SCRIPT LOAD, EVAL, EVALSHA. The following commands RPOPLPUSH, SDIFFSTORE, SINTERSTORE, SMOVE, ZINTERSTORE, ZUNIONSTORE, DEL, UNLINK, RENAME, RENAMENX, PFMERGE, PFCOUNT, MSETNX, BRPOPLPUSH, BITOP, MSET, COPY, BLMOVE, LMOVE, ZDIFFSTORE, GEOSEARCHSTORE are ONLY SUPPORTED WHEN THE COMMAND'S KEYS ARE IN THE SAME SLOT (eg: del {user}:1 {user}:2; see the example after this list).
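As an illustration of the same-slot requirement, keys that share a hash tag map to the same slot, which you can verify with redis-cli (the port here is hypothetical):

$ redis-cli -p 30001 cluster keyslot "{user}:1"
$ redis-cli -p 30001 cluster keyslot "{user}:2"
# both return the same slot, so `del {user}:1 {user}:2` is allowed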

Hack ret

What the ret command does

  1. The ret command allows users to define their own sink service, for example to sink redis data to mysql or mongodb.
  2. The ret command uses the Java SPI extension mechanism to do this job.

How to implement a sink service

User should follow the steps below to implement a sink service.

  1. create a java project using the following maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        <!--
        <dependency>
            other dependencies
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
  2. implement the SinkService interface (a fuller sketch appears at the end of this section)
public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
  3. register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService

# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService

your.package.YourSinkService
  4. package and deploy
$ mvn clean install
$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
  5. run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
  6. debug your sink service
public static void main(String[] args) throws Exception {
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}
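To make step 2 more concrete, here is a minimal sketch of a sink that simply counts events. CountingSinkService, its properties-based config handling, and the import paths for Replicator and Event (assumed from redis-replicator's package layout) are illustrative assumptions, not part of redis-rdb-cli itself.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicLong;

import com.moilioncircle.redis.rdb.cli.api.sink.SinkService;
import com.moilioncircle.redis.replicator.Replicator;
import com.moilioncircle.redis.replicator.event.Event;

public class CountingSinkService implements SinkService {

    private final Properties properties = new Properties();
    private final AtomicLong counter = new AtomicLong();

    @Override
    public String sink() {
        // the name to pass to `ret -n`
        return "counting-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // `config` is the file passed via `ret -c`; here it is parsed as simple key=value pairs
        try (FileInputStream in = new FileInputStream(config)) {
            properties.load(in);
        }
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // every parsed event arrives here; a real sink would write it to mysql, mongodb, etc.
        long n = counter.incrementAndGet();
        if (n % 10000 == 0) {
            System.out.println("processed " + n + " events");
        }
    }
}

Packaged and registered via SPI exactly like YourSinkService above, it could then be run with ret -s redis://127.0.0.1:6379 -c config.conf -n counting-sink-service.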

How to implement a formatter service

  1. create YourFormatterService extending AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
  2. register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/

|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService

# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService

your.package.YourFormatterService
  3. package and deploy
$ mvn clean install
$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
  4. run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json

Contributors

Consulting

Commercial support for redis-rdb-cli is available. The following services are currently available:

  • Onsite consulting. $10,000 per day
  • Onsite training. $10,000 per day

You may also contact Baoyi Chen directly; mail to chen.bao.yi@gmail.com.

Supported by 宁文君

27 January 2023 was a sad day on which I lost my mother 宁文君. She was always encouraging and supporting me in developing this tool. Every time a company used this tool, she got excited like a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P., and I hope God blesses her.

Supported by IntelliJ IDEA

IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.

