English | 简体中文
A cross-platform real-time file synchronization tool out of the box based on Golang.
First you need Go installed (version 1.23+ is required), then you can use the command below to install gofs.
go install github.com/no-src/gofs/...@latest
You can use the build-docker.sh script to build the docker image. You should clone this repository and cd to the root path of the repository first.
$ ./scripts/build-docker.sh
Or pull the docker image directly from DockerHub with the command below.
$ docker pull nosrc/gofs
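If you just want to try the image, here is a minimal sketch (assuming you mount your local source and dest directories into the container and the image provides the gofs binary on its PATH); adjust the paths to your environment.
$ docker run -it --rm -v "$PWD/source:/source" -v "$PWD/dest:/dest" nosrc/gofs:latest gofs -source=/source -dest=/dest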
For more scripts about release and docker, see the scripts directory.
You can install a program that runs in the background using the following command on Windows.
go install -ldflags="-H windowsgui" github.com/no-src/gofs/...@latest
Please ensure the source directory and dest directory exist first, and replace the following paths with your real paths.
$ mkdir source dest
Generate the TLS cert file and key file for testing purposes.
The TLS cert and key files are just used by File Server and Remote Disk Server.
$ go run $GOROOT/src/crypto/tls/generate_cert.go --host 127.0.0.1
2021/12/30 17:21:54 wrote cert.pem
2021/12/30 17:21:54 wrote key.pem
Look up our workspace.
$ ls
cert.pem  key.pem  source  dest
Synchronize files between disks by Local Disk.
sequenceDiagram
    participant DA as DiskA
    participant C as Client
    participant DB as DiskB
    autonumber
    C ->> DA: monitor disk
    DA ->> C: notify change
    C ->> DA: read file
    DA ->> C: return file
    C ->> DB: write file
Synchronize files from server by Remote Disk Server and Remote Disk Client.
sequenceDiagram
    participant SD as Server Disk
    participant S as Server
    participant C as Client
    participant CD as Client Disk
    autonumber
    S ->> SD: monitor disk
    C ->> S: connect and auth
    SD ->> S: notify change
    S ->> C: notify change
    C ->> S: pull file
    S ->> SD: read file
    SD ->> S: return file
    S ->> C: send file
    C ->> CD: write file
Synchronize files to server by Remote Push Server and Remote Push Client.
sequenceDiagram
    participant CD as Client Disk
    participant C as Client
    participant S as Server
    participant SD as Server Disk
    autonumber
    C ->> CD: monitor disk
    CD ->> C: notify change
    C ->> CD: read file
    CD ->> C: return file
    C ->> S: push file
    S ->> SD: write file
Synchronize files from SFTP server by SFTP Pull Client.
sequenceDiagram
    participant CD as Client Disk
    participant C as Client
    participant SS as SFTP Server
    participant SSD as SFTP Server Disk
    autonumber
    C ->> SS: pull file
    SS ->> SSD: read file
    SSD ->> SS: return file
    SS ->> C: send file
    C ->> CD: write file
Synchronize files to SFTP server by SFTP Push Client.
sequenceDiagram
    participant CD as Client Disk
    participant C as Client
    participant SS as SFTP Server
    participant SSD as SFTP Server Disk
    autonumber
    C ->> CD: monitor disk
    CD ->> C: notify change
    C ->> CD: read file
    CD ->> C: return file
    C ->> SS: push file
    SS ->> SSD: write file
Synchronize files from MinIO server by MinIO Pull Client.
sequenceDiagram
    participant CD as Client Disk
    participant C as Client
    participant MS as MinIO Server
    participant MSD as MinIO Server Disk
    autonumber
    C ->> MS: pull file
    MS ->> MSD: read file
    MSD ->> MS: return file
    MS ->> C: send file
    C ->> CD: write file
Synchronize files to MinIO server by MinIO Push Client.
sequenceDiagram
    participant CD as Client Disk
    participant C as Client
    participant MS as MinIO Server
    participant MSD as MinIO Server Disk
    autonumber
    C ->> CD: monitor disk
    CD ->> C: notify change
    C ->> CD: read file
    CD ->> C: return file
    C ->> MS: push file
    MS ->> MSD: write file
Start a Task Client to subscribe to the Task Server, then acquire the task and execute it, taking the From Server scenario for example.
sequenceDiagram
    participant A as Admin
    participant TS as Task Server
    participant SD as Server Disk
    participant S as Server
    participant TC as Task Client
    participant CW as Client Worker
    participant CD as Client Disk
    participant TQ as Task Queue
    autonumber
    S ->> SD: monitor disk
    S ->> TS: start task server
    A ->> TS: create task
    TC ->> TS: subscribe task
    TS ->> TC: distribute task
    TC ->> CW: start worker
    CW ->> TQ: add to task queue
    TQ ->> CW: execute task
    activate CW
    CW ->> S: connect and auth
    loop
        SD ->> S: notify change
        S ->> CW: notify change
        CW ->> S: pull file
        S ->> SD: read file
        SD ->> S: return file
        S ->> CW: send file
        CW ->> CD: write file
    end
    deactivate CW
Monitor the source directory and sync the changed files to the dest directory.
You can use the logically_delete flag to enable logical deletion and avoid deleting files by mistake.
Set the checkpoint_count flag to use checkpoints in the file to reduce the transfer of unmodified file chunks. By default checkpoint_count=10, which means there are 10+2 checkpoints at most. There are two additional checkpoints at the head and tail. The first checkpoint is equal to the chunk_size and is optional. The last checkpoint is equal to the file size and is required. The checkpoint offsets produced by checkpoint_count are always greater than chunk_size, unless the file size is less than or equal to chunk_size, in which case checkpoint_count becomes zero, so it is optional.
By default, if the file size and modification time of the source file are equal to those of the destination file, the current file transfer is skipped. You can use the force_checksum flag to force a checksum comparison to determine whether the files are equal or not.
The default checksum hash algorithm is md5; you can use the checksum_algorithm flag to change the default hash algorithm. Currently supported algorithms: md5, sha1, sha256, sha512, crc32, crc64, adler32, fnv-1-32, fnv-1a-32, fnv-1-64, fnv-1a-64, fnv-1-128, fnv-1a-128.
If you want to reduce the frequency of synchronization, you can use the sync_delay flag to enable sync delay: syncing starts when the event count is greater than or equal to sync_delay_events, or after the sync_delay_time interval has passed since the last sync.
You can also use the progress flag to print the file sync progress bar.
$ gofs -source=./source -dest=./dest
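The flags above can be combined as needed; the following invocation is only an illustrative sketch (the values are placeholders, not recommendations) that enables logical deletion, checksum comparison, sync delay and the progress bar.
$ gofs -source=./source -dest=./dest -logically_delete -force_checksum -checksum_algorithm=sha256 -sync_delay -sync_delay_events=10 -sync_delay_time=30s -progress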
You can use the encrypt flag to enable encryption and specify a directory as the encryption workspace with the encrypt_path flag. All files in that directory will be encrypted and then synced to the destination path.
$ gofs -source=./source -dest=./dest -encrypt -encrypt_path=./source/encrypt -encrypt_secret=mysecret_16bytes
You can use the decrypt flag to decrypt the encrypted files to a specified path.
$ gofs -decrypt -decrypt_path=./dest/encrypt -decrypt_secret=mysecret_16bytes -decrypt_out=./decrypt_out
Sync the whole path immediately from source directory to dest directory.
$ gofs -source=./source -dest=./dest -sync_once
Sync the whole path from source directory to dest directory with cron.
# Sync the whole path from the source directory to the dest directory every 30 seconds
$ gofs -source=./source -dest=./dest -sync_cron="*/30 * * * * *"
Start a daemon that creates a subprocess to do the work, and record the pid info to a pid file.
$ gofs -source=./source -dest=./dest -daemon -daemon_pid
Start a file server for source directory and dest directory.
The file server uses HTTPS by default; set the tls_cert_file and tls_key_file flags to customize the cert file and key file.
You can disable HTTPS by setting the tls flag to false if you don't need it.
If you set tls to true, the file server's default port is 443, otherwise it is 80, and you can customize the default port with the server_addr flag, like -server_addr=":443".
If you enable the tls flag on the server side, you can control whether a client skips verifying the server's certificate chain and host name with the tls_insecure_skip_verify flag, default is true.
If you have enabled the tls flag, you can use the http3 flag to enable the HTTP3 protocol on both the server and client sides.
For security reasons, you should set the rand_user_count flag to auto-generate some random users or set the users flag to customize the server users.
The server users will be output to the log if you set the rand_user_count flag to a value greater than zero.
If you need to compress the files, add the server_compress flag to enable gzip compression for responses, but it is not fast for now and may reduce transmission efficiency on a LAN.
You can switch the session store mode for the file server with the session_connection flag; it currently supports memory and redis, default is memory. If you want to use redis as the session store, here is an example redis session connection string: redis://127.0.0.1:6379?password=redis_password&db=10&max_idle=10&secret=redis_secret.
# Start a file server and create three random users
# Replace the `tls_cert_file` and `tls_key_file` flags with your real cert files in the production environment
$ gofs -source=./source -dest=./dest -server -tls_cert_file=cert.pem -tls_key_file=key.pem -rand_user_count=3
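As an illustrative variant (not part of the original example), the same server can be started with a fixed user and the redis session store described above.
$ gofs -source=./source -dest=./dest -server -tls_cert_file=cert.pem -tls_key_file=key.pem -users="gofs|password|r" -session_connection="redis://127.0.0.1:6379?password=redis_password&db=10&max_idle=10&secret=redis_secret"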
Use the max_tran_rate flag to limit the max transmission rate on the server and client sides; this is an expected value, not an absolute one.
For example, limit the max transmission rate to 1048576 bytes, which means 1MB.
$ gofs -source=./source -dest=./dest -max_tran_rate=1048576
Start a remote disk server as a remote file source.
For details of the source flag, see the Remote Server Source Protocol.
Note that remote disk server users must have at least read permission, for example, -users="gofs|password|r".
You can use the checkpoint_count and sync_delay flags as in Local Disk.
# Start a remote disk server
# Replace the `tls_cert_file` and `tls_key_file` flags with your real cert files in the production environment
# Replace the `users` flag with complex username and password for security
$ gofs -source="rs://127.0.0.1:8105?mode=server&local_sync_disabled=true&path=./source&fs_server=https://127.0.0.1" -dest=./dest -users="gofs|password|r" -tls_cert_file=cert.pem -tls_key_file=key.pem -token_secret=mysecret_16bytes
Start a remote disk client to sync the changed files from the remote disk server.
For details of the source flag, see the Remote Server Source Protocol.
Use the sync_once flag to sync the whole path immediately from the remote disk server to the local dest directory, like Sync Once.
Use the sync_cron flag to sync the whole path from the remote disk server to the local dest directory with cron, like Sync Cron.
Use the force_checksum flag to force a checksum comparison to determine whether the file is equal or not, like Local Disk.
You can use the sync_delay flag as in Local Disk.
# Start a remote disk client
# Replace the `users` flag with your real username and password
$ gofs -source="rs://127.0.0.1:8105" -dest=./dest -users="gofs|password" -tls_cert_file=cert.pem
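For example (an illustrative variant), add the sync_once flag to mirror the whole path from the remote disk server immediately.
$ gofs -source="rs://127.0.0.1:8105" -dest=./dest -users="gofs|password" -tls_cert_file=cert.pem -sync_once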
Start a Remote Disk Server as a remote file source, then enable the remote push server with the push_server flag.
Note that remote push server users must have at least read and write permission, for example, -users="gofs|password|rw".
# Start a remote disk server and enable the remote push server
# Replace the `tls_cert_file` and `tls_key_file` flags with your real cert files in the production environment
# Replace the `users` flag with complex username and password for security
$ gofs -source="rs://127.0.0.1:8105?mode=server&local_sync_disabled=true&path=./source&fs_server=https://127.0.0.1" -dest=./dest -users="gofs|password|rw" -tls_cert_file=cert.pem -tls_key_file=key.pem -push_server -token_secret=mysecret_16bytes
Start a remote push client to sync the changed files to the Remote Push Server.
Use the chunk_size flag to set the chunk size for uploading big files. The default value of chunk_size is 1048576, which means 1MB.
You can use the checkpoint_count and sync_delay flags as in Local Disk.
For more flag usage, see Remote Disk Client.
# Start a remote push client and enable local disk sync, sync the file changes from source path to the local dest path and the remote push server
# Replace the `users` flag with your real username and password
$ gofs -source="./source" -dest="rs://127.0.0.1:8105?local_sync_disabled=false&path=./dest" -users="gofs|password" -tls_cert_file=cert.pem
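As an illustrative sketch, you could raise the upload chunk size to 4MB with the chunk_size flag mentioned above.
$ gofs -source="./source" -dest="rs://127.0.0.1:8105?local_sync_disabled=false&path=./dest" -users="gofs|password" -tls_cert_file=cert.pem -chunk_size=4194304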
Start an SFTP push client to sync the changed files to the SFTP server.
$ gofs -source="./source" -dest="sftp://127.0.0.1:22?local_sync_disabled=false&path=./dest&remote_path=/gofs_sftp_server&ssh_user=sftp_user&ssh_pass=sftp_pwd"
Start an SFTP pull client to pull the files from the SFTP server to the local destination path.
$ gofs -source="sftp://127.0.0.1:22?remote_path=/gofs_sftp_server&ssh_user=sftp_user&ssh_pass=sftp_pwd" -dest="./dest" -sync_once
Start a MinIO push client to sync the changed files to the MinIO server.
$ gofs -source="./source" -dest="minio://127.0.0.1:9000?secure=false&local_sync_disabled=false&path=./dest&remote_path=minio-bucket" -users="minio_user|minio_pwd"
Start a MinIO pull client to pull the files from the MinIO server to the local destination path.
$ gofs -source="minio://127.0.0.1:9000?secure=false&remote_path=minio-bucket" -dest="./dest" -users="minio_user|minio_pwd" -sync_once
Start a task server to distribute the tasks to clients.
Take the Remote Disk Server for example: first create a task manifest config file like the remote-disk-task.yaml file. It defines a task that synchronizes files from the server.
Then create the task content config file run-gofs-remote-disk-client.yaml that is defined in the above manifest config file; it will be executed by the client.
Finally, start the remote disk server with the task_conf flag.
Here the conf flag is used to simplify the command and reuse the integration test config files.
$ cd integration
$ mkdir -p rs/source rs/dest
$ gofs -conf=./testdata/conf/run-gofs-remote-disk-server.yaml
Start a task client to subscribe to the task server, then acquire the task and execute it.
Use the task_client flag to start the task client; the task_client_max_worker flag limits the max concurrent workers on the task client side.
You can also use the task_client_labels flag to define the labels of the task client, which are used to match tasks on the task server side.
Here the conf flag is used to simplify the command and reuse the integration test config files.
$ cd integration
$ mkdir -p rc/source rc/dest
$ gofs -conf=./testdata/conf/run-gofs-task-client.yaml
If you need to synchronize files between two devices that are unable to establish a direct connection, you can use a reverse proxy as a relay server. For more detail, see Relay.
The remote server source protocol is based on a URI, see RFC 3986.
The scheme name is rs.
The remote server source uses 0.0.0.0 or another local IP address as the host in Remote Disk Server mode, and uses an IP address or domain name as the host in Remote Disk Client mode.
The remote server source port, default is 8105.
Use the following parameters in Remote Disk Server mode only.
- path: the actual local source directory of the Remote Disk Server
- mode: running mode; in Remote Disk Server mode it is server, and by default it runs in Remote Disk Client mode
- fs_server: the File Server address, like https://127.0.0.1
- local_sync_disabled: disable the Remote Disk Server syncing changes to its local dest path, true or false, default is false
For example, in Remote Disk Server mode.
rs://127.0.0.1:8105?mode=server&local_sync_disabled=true&path=./source&fs_server=https://127.0.0.1
\_/  \_______/ \__/ \____________________________________________________________________________/
 |       |      |                                        |
scheme  host   port                                  parameter
Enable the manage api based on the File Server by using the manage flag.
By default, access to the manage api is allowed from private addresses and loopback addresses only.
You can disable that restriction by setting the manage_private flag to false.
$ gofs -source=./source -dest=./dest -server -tls_cert_file=cert.pem -tls_key_file=key.pem -rand_user_count=3 -manage
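For example (illustrative only), to also allow access to the manage api from non-private addresses, disable the private-address restriction.
$ gofs -source=./source -dest=./dest -server -tls_cert_file=cert.pem -tls_key_file=key.pem -rand_user_count=3 -manage -manage_private=false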
The pprof url address looks like this:
https://127.0.0.1/manage/pprof/
Read the program config; by default it returns the config in json format, and json and yaml formats are currently supported.
https://127.0.0.1/manage/config
Or use the format parameter to specify the config format.
https://127.0.0.1/manage/config?format=yaml
Use the report flag to enable the report api route and start collecting the report data; the manage flag needs to be enabled first.
For details of the report api, see Report API.
https://127.0.0.1/manage/report
The file logger and console logger are enabled by default, and you can disable the file logger by setting the log_file flag to false.
Use the log_level flag to set the log level, default is INFO (DEBUG=0, INFO=1, WARN=2, ERROR=3).
Use the log_dir flag to set the directory of the log file, default is ./logs/.
Use the log_flush flag to enable auto flushing of the log at an interval, default is true.
Use the log_flush_interval flag to set the log flush interval duration, default is 3s.
Use the log_event flag to enable the event log, written to file, default is false.
Use the log_sample_rate flag to set the sample rate for the sample logger; the value ranges from 0 to 1, default is 1.
Use the log_format flag to set the log output format, currently text and json are supported, default is text.
Use the log_split_date flag to split the log file by date, default is false.
# Set the logger config in "Local Disk" mode
$ gofs -source=./source -dest=./dest -log_file -log_level=0 -log_dir="./logs/" -log_flush -log_flush_interval=3s -log_event
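An illustrative variant (values are placeholders) that writes json-formatted logs split by date with a reduced sample rate.
$ gofs -source=./source -dest=./dest -log_format=json -log_split_date -log_sample_rate=0.5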
If you want, you can use a configuration file to replace all the flags. It supports json and yaml formats currently.
All the configuration fields are the same as the flags; you can refer to the Configuration Example or the response of the Config API.
$ gofs -conf=./gofs.yaml
You can use the checksum flag to calculate the file checksum and print the result.
The chunk_size, checkpoint_count and checksum_algorithm flags are effective here, the same as in Local Disk.
$ gofs -source=./gofs -checksum
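For example (illustrative), calculate the checksum with a different hash algorithm and explicit chunk settings.
$ gofs -source=./gofs -checksum -checksum_algorithm=sha256 -chunk_size=1048576 -checkpoint_count=10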
$ gofs -h
$ gofs -v
$ gofs -about
The gofs-webui is a web UI tool for gofs that allows you to generate the gofs config file through the web UI, making gofs easier to use.
%%{init: { "flowchart": {"htmlLabels": false}} }%%
flowchart TD
    PR[pull request]
    MainRepo[github.com/no-src/gofs.git] -- 1.fork --> ForkRepo[github.com/yourname/gofs.git]
    ForkRepo -- 2.git clone --> LocalRepo[local repository]
    LocalRepo -- 3.commit changes --> NewBranch[new branch]
    NewBranch -- 4.git push --> ForkRepo
    ForkRepo -- 5.create pull request --> PR
    PR -- 6.merge to --> MainRepo
You can quickly contribute using cloud development environments.