nanopack/yoke

Note: This repository was archived by the owner on Mar 10, 2019, and is now read-only.

Postgres high-availability cluster with auto-failover and automated cluster recovery.



Yoke is a Postgres redundancy/auto-failover solution that provides a simple-to-manage, high-availability PostgreSQL cluster.

Requirements

Yoke has the following requirements/dependencies to run:

  • A 3-server cluster consisting of a 'primary', 'secondary', and 'monitor' node
  • 'primary' & 'secondary' nodes need passwordless ssh connections between each other (see the sketch after this list)
  • 'primary' & 'secondary' nodes need rsync (or some alternative sync_command) installed
  • 'primary' & 'secondary' nodes should have postgres installed under a postgres user and available in the PATH; Yoke tries calling 'postgres' and 'pg_ctl' directly
  • 'primary' & 'secondary' nodes run postgres as a child process, so postgres should not be started independently
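One way to satisfy the passwordless-ssh requirement is a key exchange between the two data nodes. This is only a sketch; the hostnames, user, and key type are assumptions, not values prescribed by Yoke:

# on the 'primary' node, as the user that runs yoke/postgres
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519    # create a key with no passphrase
ssh-copy-id postgres@secondary.example.com          # authorize it on the 'secondary'

# repeat in the other direction on the 'secondary' node
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id postgres@primary.example.com

Verify with 'ssh postgres@secondary.example.com true' from the primary (and vice versa); it should complete without prompting for a password.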

Each node in the cluster requires its own config.ini file with the following options (provided values are defaults):

[config]
# the IP which this node will broadcast to other nodes
advertise_ip=
# the port which this node will broadcast to other nodes
advertise_port=4400
# the directory where postgresql was installed
data_dir=/data
# delay before a node decides what to do with its postgresql instance
decision_timeout=30
# log verbosity (trace, debug, info, warn, error, fatal)
log_level=warn
# REQUIRED - the IP:port combination of all nodes that are to be in the cluster (e.g. 'primary=m.y.i.p:4400')
primary=
secondary=
monitor=
# REQUIRED - either 'primary', 'secondary', or 'monitor' (the cluster needs exactly one of each)
role=
# the postgresql port
pg_port=5432
# the directory where node status information is stored
status_dir=./status
# the command you would like to use to sync the data from this node to the other when this node is master
sync_command=rsync -ae "ssh -o StrictHostKeyChecking=no" --delete {{local_dir}} {{slave_ip}}:{{slave_dir}}

[vip]
# the virtual IP you would like to use
ip=
# command to use when adding the vip; this will be called as '{{add_command}} {{vip}}'
add_command=
# command to use when removing the vip; this will be called as '{{remove_command}} {{vip}}'
remove_command=

[role_change]
# when this node's role changes, this command is called with the new role as its argument: '{{command}} {{master|slave|single}}'
command=
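As an illustration, a filled-in config for the 'primary' node might look like the following. All IPs are placeholders, and the vip/role_change scripts are hypothetical wrappers you would write yourself (e.g. around iproute2's 'ip addr add/del'), not files shipped with Yoke:

[config]
advertise_ip=192.168.0.101
advertise_port=4400
data_dir=/data
log_level=warn
primary=192.168.0.101:4400
secondary=192.168.0.102:4400
monitor=192.168.0.103:4400
role=primary
pg_port=5432
status_dir=./status

[vip]
ip=192.168.0.100
# hypothetical wrapper scripts; Yoke invokes them as '{{add_command}} {{vip}}' and '{{remove_command}} {{vip}}'
add_command=/usr/local/bin/vip-add.sh
remove_command=/usr/local/bin/vip-remove.sh

[role_change]
# hypothetical script that reacts to the new role ('master', 'slave', or 'single')
command=/usr/local/bin/on-role-change.sh

The 'secondary' and 'monitor' nodes would use the same primary/secondary/monitor values but their own advertise_ip and role.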

Startup

Once all configurations are in place, start yoke by running:

./yoke ./primary.ini

Note: The ini file can be named anything and reside anywhere. All Yoke needs is the /path/to/config.ini on startup.
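Since each node needs its own config, a full cluster is brought up by running the same command on each of the three servers (file names here are illustrative):

# on the monitor node
./yoke ./monitor.ini

# on the secondary node
./yoke ./secondary.ini

# on the primary node
./yoke ./primary.ini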

Yoke CLI - yokeadm

Yoke comes with its own CLI, yokeadm, that allows for limited introspection into the cluster.

Building the CLI:

cd ./yokeadm
go build
./yokeadm
Usage:
yokeadm (<COMMAND>:<ACTION> OR <ALIAS>) [GLOBAL FLAG] <POSITIONAL> [SUB FLAGS]
Available Commands:
  • list : Returns status information for all nodes in the cluster
  • demote : Advises a node to demote
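For example, with the binary built, the documented 'list' command returns status information for every node in the cluster (output not shown here):

./yokeadm list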

Documentation

Complete documentation is available on godoc.

License

Mozilla Public License Version 2.0


