Various experiments with PostgreSQL clustering


m99coder/postgres_cluster

 
 

Various experiments with PostgreSQL clustering performed at PostgresPro.

This is a mirror of the postgres repo with several changes to the core and a few extra extensions.

Core changes:

New extensions:

  • pg_dtm. Transaction management by interaction with a standalone coordinator (Arbiter or dtmd). See https://wiki.postgresql.org/wiki/DTM#DTM_approach
  • pg_tsdtm. Coordinator-less transaction management by tracking commit timestamps.
  • multimaster. Synchronous multi-master replication based on logical_decoding and pg_dtm.

Changed extension:

  • postgres_fdw. Added support for pg_dtm.
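Since multimaster is based on logical decoding, each participating node needs a server configuration that enables it. The fragment below is a hedged sketch only: the parameter values and the preload library name are assumptions, not taken from this repo's build or deploy scripts.

```ini
# hypothetical postgresql.conf fragment for a multimaster node;
# values are illustrative assumptions
wal_level = logical                       # required for logical decoding
max_prepared_transactions = 100           # headroom for two-phase commit
shared_preload_libraries = 'multimaster'  # load the extension at startup
```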

Deploying

To deploy and test postgres over a cluster we use ansible. In each extension directory one can find a tests subdirectory where the tests and deploy scripts are stored.

Running tests on local cluster

To use it one needs an ansible hosts file with the following groups:

farms/cluster.example:

[clients] # the benchmark will start simultaneously on these nodes
server0.example.com

[nodes] # all these nodes will run postgres; dtmd/master will be deployed to the first
server1.example.com
server2.example.com
server3.example.com
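For larger clusters the inventory can be generated instead of written by hand. A small sketch, reusing the example hostnames from above (the file name cluster.ini is arbitrary):

```shell
# write a minimal ansible inventory matching the example above
printf '[clients]\nserver0.example.com\n\n[nodes]\n' > cluster.ini
for i in 1 2 3; do
  echo "server$i.example.com" >> cluster.ini
done
cat cluster.ini
```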

Once you have a proper hosts file you can deploy everything to the servers:

# cd pg_dtm/tests
# ansible-playbook -i farms/sai deploy_layouts/cluster.yml

To run the dtmbench benchmark:

# ansible-playbook -i farms/sai perf.yml -e nnodes=3 -e nconns=100

Here nnodes is the number of nodes that will be used for the test, and nconns is the number of connections to the backend.
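To compare different cluster sizes one can sweep nnodes in a loop. The sketch below only prints the commands it would run; remove the echo to actually execute them:

```shell
# print one benchmark invocation per cluster size
for n in 1 2 3; do
  echo ansible-playbook -i farms/sai perf.yml -e nnodes=$n -e nconns=100
done
```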

Running tests on Amazon ec2

In the case of the Amazon cloud there is no need for a specific hosts file. Instead we use the script farms/ec2.py to get the current instances running on your account. To use that script you need to specify your account key and access key in ~/.boto.cfg (or in any other place described at http://boto.cloudhackers.com/en/latest/boto_config_tut.html).
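A minimal ~/.boto.cfg supplying those credentials could look like this (the section and key names follow the standard boto Credentials format; the values are placeholders):

```ini
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```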

To create VMs in cloud run:

# ansible-playbook -i farms/ec2.py deploy_layouts/ec2.yml

After that you should wait a few minutes for information about the instances to appear in the Amazon API. Then you can deploy postgres as usual:

# ansible-playbook -i farms/ec2.py deploy_layouts/cluster-ec2.yml

And to run a benchmark:

# ansible-playbook -i farms/sai perf-ec2.yml -e nnodes=3 -e nconns=100
