postgres_cluster
Various experiments with PostgreSQL clustering performed at PostgresPro.

This is a mirror of the postgres repository with several changes to the core and a few extra extensions:
- Transaction manager interface (eXtensible Transaction Manager, xtm). A generic interface for plugging in distributed transaction engines. More info at https://wiki.postgresql.org/wiki/DTM and http://www.postgresql.org/message-id/flat/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru.
- Distributed deadlock detection API.
- Fast 2PC patch. More info at http://www.postgresql.org/message-id/flat/74355FCF-AADC-4E51-850B-47AF59E0B215@postgrespro.ru.
- pg_dtm. Transaction management by interaction with a standalone coordinator (Arbiter or dtmd). See https://wiki.postgresql.org/wiki/DTM#DTM_approach.
- pg_tsdtm. Coordinator-less transaction management by tracking commit timestamps.
- multimaster. Synchronous multi-master replication based on logical decoding and pg_dtm.
- postgres_fdw. Added support for pg_dtm.
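To give a feel for how a synchronous multi-master extension like the one above is typically wired in, here is a sketch of a per-node postgresql.conf. The parameter names are illustrative assumptions, not taken from this README; consult the extension's own documentation for the actual GUCs:

```ini
# postgresql.conf sketch for one node of a hypothetical 3-node multimaster setup.
# All multimaster.* parameter names below are assumptions for illustration.
shared_preload_libraries = 'multimaster'
max_prepared_transactions = 100   # multimaster relies on 2PC, so this must be > 0
multimaster.node_id = 1           # this node's position in the connection list
multimaster.conn_strings = 'host=server1.example.com dbname=postgres, host=server2.example.com dbname=postgres, host=server3.example.com dbname=postgres'
```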
To deploy and test postgres over a cluster we use Ansible. Each extension directory contains a tests subdirectory where tests and deploy scripts are stored.

To use them one needs an Ansible hosts file with the following groups:
farms/cluster.example:

```
[clients] # the benchmark will start simultaneously on these nodes
server0.example.com

[nodes] # all these nodes will run postgres; dtmd/master will be deployed to the first
server1.example.com
server2.example.com
server3.example.com
```
After you have a proper hosts file you can deploy everything to the servers:

```
cd pg_dtm/tests
ansible-playbook -i farms/sai deploy_layouts/cluster.yml
```
To run dtmbench:

```
ansible-playbook -i farms/sai perf.yml -e nnodes=3 -e nconns=100
```

Here nnodes is the number of nodes that will be used for the test, and nconns is the number of connections to the backend.
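For example, to compare results at several cluster sizes one could sweep nnodes with a small wrapper. This is a hypothetical convenience script, shown in dry-run form so it only prints the commands it would run:

```shell
# Hypothetical sweep over cluster sizes; prints each ansible-playbook
# invocation instead of executing it (remove "echo" to actually run them).
for n in 1 2 3; do
  echo ansible-playbook -i farms/sai perf.yml -e nnodes="$n" -e nconns=100
done
```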
In the case of the Amazon cloud there is no need for a specific hosts file. Instead we use the script farms/ec2.py to get the instances currently running on your account. To use that script you need to specify your account key and access_key in ~/.boto.cfg (or in any other place described at http://boto.cloudhackers.com/en/latest/boto_config_tut.html).
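A minimal ~/.boto.cfg, assuming the standard boto credentials section (the two values are placeholders for your own keys):

```ini
# ~/.boto.cfg — boto reads AWS credentials from this section.
[Credentials]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```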
To create VMs in the cloud run:

```
ansible-playbook -i farms/ec2.py deploy_layouts/ec2.yml
```
After that you should wait a few minutes for information about the new instances to appear in the Amazon API. Then you can deploy postgres as usual:

```
ansible-playbook -i farms/ec2.py deploy_layouts/cluster-ec2.yml
```
And to run a benchmark:

```
ansible-playbook -i farms/sai perf-ec2.yml -e nnodes=3 -e nconns=100
```