Partition Group Configuration


Hazelcast distributes key objects into partitions using the consistent hashing algorithm. Multiple replicas are created for each partition and those partition replicas are distributed among Hazelcast members. An entry is stored in the members that own replicas of the partition to which the entry’s key is assigned. The total partition count is 271 by default; you can change it with the configuration property hazelcast.partition.count. See the System Properties appendix.
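As an illustrative sketch, the property can also be set programmatically (the value 1009 below is an arbitrary example; whichever value you choose must be the same on every member of the cluster):

Config config = new Config();
// Override the default partition count of 271.
config.setProperty( "hazelcast.partition.count", "1009" );
HazelcastInstance hz = Hazelcast.newHazelcastInstance( config );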

The Hazelcast member that owns the primary replica of a partition is called the partition owner. Other replicas are called backups. Based on the configuration, a key object can be kept in multiple replicas of a partition. A member can hold at most one replica of a partition (ownership or backup).
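How many replicas hold a given entry is controlled by the backup settings of the data structure that stores it. A minimal sketch for a map, assuming the "default" map configuration is used as the fallback for maps without a more specific configuration:

Config config = new Config();
// One synchronous backup: each entry is kept by the partition owner
// plus one backup replica held by a different member.
config.getMapConfig( "default" ).setBackupCount( 1 );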

By default, Hazelcast distributes partition replicas randomly and equally among the cluster members, assuming all members in the cluster are identical. But what if some members share the same JVM or physical machine or chassis and you want backups of these members to be assigned to members in another machine or chassis? What if processing or memory capacities of some members are different and you do not want an equal number of partitions to be assigned to all members?

To deal with such scenarios, you can group members in the same JVM (or physical machine) or members located in the same chassis. Or you can group members to create identical capacity. We call these groups partition groups. Partitions are assigned to those partition groups instead of individual members. Backup replicas of a partition which is owned by a partition group are located in other partition groups.

Grouping Types

When you enable partition grouping, Hazelcast presents the following choices for you to configure partition groups.

HOST_AWARE

You can group members automatically using the IP addresses of members, so members sharing the same network interface are grouped together. All members on the same host (IP address or domain name) form a single partition group. This helps to avoid data loss when a physical server crashes, because multiple replicas of the same partition are not stored on the same host. But if there are multiple network interfaces or domain names per physical machine, this assumption is invalid.

The following are declarative and programmatic configuration snippets that show how to enable HOST_AWARE grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="HOST_AWARE"/>
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: HOST_AWARE

Config config = ...;
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
    .setGroupType( MemberGroupType.HOST_AWARE );

ZONE_AWARE

You can use ZONE_AWARE configuration when you want to back up data on different Availability Zones (AZs) in environments whose discovery plugins provide zone information, such as AWS, GCP, Azure, and Kubernetes.

These discovery plugins write zone information to the Hazelcast member attributes map during the discovery process. When ZONE_AWARE is configured as the partition group type, Hazelcast creates the partition groups based on the member attributes map entries that include zone information. That means backups are created in other zones and each zone is treated as one partition group.

When using the ZONE_AWARE partition grouping, a Hazelcast cluster spanning multiple AZs should have an equal number of members in each AZ. Otherwise, it results in uneven partition distribution among the members.

The following are declarative and programmatic configuration snippets that show how to enable ZONE_AWARE grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="ZONE_AWARE" />
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: ZONE_AWARE

Config config = ...;
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
    .setGroupType( MemberGroupType.ZONE_AWARE );

PLACEMENT_AWARE

You can group members according to their placement metadata provided by the cloud providers. This metadata indicates the placement information, such as rack, fault domain, power sources, network, and resources of a virtual machine in a zone.

This grouping provides a finer granularity than ZONE_AWARE and is useful for good redundancy when running members within a single availability zone; it provides the highest possible availability within a single zone by spreading the partitions and their replicas across different racks.

This grouping is currently supported in the Hazelcast AWS Discovery plugin. See also the AWS documentation on placement groups for more information.

The following are declarative and programmatic configuration snippets that show how to enable PLACEMENT_AWARE grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="PLACEMENT_AWARE" />
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: PLACEMENT_AWARE

Config config = ...;
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
    .setGroupType( MemberGroupType.PLACEMENT_AWARE );

NODE_AWARE

You can use NODE_AWARE configuration when you want to back up data on different nodes in Kubernetes environments.

When NODE_AWARE is configured as the partition group type, Hazelcast creates the partition groups based on the member attributes map entries that include the node information. That means backups are created on other nodes and each node is treated as one partition group.

Hazelcast writes the node information to the Hazelcast member attributes map during the discovery process.

When using the NODE_AWARE partition grouping, the orchestration tool must distribute Hazelcast containers/pods equally across the nodes. Otherwise, it results in uneven partition distribution among the members.

The following are declarative and programmatic configuration snippets that show how to enable NODE_AWARE grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="NODE_AWARE" />
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: NODE_AWARE

Config config = ...;
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
    .setGroupType( MemberGroupType.NODE_AWARE );

PER_MEMBER

You can give every member its own group. Each member is a group of its own, and primary and backup partitions are distributed randomly (not on the same physical member). This gives the least amount of protection and is the default configuration for a Hazelcast cluster. This grouping type provides good redundancy when Hazelcast members are on separate hosts. However, if multiple instances run on the same host, this type is not a good option.

The following are declarative and programmatic configuration snippets that show how to enable PER_MEMBER grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="PER_MEMBER" />
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: PER_MEMBER

Config config = ...;
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
    .setGroupType( MemberGroupType.PER_MEMBER );

CUSTOM

You can do custom grouping using Hazelcast’s interface matching configuration. This way, you can add multiple different interfaces to a group. You can also use wildcards in the interface addresses. For example, you can create rack-aware or data warehouse partition groups using custom partition grouping.

The following are declarative and programmatic configuration examples that show how to enable and use CUSTOM grouping:

  • XML

  • YAML

  • Java

<hazelcast>
    <partition-group enabled="true" group-type="CUSTOM">
        <member-group>
            <interface>10.10.0.*</interface>
            <interface>10.10.3.*</interface>
            <interface>10.10.5.*</interface>
        </member-group>
        <member-group>
            <interface>10.10.10.10-100</interface>
            <interface>10.10.1.*</interface>
            <interface>10.10.2.*</interface>
        </member-group>
    </partition-group>
</hazelcast>

hazelcast:
  partition-group:
    enabled: true
    group-type: CUSTOM
    member-group:
      - - 10.10.0.*
        - 10.10.3.*
        - 10.10.5.*
      - - 10.10.10.10-100
        - 10.10.1.*
        - 10.10.2.*

Config config = new Config();
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig();
partitionGroupConfig.setEnabled( true )
        .setGroupType( PartitionGroupConfig.MemberGroupType.CUSTOM );

MemberGroupConfig memberGroupConfig = new MemberGroupConfig();
memberGroupConfig.addInterface( "10.10.0.*" )
        .addInterface( "10.10.3.*" ).addInterface( "10.10.5.*" );

MemberGroupConfig memberGroupConfig2 = new MemberGroupConfig();
memberGroupConfig2.addInterface( "10.10.10.10-100" )
        .addInterface( "10.10.1.*" ).addInterface( "10.10.2.*" );

partitionGroupConfig.addMemberGroupConfig( memberGroupConfig );
partitionGroupConfig.addMemberGroupConfig( memberGroupConfig2 );
If you configured your members to discover each other by their IP addresses while forming your cluster, use IP addresses for the <interface> element. If your members discovered each other by their host names, use host names.

SPI

You can provide your own partition group implementation using the SPI configuration. To create your partition group implementation, first extend the DiscoveryStrategy class of the discovery service plugin, override the public PartitionGroupStrategy getPartitionGroupStrategy() method, and return the PartitionGroupStrategy configuration from that overridden method.

The following code covers the implementation steps mentioned in the above paragraph:

public class CustomDiscovery extends AbstractDiscoveryStrategy {

    public CustomDiscovery(ILogger logger, Map<String, Comparable> properties) {
        super(logger, properties);
    }

    @Override
    public Iterable<DiscoveryNode> discoverNodes() {
        Iterable<DiscoveryNode> iterable = //your implementation
        return iterable;
    }

    @Override
    public PartitionGroupStrategy getPartitionGroupStrategy() {
        return new CustomPartitionGroupStrategy();
    }

    private class CustomPartitionGroupStrategy implements PartitionGroupStrategy {
        @Override
        public Iterable<MemberGroup> getMemberGroups() {
            Iterable<MemberGroup> iterable = //your implementation
            return iterable;
        }
    }
}
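A minimal sketch of enabling it, assuming the CustomDiscovery strategy above is registered through the Discovery SPI: setting the partition group type to SPI tells Hazelcast to use the PartitionGroupStrategy returned by the discovery strategy.

Config config = new Config();
// Delegate member grouping to the discovery strategy's PartitionGroupStrategy.
config.getPartitionGroupConfig()
    .setEnabled( true )
    .setGroupType( PartitionGroupConfig.MemberGroupType.SPI );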