Red Hat Cluster - Interview Questions and Answers: Part 1



1. What is CMAN?
  • CMAN, the Cluster Manager, is the component of the Red Hat Cluster Suite that handles communication between the nodes of a cluster.
  • It manages cluster quorum and cluster membership.
  • CMAN runs on each node of the cluster.
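
As a quick illustration, membership and quorum state can be inspected on any node with the standard cman_tool utility (exact output fields vary by version):

  cman_tool status   # shows expected votes, total votes, and whether the cluster is quorate
  cman_tool nodes    # lists each member node with its ID and join status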

2. What is RGManager?

RGManager manages and provides failover capabilities for collections of cluster resources called services, resource groups, or resource trees.

In the event of a node failure, RGManager relocates the clustered service to another node with minimal service disruption. You can also restrict services to certain nodes, such as restricting httpd to one group of nodes while mysql is restricted to a separate set of nodes.

When the cluster membership changes, openais tells the cluster that it needs to recheck its resources. This causes rgmanager, the resource group manager, to run: it examines what changed and then starts, stops, migrates, or recovers cluster resources as needed.
Within rgmanager, one or more resources are brought together as a service. This service can then optionally be assigned to a failover domain, a subset of nodes that can be given preferential ordering. A configuration sketch follows.
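
As a sketch, this is how a failover domain and a service could be declared in the <rm> section of /etc/cluster/cluster.conf. The domain name, service name, IP address, and init script below are illustrative placeholders, not values from the text above:

  <rm>
    <failoverdomains>
      <!-- Restricted, ordered domain: the service may only run on these nodes,
           preferring node1 (priority 1) over node2 -->
      <failoverdomain name="web-domain" ordered="1" restricted="1">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <!-- A service groups resources; rgmanager starts, stops, and relocates them as a unit -->
    <service name="httpd-svc" domain="web-domain" autostart="1" recovery="relocate">
      <ip address="192.168.1.100" monitor_link="1"/>
      <script file="/etc/init.d/httpd" name="httpd"/>
    </service>
  </rm>

A service defined this way can also be managed by hand with clusvcadm, for example clusvcadm -r httpd-svc -m node2 to relocate it to node2.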


3. What is Cluster Quorum?


  • Quorum is a voting algorithm used by CMAN.
  • CMAN keeps track of cluster quorum by monitoring the count of votes from the nodes in the cluster.
  • If more than half of the members of a cluster are active, the cluster is said to be in quorum (quorate).
  • If half or fewer of the members are active, the cluster is inquorate and all cluster activities are stopped.
  • Quorum is defined as the minimum set of hosts required in order to provide service, and it is used to prevent split-brain situations.
  • The quorum algorithm used by the RHCS cluster is called "simple majority quorum": more than half of the hosts must be online and communicating in order to provide service. A worked example follows this list.
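
With one vote per node, the quorum threshold follows from simple integer arithmetic. A sketch for a hypothetical five-node cluster:

  expected_votes = 5                    # five nodes, one vote each
  quorum = (expected_votes / 2) + 1     # integer division: (5 / 2) + 1 = 3

Such a cluster stays quorate as long as at least three nodes can communicate; a partition of two nodes is inquorate and stops all cluster activity.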
4. What is split-brain?

It is a condition in which two instances of the same cluster are running and trying to access the same resource at the same time, resulting in corrupted cluster integrity. A cluster must maintain quorum to prevent split-brain issues.

It is necessary for a cluster to maintain quorum to prevent 'split-brain' problems. If quorum were not enforced, a communication error in, say, a thirteen-node cluster could cause a situation where seven nodes are operating on the shared disk while the other six are also operating on it, independently. Because of the communication error, the two partial clusters would overwrite areas of the disk and corrupt the file system. With quorum rules enforced, only one of the partial clusters can use the shared storage, thus protecting data integrity.

Quorum does not prevent the cluster from partitioning, but it does decide which partition is dominant and allowed to function. Should a split occur, quorum prevents more than one cluster group from doing anything.


5. What is Fencing?


  • Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the fence daemon, fenced.
  • Power fencing — a fencing method that uses a power controller to power off an inoperable node.
  • Storage fencing — a fencing method that disables the Fibre Channel port that connects storage to an inoperable node.
  • Other fencing — several other fencing methods that disable I/O to, or power off, an inoperable node, including IBM BladeCenter, PAP, Dell DRAC/MC, HP iLO, IPMI, IBM RSA II, and others. A configuration sketch follows this list.
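
As a sketch, a power-fencing device and its per-node mapping live in /etc/cluster/cluster.conf. The agent choice, address, and credentials below are illustrative placeholders:

  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="1">
        <device name="ipmi-node1"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <!-- IPMI-based power fencing for node1 -->
    <fencedevice agent="fence_ipmilan" name="ipmi-node1"
                 ipaddr="10.0.0.11" login="admin" passwd="secret"/>
  </fencedevices>

A node's fencing can be exercised by hand with fence_node node1, which runs whatever fence agent is configured for that node.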

6. What is a Quorum disk?


  • In a 2-node cluster, the quorum disk acts as a tie-breaker and prevents split-brain issues.
  • If a node has access to the network and the quorum disk, it is active.
  • If a node has lost access to the network or the quorum disk, it is inactive and can be fenced.
  • A quorum disk, known as a qdisk, is a small partition on SAN storage used to enhance quorum. It generally carries enough votes to allow even a single node to maintain quorum during a cluster partition.
  • It does this by using configured heuristics, that is, custom tests, to decide which node or partition is best suited to provide clustered services during a cluster reconfiguration. The disk itself is initialized as shown below.
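
The quorum disk is initialized with the mkqdisk utility; the device path below matches the example in question 7, and the label is an illustrative placeholder that quorumd later refers to:

  mkqdisk -c /dev/mapper/lun01 -l myqdisk   # initialize the partition as a quorum disk
  mkqdisk -L                                # list visible quorum disks and their labels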

7. How to set up a quorum disk/partition?

Note that if you configure a quorum disk/partition, you don't want two_node="1" or expected_votes="2" since the quorum disk solves the voting imbalance. 

You want two_node="0" and expected_votes="3" (or the number of nodes + 1 if it's not a two-node cluster). However, since 0 is the default value for two_node, you don't need to specify it at all.

If this is an existing two-node cluster and you're changing the two_node value from "1" to "0", you'll have to stop the entire cluster and restart it after the configuration is changed (normally, the cluster doesn't have to be stopped and restarted for configuration changes, but two_node is a special case). Basically, you want something like this in your /etc/cluster/cluster.conf:

  <cman two_node="0" expected_votes="3" .../>
  <clusternodes>
    <clusternode name="node1" votes="1" .../>
    <clusternode name="node2" votes="1" .../>
  </clusternodes>
  <quorumd device="/dev/mapper/lun01" votes="1"/>

Note: You don't have to use a disk or partition to prevent two-node fence cycles; see question 10 for network and fencing arrangements that avoid them.

You can set up a number of different heuristics for the qdisk daemon. For example, you can set up a redundant NIC with a crossover cable and use ping operations to the local router/switch to break the tie (this is typical, actually, and is called an IP tie-breaker).
A heuristic can be made to check anything, as long as it is a shared resource. A configuration sketch follows.
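
As a sketch, a ping heuristic for the qdisk daemon is declared in cluster.conf. The label matches whatever mkqdisk wrote to the disk, and the router address is an illustrative placeholder:

  <quorumd interval="1" tko="10" votes="1" label="myqdisk">
    <!-- The node is considered healthy only while it can ping the local router -->
    <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
  </quorumd>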

8. What can cause a node to leave the cluster?

A node may leave the cluster for many reasons. Among them (the cman_tool commands behind cases 1, 2, and 4 are shown after the list):


  1. Shutdown: cman_tool leave was run on this node.
  2. Killed by another node: the node was killed either by cman_tool kill or by qdisk.
  3. Panic: cman failed to allocate memory for a critical data structure, or some other very bad internal failure occurred.
  4. Removed: like 1, but the remainder of the cluster can adjust quorum downwards to keep working.
  5. Membership rejected: the node attempted to join a cluster, but its cluster.conf file did not match that of the other nodes. To find the real reason, examine the syslog of all the valid cluster members to see why it was rejected.
  6. Inconsistent cluster view: this is usually indicative of a bug, but it can also happen if the network is extremely unreliable.
  7. Missed too many heartbeats: this means what it says. All nodes are expected to broadcast a heartbeat every 5 seconds (by default); if none is received within the configured timeout, the node is removed from the cluster.
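
For illustration, the cman_tool operations behind cases 1, 2, and 4 (the node name is a placeholder):

  cman_tool leave           # graceful withdrawal: stop cluster services first, then leave
  cman_tool leave remove    # leave and let the remaining nodes adjust quorum downwards
  cman_tool kill -n node2   # tell CMAN to kill node2 out of the cluster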

9. How can I define a two-node cluster if a majority is needed to reach quorum?

Two-node clusters are allowed as a special exception to the quorum rules. There is a special setting, two_node, in the /etc/cluster/cluster.conf file that looks like this:

<cman expected_votes="1" two_node="1"/>

This allows one node to be considered enough to establish a quorum. Note that if you configure a quorum disk/partition, you don't want two_node="1" (see question 7).


10. What is the best two-node network & fencing configuration?

In a two-node cluster (where you are using two_node="1" in the cluster configuration and no quorum disk), there are several considerations you need to be aware of:

If you are using per-node power management of any sort where the device is not shared between cluster nodes, it must be connected to the same network used by CMAN for cluster communication. Failure to do so can result in both nodes simultaneously fencing each other, leaving the entire cluster dead, or ending up in a fence loop. Typically, this includes all integrated power management solutions (iLO, IPMI, RSA, ERA, IBM BladeCenter, Egenera BladeFrame, Dell DRAC, etc.), but it also includes remote power switches (APC, WTI) if the devices are not shared between the two nodes. One common mitigation for the fence race is sketched below.
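
One common way to break a fence race (an addition here, not part of the original answer) is to give the fence device for one node a startup delay; many fence agents accept a delay parameter. A hypothetical snippet:

  <!-- Fencing node2 is delayed by 15 seconds, so node2 wins any mutual fence race -->
  <fencedevice agent="fence_ipmilan" name="ipmi-node2"
               ipaddr="10.0.0.12" login="admin" passwd="secret" delay="15"/>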

It is best to use power-type fencing. SAN or SCSI-reservation fencing might also work, as long as it meets the above requirement. If you cannot meet the requirement, consider using a quorum disk or partition instead (see question 7).

