fence_virt.conf(5) | File Formats Manual | fence_virt.conf(5)
fence_virt.conf - configuration file for fence_virtd
The fence_virt.conf file contains configuration information for fence_virtd, a fencing request routing daemon for clusters of virtual machines.
The file is tree-structured, with parent/child and sibling relationships between the nodes. For example:
foo {
bar {
baz = "1";
}
}
There are four primary sections of fence_virt.conf.
The fence_virtd section contains global information about how fence_virtd is to operate. The most important settings are which listener and which backend to use.
The listeners section contains listener-specific configuration information; see the section about listeners below.
The backends section contains backend-specific configuration information; see the section about backends below.
The groups section contains static maps of which virtual machines may fence which other virtual machines; see the section about groups below.
There are various listeners available for fence_virtd; each one handles decoding and authentication of a given fencing request. The following configuration blocks belong in the listeners section of fence_virt.conf.
The serial listener plugin utilizes libvirt's serial (or VMChannel) mapping to listen for requests. When using the serial listener, it is necessary to add a serial port (preferably pointing to /dev/ttyS1) or a channel (preferably pointing to 10.0.2.179:1229) to the libvirt domain description. Note that only serial ports and channels of type 'unix' with mode 'bind' are supported, and each VM should have a separate, unique socket. Example libvirt XML:
<serial type='unix'>
<source mode='bind' path='/sandbox/guests/fence_socket_molly'/>
<target port='1'/>
</serial>
<channel type='unix'>
<source mode='bind' path='/sandbox/guests/fence_molly_vmchannel'/>
<target type='guestfwd' address='10.0.2.179' port='1229'/>
</channel>
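A matching listeners block for the serial plugin might look like the following. This is a sketch only: the uri and mode settings shown here are illustrative assumptions, not values stated on this page, and the supported options may differ between builds.

listeners {
	serial {
		# Assumed options: a libvirt connection URI and the
		# mapping mode ("serial" or "vmchannel"); verify against
		# your installed fence_virtd documentation.
		uri = "qemu:///system";
		mode = "serial";
	}
}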
The tcp listener operates similarly to the multicast listener but uses TCP sockets for communication instead of using multicast packets.
The vsock listener operates similarly to the multicast listener but uses virtual machine sockets (AF_VSOCK) for communication instead of using multicast packets.
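Because the tcp and vsock listeners follow the multicast listener's model, their configuration blocks look much the same. The sketch below is a hedged illustration: the key_file path mirrors the example later in this page, while the port value is an assumption and should be checked against your installation.

listeners {
	tcp {
		key_file = "/etc/cluster/fence_xvm.key";
		port = "1229";    # assumed port, not stated on this page
	}
	vsock {
		key_file = "/etc/cluster/fence_xvm.key";
		port = "1229";    # assumed port, not stated on this page
	}
}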
There are various backends available for fence_virtd; each one handles routing a fencing request to a hypervisor or management tool. The following configuration blocks belong in the backends section of fence_virt.conf.
The libvirt plugin is the simplest plugin. It is used in environments where routing fencing requests between multiple hosts is not required, for example by a user running a cluster of virtual machines on a single desktop computer.
All libvirt URIs are accepted and passed as-is.
See https://libvirt.org/uri.html#remote-uris for examples.
NOTE: When VMs are run as a non-root user, the socket path must be set as part of the URI.
Example: qemu:///session?socket=/run/user/<UID>/libvirt/virtqemud-sock
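For the non-root case above, the backend block might be written as follows. This is a sketch of the session-URI form shown in the example; <UID> is left as a placeholder exactly as on this page and must be replaced with the actual user ID.

backends {
	libvirt {
		# Session URI with an explicit socket path, for VMs run
		# by a non-root user; <UID> is a placeholder.
		uri = "qemu:///session?socket=/run/user/<UID>/libvirt/virtqemud-sock";
	}
}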
The cpg plugin uses corosync CPG and libvirt to track virtual machines and route fencing requests to the appropriate computer.
Fence_virtd supports static maps which allow grouping of VMs. The groups are arbitrary and are checked at fence time. Any member of a group may fence any other member. Hosts may be assigned to multiple groups if desired.
Each group block defines one group.
fence_virtd {
listener = "multicast";
backend = "cpg";
}
# this is the listeners section
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
}
}
backends {
libvirt {
uri = "qemu:///system";
}
}
groups {
group {
name = "cluster1";
ip = "192.168.1.1";
ip = "192.168.1.2";
uuid = "44179d3f-6c63-474f-a212-20c8b4b25b16";
uuid = "1ce02c4b-dfa1-42cb-b5b1-f0b1091ece60";
uuid = "node1";
uuid = "node2";
}
}
See also: fence_virtd(8), fence_virt(8), fence_xvm(8), fence(8)