Install and Configure GlusterFS on CentOS 7 / RHEL 7
Configure GlusterFS on CentOS 7:
Before creating a volume, we need to create a trusted storage pool by adding gluster2.itzgeek.local to it. GlusterFS configuration commands can be run on any one server in the cluster, and the resulting changes are applied across all servers in the pool.
Here, I will run all GlusterFS commands on the gluster1.itzgeek.local node.
[root@gluster1 ~]# gluster peer probe gluster2.itzgeek.local
peer probe: success.
Verify the status of the trusted storage pool.
[root@gluster1 ~]# gluster peer status
Number of Peers: 1

Hostname: gluster2.itzgeek.local
Uuid: cc01e4c6-ffc6-44fa-a47b-033692f151df
State: Peer in Cluster (Connected)
List the storage pool.
[root@gluster1 ~]# gluster pool list
UUID                                    Hostname                State
cc01e4c6-ffc6-44fa-a47b-033692f151df    gluster2.itzgeek.local  Connected
519b0fb8-549c-457c-b474-e6214794e02d    localhost               Connected
Setup GlusterFS Volume:
Create a brick (directory) called “gv0” in the mounted file system on both nodes.
mkdir -p /data/gluster/gv0
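If you prefer to run everything from gluster1, the brick directory on the second node could also be created over SSH (a minimal sketch, assuming root SSH access to gluster2.itzgeek.local is available):

ssh root@gluster2.itzgeek.local "mkdir -p /data/gluster/gv0"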
Since we are going to use a replicated volume, create the volume named “gv0” with two replicas.
[root@gluster1 ~]# gluster volume create gv0 replica 2 gluster1.itzgeek.local:/data/gluster/gv0 gluster2.itzgeek.local:/data/gluster/gv0
volume create: gv0: success: please start the volume to access data
Start the volume.
[root@gluster1 ~]# gluster volume start gv0
volume start: gv0: success
Check the status of the created volume.
[root@gluster1 ~]# gluster volume info gv0

Volume Name: gv0
Type: Replicate
Volume ID: c3968489-098d-4664-8b25-54827f244fbe
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1.itzgeek.local:/data/gluster/gv0
Brick2: gluster2.itzgeek.local:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
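If you also want to confirm that the brick processes and self-heal daemons are online, you can check the volume status (an optional check; the ports and PIDs in its output are specific to your nodes):

gluster volume status gv0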
Setup GlusterFS Client:
Install the GlusterFS client packages to support mounting of GlusterFS filesystems (glusterfs-fuse on CentOS / RHEL, glusterfs-client on Ubuntu / Debian). Run all commands as the root user.

$ su -

### CentOS / RHEL ###
yum install -y glusterfs-fuse

### Ubuntu / Debian ###
apt-get install -y glusterfs-client
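Optionally, you can confirm that the client is installed by printing its version (the exact version string will vary with your distribution's packages):

glusterfs --version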
Create a directory to mount the GlusterFS filesystem.
mkdir -p /mnt/glusterfs
Now, mount the GlusterFS filesystem to /mnt/glusterfs using the following command.
mount -t glusterfs gluster1.itzgeek.local:/gv0 /mnt/glusterfs
If you get an error like the one below:
WARNING: getfattr not found, certain checks will be skipped.
Mount failed. Please check the log file for more details.
Consider adding firewall rules on the gluster nodes (gluster1.itzgeek.local and gluster2.itzgeek.local) to allow connections from the client machine (client.itzgeek.local). Run the below command on both gluster nodes.
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="clientip" accept'
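For example, if the client's IP address were 192.168.12.20 (a hypothetical address; substitute your client's real IP), the rule could be added permanently and activated like this:

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.12.20" accept'
firewall-cmd --reload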
You can also use gluster2.itzgeek.local instead of gluster1.itzgeek.local in the mount command above.
Verify the mounted GlusterFS filesystem.
root@client:~# df -hP /mnt/glusterfs
Filesystem                   Size  Used Avail Use% Mounted on
gluster1.itzgeek.local:/gv0  4.8G   21M  4.6G   1% /mnt/glusterfs
You can also use the below command to verify the GlusterFS filesystem.

root@client:~# grep glusterfs /proc/mounts
gluster1.itzgeek.local:/gv0 /mnt/glusterfs fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
Add the below entry to /etc/fstab to mount the filesystem automatically at system boot.
gluster1.itzgeek.local:/gv0 /mnt/glusterfs glusterfs defaults,_netdev 0 0
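Optionally, the GlusterFS FUSE mount supports listing a backup volfile server, so the client can still fetch the volume information if gluster1.itzgeek.local happens to be unreachable at boot time (a sketch assuming the backup-volfile-servers mount option available in recent GlusterFS releases):

gluster1.itzgeek.local:/gv0 /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=gluster2.itzgeek.local 0 0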
Test GlusterFS Replication and High-Availability:
GlusterFS Server Side:
To check the replication, mount the created GlusterFS volume on the storage nodes themselves.
[root@gluster1 ~]# mount -t glusterfs gluster2.itzgeek.local:/gv0 /mnt
[root@gluster2 ~]# mount -t glusterfs gluster1.itzgeek.local:/gv0 /mnt
Data inside the /mnt directory of both nodes will always be the same (replication).
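As a quick sanity check, you could create a file through one node's mount and list it through the other (a minimal sketch; the file name testfile is arbitrary):

[root@gluster1 ~]# touch /mnt/testfile
[root@gluster2 ~]# ls -l /mnt/testfile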
GlusterFS Client Side:
Let’s create some files on the mounted filesystem on client.itzgeek.local.
touch /mnt/glusterfs/file1
touch /mnt/glusterfs/file2
Verify the created files.
root@client:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 Sep 28 05:23 file1
-rw-r--r-- 1 root root 0 Sep 28 05:23 file2
Check both GlusterFS nodes to see whether they have the same data inside /mnt.
[root@gluster1 ~]# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 27  2016 file1
-rw-r--r--. 1 root root 0 Sep 27  2016 file2

[root@gluster2 ~]# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 27 16:53 file1
-rw-r--r--. 1 root root 0 Sep 27 16:53 file2
As you know, we mounted the GlusterFS volume from gluster1.itzgeek.local on client.itzgeek.local. Now it is time to test the high availability of the volume by shutting that node down.
[root@gluster1 ~]# poweroff
Now check the availability of the files; you should still see the files we created recently even though the node is down.
root@client:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 Sep 28 05:23 file1
-rw-r--r-- 1 root root 0 Sep 28 05:23 file2
Create some more files on the GlusterFS filesystem to check the replication.
touch /mnt/glusterfs/file3
touch /mnt/glusterfs/file4
Verify the file count.
root@client:~# ls -l /mnt/glusterfs/
total 0
-rw-r--r-- 1 root root 0 Sep 28 05:23 file1
-rw-r--r-- 1 root root 0 Sep 28 05:23 file2
-rw-r--r-- 1 root root 0 Sep 28 05:28 file3
-rw-r--r-- 1 root root 0 Sep 28 05:28 file4
Since gluster1 is down, all of your data is now being written to gluster2.itzgeek.local thanks to high availability. Now power on node1 (gluster1.itzgeek.local).
Check /mnt on gluster1.itzgeek.local; you should see all four files in the directory, which confirms that replication is working as expected.
[root@gluster1 ~]# mount -t glusterfs gluster1.itzgeek.local:/gv0 /mnt
[root@gluster1 ~]# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 27 19:53 file1
-rw-r--r--. 1 root root 0 Sep 27 19:53 file2
-rw-r--r--. 1 root root 0 Sep 27 19:58 file3
-rw-r--r--. 1 root root 0 Sep 27 19:58 file4
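Optionally, you can also confirm that the files written while gluster1 was down have been healed onto its brick; GlusterFS provides a heal info command for this (run it on either node; in a healthy state the number of entries needing heal should be zero):

gluster volume heal gv0 info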
That’s All.