If the GlusterFS mountpoint at /data is not accessible, follow this comprehensive recovery process.

Step 1: Recover

The primary recovery step that uses your fstab configuration:
cd / && umount -l /data || true; mount /data
What this does:
  • Changes to root directory (cd /)
  • Lazy unmounts /data if mounted (umount -l /data || true)
  • Mounts /data using your fstab configuration (mount /data)
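To confirm the mount actually came back before moving on, a quick check (assuming util-linux's mountpoint utility is installed, which it is on most distributions):
mountpoint -q /data && echo "/data is mounted" || echo "/data is NOT mounted"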
Fstab Configuration: Your fstab is configured with primary and backup servers:
cat /etc/fstab
Expected output:
n01.modms:/gv0 /data glusterfs defaults,_netdev,backupvolfile-server=n02.modms 0 0
This configuration:
  • Uses n01.modms as the primary volfile server
  • Falls back to n02.modms if the primary is unavailable
  • Sets _netdev to ensure network is available before mounting
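If you ever edit this entry, the fstab syntax can be sanity-checked before the next mount attempt; a minimal check, assuming a util-linux version recent enough to provide the --verify option:
findmnt --verify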

Step 2: Verify

After recovery, verify the mountpoint is working correctly with the checks below:
findmnt -T /data
df -h /data
gluster volume heal gv0 info
gluster peer status

Verification Criteria

Mount Source (findmnt -T /data):
  • OK if SOURCE shows n01.modms:/gv0 or n02.modms:/gv0
  • Either source is acceptable; it indicates which volfile server was used
Disk Usage (df -h /data):
  • OK if size ≈ 7.0T and usage matches n02 (~3.2–3.3T)
  • Verifies the filesystem is accessible and shows expected capacity
Heal Status (gluster volume heal gv0 info):
  • OK if Number of entries: 0 for all bricks
  • Indicates no pending heal operations (data is consistent)
Peer Status (gluster peer status):
  • OK if both peers are connected
  • Shows n01.modms and n02.modms are in the cluster
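These checks can also be scripted; the sketch below assumes bash, root privileges for the gluster commands, and the hostnames and volume name used in this setup:
#!/usr/bin/env bash
# Verification sketch for the /data GlusterFS mount -- adjust names to your setup.
set -u

# 1. Mount source should be one of the two volfile servers.
src=$(findmnt -T /data -n -o SOURCE)
case "$src" in
  n01.modms:/gv0|n02.modms:/gv0) echo "OK: mounted from $src" ;;
  *) echo "FAIL: unexpected or missing source: '$src'" ;;
esac

# 2. No pending heal entries on any brick.
if gluster volume heal gv0 info | grep '^Number of entries:' | grep -qv ': 0$'; then
  echo "FAIL: pending heal entries on at least one brick"
else
  echo "OK: no pending heal entries"
fi

# 3. Disk usage and peer status still need a human eye.
df -h /data
gluster peer status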

Step 3: If Mount Still Fails

If Step 1 doesn’t resolve the issue, run the following commands in order:
1. Restart Gluster Service
systemctl restart glusterd
Restarts the GlusterFS daemon service.
2. Check Peer Status
gluster peer status
Verify both peers are connected after restart.
3. Check Volume Status
gluster volume status gv0
Ensure the volume is started and all bricks are online.
4. Retry Mount
Retry Step 1:
cd / && umount -l /data || true; mount /data

Optional: Force Mount

If the standard mount still fails, you can force mount using a specific peer:
mount -t glusterfs n02.modms:/gv0 /data
df -h
Seeing n02.modms:/gv0 in the mount output is normal if you mounted via that peer. It only indicates the volfile server used. I/O still goes to all bricks in the replicated volume.
What this does:
  • Forces mounting using n02.modms as the volfile server
  • Bypasses fstab configuration
  • Useful when primary server (n01.modms) is unavailable
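If you want the forced mount to keep the same failover behaviour as the fstab entry, the backup server can be passed as a mount option as well (a sketch that reuses the option already present in /etc/fstab, with the roles reversed):
mount -t glusterfs -o backupvolfile-server=n01.modms n02.modms:/gv0 /data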

Troubleshooting Checklist

If systemctl status glusterd.service shows the service as inactive:
systemctl start glusterd.service
systemctl enable glusterd.service
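On systemd-based systems the two commands can be combined into one:
systemctl enable --now glusterd.service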
Check if the GlusterFS volume is started:
gluster volume status gv0
If the volume is stopped, start it:
gluster volume start gv0
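To see at a glance whether every brick is online, the status table can be filtered (a convenience one-liner; the Online column should show Y for each brick):
gluster volume status gv0 | grep '^Brick'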
Ensure all GlusterFS nodes can communicate:
ping n01.modms
ping n02.modms
Check firewall rules to ensure GlusterFS ports are open (default: 24007, 24008, and dynamic ports).
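How you open them depends on the firewall in use; a sketch for firewalld (24007–24008 are the management ports named above, while the brick port range is deployment-specific and should be adjusted to match your bricks):
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49160/tcp   # brick ports; adjust range to your deployment
firewall-cmd --reload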
If peers show as disconnected:
gluster peer probe n01.modms
gluster peer probe n02.modms
Then verify:
gluster peer status
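On a healthy two-node cluster, each node should report exactly one remote peer in the connected state; a quick count (the state string below is the stock GlusterFS wording, so adjust if your version prints it differently):
gluster peer status | grep -c 'Peer in Cluster (Connected)'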

Understanding the Mount Process

Volfile Server vs. Data Bricks:
  • The volfile server (n01.modms or n02.modms) provides configuration
  • Actual data I/O goes to all bricks in the replicated volume
  • Mounting via either peer is functionally equivalent
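To confirm which volfile server the running client actually used, the FUSE client process records it on its command line (a quick inspection, assuming the standard glusterfs FUSE client):
ps -o args= -C glusterfs
Look for the --volfile-server= argument in the output.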
Fstab Configuration:
  • backupvolfile-server=n02.modms provides automatic failover
  • If n01.modms is unavailable, it automatically tries n02.modms
  • _netdev ensures network is ready before mounting