If `/data` is not accessible, follow this recovery process.
## Step 1: Recover

The primary recovery step uses your fstab configuration. It:

- Changes to the root directory (`cd /`)
- Lazily unmounts `/data` if mounted (`umount -l /data || true`)
- Mounts `/data` using your fstab configuration (`mount /data`)
- Uses `n01.modms` as the primary volfile server
- Falls back to `n02.modms` if the primary is unavailable
- Sets `_netdev` to ensure the network is available before mounting
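Taken together, the recovery step above amounts to the following commands:

```shell
cd /                          # avoid holding a working directory under /data
sudo umount -l /data || true  # lazy-unmount if currently (stale-)mounted
sudo mount /data              # remount using the fstab configuration
```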
## Step 2: Verify

After recovery, verify the mountpoint is working correctly.

### Verification Criteria

**Mount source (`findmnt -T /data`):**
- OK if SOURCE shows `n01.modms:/gv0` or `n02.modms:/gv0`
- Either source is acceptable; it only indicates which volfile server was used

**Disk usage (`df -h /data`):**
- OK if size ≈ 7.0T and usage matches n02 (~3.2–3.3T)
- Verifies the filesystem is accessible and shows the expected capacity

**Heal status (`gluster volume heal gv0 info`):**
- OK if `Number of entries: 0` for all bricks
- Indicates no pending heal operations (data is consistent)

**Peer status (`gluster peer status`):**
- OK if both peers are connected
- Shows `n01.modms` and `n02.modms` are in the cluster
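The four checks above can be run in one pass:

```shell
findmnt -T /data                   # SOURCE should be n01.modms:/gv0 or n02.modms:/gv0
df -h /data                        # size ≈ 7.0T, usage ~3.2–3.3T
sudo gluster volume heal gv0 info  # Number of entries: 0 for all bricks
sudo gluster peer status           # both peers should show State: Peer in Cluster (Connected)
```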
## Step 3: If Mount Still Fails

If Step 1 doesn't resolve the issue, run these commands in order, then retry Step 1:

1. Restart the Gluster service
2. Check peer status
3. Check volume status
4. Retry the mount (Step 1)
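Assuming the standard `glusterd.service` systemd unit and the volume name `gv0` used elsewhere in this document, the four steps can be sketched as:

```shell
# 1. Restart the Gluster management daemon
sudo systemctl restart glusterd.service

# 2. Check that both peers are connected
sudo gluster peer status

# 3. Check that the volume is started and all bricks are online
sudo gluster volume status gv0

# 4. Retry the mount (Step 1)
cd /
sudo umount -l /data || true
sudo mount /data
```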
## Optional: Force Mount

If the standard mount still fails, you can force mount using a specific peer. Seeing `n02.modms:/gv0` in the mount output is normal if you mounted via that peer; it only indicates the volfile server used. I/O still goes to all bricks in the replicated volume.

This approach:

- Forces mounting using `n02.modms` as the volfile server
- Bypasses the fstab configuration
- Is useful when the primary server (`n01.modms`) is unavailable
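A minimal sketch of the forced mount, using the standard GlusterFS client mount syntax with the volume and mountpoint from this document:

```shell
# Mount /data directly via n02.modms, bypassing the fstab entry
sudo mount -t glusterfs n02.modms:/gv0 /data
```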
## Troubleshooting Checklist

### Service not running

If `systemctl status glusterd.service` shows the service as inactive, restart it.
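For example:

```shell
# Restart the management daemon and confirm it is active
sudo systemctl restart glusterd.service
sudo systemctl status glusterd.service
```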
### Volume not started

Check if the GlusterFS volume is started. If the volume is stopped, start it.
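Using the volume name `gv0` from the rest of this document:

```shell
# Check volume state; "Status: Started" means it is running
sudo gluster volume info gv0

# If it shows "Status: Stopped", start it
sudo gluster volume start gv0
```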
### Network connectivity issues

Ensure all GlusterFS nodes can communicate. Check firewall rules to ensure the GlusterFS ports are open (default: 24007, 24008, and dynamic brick ports).
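A quick connectivity check between the two nodes (assumes `nc` from netcat is installed; run the pings from the opposite node):

```shell
# From each node, check reachability of the other peer
ping -c 3 n01.modms
ping -c 3 n02.modms

# Check that the Gluster management port is reachable
nc -zv n01.modms 24007
```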
### Peer disconnected

If peers show as disconnected, re-probe the missing peer, then verify the peer status.
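For example, if `n02.modms` shows as disconnected (swap the hostname if the other peer is the one missing):

```shell
# Re-probe the disconnected peer
sudo gluster peer probe n02.modms

# Then verify
sudo gluster peer status
```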
## Understanding the Mount Process

**Volfile server vs. data bricks:**

- The volfile server (`n01.modms` or `n02.modms`) only provides the volume configuration
- Actual data I/O goes to all bricks in the replicated volume
- Mounting via either peer is functionally equivalent

**Mount options:**

- `backupvolfile-server=n02.modms` provides automatic failover: if `n01.modms` is unavailable, the client automatically tries `n02.modms`
- `_netdev` ensures the network is ready before mounting
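Putting these options together, the `/etc/fstab` entry described above would look roughly like this (illustrative; your actual line may use additional options):

```
# /etc/fstab — GlusterFS mount with backup volfile server
n01.modms:/gv0  /data  glusterfs  defaults,_netdev,backupvolfile-server=n02.modms  0 0
```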

