I retweeted this the other day:

99 little bugs in the code
99 little bugs in the code
Take one down, patch it around
117 little bugs in the code

Looks like it happened again. In an effort to protect you from having self-heal fill up your root partition should your brick fail to mount, a new feature of 3.4.0 is that you can no longer replace a failed hard drive. It turns out that the posix translator checks whether trusted.glusterfs.volume-id exists and matches the volume's id. If it's missing or wrong, it happily rejects your brick and dies with this error:

E [posix.c:4288:init] 0-{volume}-posix: Extended attribute trusted.glusterfs.volume-id is absent

There’s no CLI command to allow that replacement (unless you use “replace-brick…commit force” to somewhere else).

The work-around is to add the volume-id to the new brick:

setfattr -n trusted.glusterfs.volume-id \
  -v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info \
  | cut -d= -f2 | sed 's/-//g') $brick
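
The grep/cut/sed pipeline just pulls the UUID out of the volume's info file and strips the dashes so it can be passed to setfattr as a hex value. A quick sanity check of that extraction against a mocked-up info file (the path and UUID below are made up for illustration):

```shell
# Fake a glusterd info file; real ones live at /var/lib/glusterd/vols/$vol/info
cat > /tmp/fake-info <<'EOF'
type=2
volume-id=a1b2c3d4-e5f6-7890-abcd-ef1234567890
status=1
EOF

# Same extraction the work-around uses: grab the UUID, drop the dashes
id=$(grep volume-id /tmp/fake-info | cut -d= -f2 | sed 's/-//g')
echo "0x$id"
```

After running the actual setfattr on the new brick, you can verify it took with `getfattr -n trusted.glusterfs.volume-id -e hex $brick` (as root); the value printed should match the id from the info file.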

A bug has been filed.