[Ocfs2-users] ocfs volumes corrupt?

Sunil Mushran Sunil.Mushran at oracle.com
Wed Dec 19 15:35:14 PST 2007


mounted.ocfs2 -f does a dirty read of the slot_map to show that
information. The only time this information is incorrect is when the
last node that had the volume mounted crashed. The slot_map is
recovered only when that volume is mounted again.

Do:
$ watch -d -n2 "debugfs.ocfs2 -R \"hb\" /dev/sde1"

This will show whether any node is still heartbeating. There are
instances where a node crashes but continues heartbeating. See if that
is the case. If so, reset that box.

Lastly, to force a check, run:
$ fsck.ocfs2 -f /dev/sde1
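
Putting those steps together, a possible recovery sequence looks like
the following sketch (using /dev/sde1 from the report below; substitute
your own device, and note this needs root and the ocfs2-tools package):

```shell
# Watch the disk heartbeat area; a crashed node can keep
# heartbeating until it is reset, which blocks recovery.
watch -d -n2 'debugfs.ocfs2 -R "hb" /dev/sde1'

# Once no node is heartbeating on the volume, force a full
# check. This also clears the stale slot_map entries left
# behind by the crash.
fsck.ocfs2 -f /dev/sde1

# Confirm the volume no longer shows up as mounted.
mounted.ocfs2 -f
```

Repeat the fsck for each affected volume before re-enabling the fstab
entries and remounting.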

Kendall, Kim wrote:
>
> I have a 4 node RAC cluster that had all 4 nodes crash. The systems 
> locked up.
>
>  
>
> Every time I tried to bring up a box, it hung. So I commented out all 
> the ocfs2 volumes in the fstab file and the system booted fine.
>
>  
>
> I have 3 volumes that were in use at the time and 3 volumes that 
> were unmounted. After booting, even though no volumes are mounted 
> (they are commented out of the fstab), I can see that it THINKS they are!
>
>  
>
> #mounted.ocfs2 -f
>
> Device                FS     Nodes
>
> /dev/sde1             ocfs2  appsdb1
>
> /dev/sdf1             ocfs2  appsdb3
>
> /dev/sdg1             ocfs2  appsdb3
>
> /dev/sdh1             ocfs2  Not mounted
>
> /dev/sdi1             ocfs2  Not mounted
>
> /dev/sdj1             ocfs2  Not mounted
>
>  
>
> I set o2cb and ocfs2 NOT to start on boot up on all 4 nodes and 
> rebooted. After the reboot it looks like the volumes think they are 
> still in use by node 0 and 2.
>
>  
>
> #mounted.ocfs2 -f
>
> Device                FS     Nodes
>
> /dev/sde1             ocfs2  0
>
> /dev/sdf1             ocfs2  2
>
> /dev/sdg1             ocfs2  2
>
> /dev/sdh1             ocfs2  Not mounted
>
> /dev/sdi1             ocfs2  Not mounted
>
> /dev/sdj1             ocfs2  Not mounted
>
>  
>
> I started o2cb and ran fsck.ocfs2 on one of the volumes.
>
>  
>
> #fsck.ocfs2 /dev/sde1
>
> Checking OCFS2 filesystem in /dev/sde1:
>
>   label:              st_dbf1
>
>   uuid:               08 dd 84 44 da 6e 4e 7e 80 a0 82 2f fd 80 de c0
>
>   number of blocks:   131070208
>
>   bytes per block:    4096
>
>   number of clusters: 511993
>
>   bytes per cluster:  1048576
>
>   max slots:          4
>
>  
>
> /dev/sde1 is clean.  It will be checked after 20 additional mounts.
>
>  
>
>  
>
> Where should I be looking here?
>
>  
>
> TIA
>
>  
>
>  
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users



