[Ocfs2-users] Can't delete LV snapshot after mounting

Sunil Mushran sunil.mushran at oracle.com
Fri Mar 19 09:31:36 PDT 2010


OK, so lvm and md are local to a node.

The hb thread should be stopping on umount. If it is not, try shutting it down manually.

To see the number of refs, do:
# ocfs2_hb_ctl -I -d /dev/sdX o2cb

To decrement a ref, do:
# ocfs2_hb_ctl -K -d /dev/sdX o2cb

The fs decrements the ref using the UUID, so you can also do:
# ocfs2_hb_ctl -K -u UUID o2cb

To find the UUID, do:
# tunefs.ocfs2 -Q "%U\n" /dev/sdX
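
Putting that together for your snapshot, something like this should let you
remove it (a sketch using the device name from your mail; the -I call just
verifies that a stale reference is really what is holding the volume):

# UUID=$(tunefs.ocfs2 -Q "%U\n" /dev/vg00/lv00snap)
# ocfs2_hb_ctl -I -u $UUID o2cb    # expect a leftover reference here
# ocfs2_hb_ctl -K -u $UUID o2cb    # drop the stale reference
# lvremove -f /dev/vg00/lv00snap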

Armin Wied wrote:
> Dear Sunil
>
> Thanks a lot for your reply.
>
> I think I wasn't clear enough on one point, and you may have misunderstood me.
> I don't have LVM on top of DRBD but the other way round:
> OCFS2 -> DRBD -> LVM -> MD-RAID -> physical disk
> It's the same on both machines, so do you still feel queasy about it?
>
> I have already used iSCSI for another setup, but here I wanted to build a
> cluster out of just two single PCs with no single point of failure, so I
> don't see any advantage in using iSCSI in this scenario.
>
> Your guess regarding the o2hb thread is right. There is one after the
> system is rebooted, and a second one is started as soon as I mount the
> snapshot. After unmounting, both are still there, so that appears to be
> the reason why I can't remove the snapshot.
>
> Is there any (safe) way to get rid of it after unmounting?
>
> Thanks
> Armin
>
>
>
>> -----Original Message-----
>> From: Sunil Mushran [mailto:sunil.mushran at oracle.com]
>> Sent: Thursday, March 18, 2010 6:28 PM
>> To: Armin Wied
>> Cc: ocfs2-users at oss.oracle.com
>> Subject: Re: [Ocfs2-users] Can't delete LV snapshot after mounting
>>
>>
>> I am queasy recommending such a setup to anyone. It is one thing to
>> handle a workload; the real problem is handling user/admin errors. You
>> are essentially running a local volume manager that is unaware of the
>> other node. Any reconfiguration that is not coordinated will lead to
>> corruption. Below that you have drbd, which is a fine block-device
>> replication solution. But I would personally choose iscsi, which is an
>> excellent low-cost shared device; it also does not limit you to 2 nodes.
>> The iscsi target in sles is known to be good. Why not just use that, and
>> use drbd to replicate the device (like emc srdf) for still higher
>> availability?
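>>
>> As a rough sketch of that layout (assuming the iSCSI Enterprise Target
>> shipped with sles, ietd; the IQN and device name are placeholders):
>>
>> # /etc/ietd.conf on the storage node: export the replicated drbd device
>> Target iqn.2010-03.local.cluster:ocfs2.disk0
>>     Lun 0 Path=/dev/drbd0,Type=blockio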
>>
>> Having said that, the 2 you are seeing is not because of the number of
>> nodes but because of the hb thread. umount is supposed to stop that hb
>> thread. Maybe that is not happening.
>>
>> # ps aux | grep o2hb
>> You should see one while the volume is mounted and none after it is unmounted.
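>>
>> For example (an illustrative check; the kernel thread is named after the
>> heartbeat region UUID, so the exact name will differ):
>>
>> # ps aux | grep [o]2hb    # one o2hb thread per heartbeating device
>> # umount /mnt/backup
>> # ps aux | grep [o]2hb    # the snapshot's thread should now be gone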
>>
>> Sunil
>>
>> Armin Wied wrote:
>>> Hello group!
>>>
>>> I'm pretty new to ocfs2 and clustered file systems in general.
>>> I was able to set up a 2-node cluster (CentOS 5.4) with ocfs2 1.4.4 on
>>> DRBD on top of an LVM volume.
>>>
>>> Everything works like a charm and is rock solid, even under heavy load
>>> conditions, so I'm really happy with it.
>>>
>>> However, there remains one little problem: I'd like to do backups with
>>> snapshots. Creating the snapshot volume, mounting, copying and unmounting
>>> works as expected. But I can't delete the snapshot volume after it has
>>> been mounted once.
>>>
>>> What I do is:
>>>
>>> lvcreate -L5G -s -n lv00snap /dev/vg00/lv00
>>> tunefs.ocfs2 -y --cloned-volume /dev/vg00/lv00snap
>>> mount -t ocfs2 /dev/vg00/lv00snap /mnt/backup
>>>
>>> (copy stuff)
>>>
>>> umount /mnt/backup
>>> lvremove -f /dev/vg00/lv00snap
>>>
>>> lvremove fails, saying that the volume is open. Checking with lvdisplay
>>> tells me "# open" is 1.
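>>> For reference, I'm reading that value off with something like:
>>> # lvdisplay /dev/vg00/lv00snap | grep '# open'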
>>> And that's the funny thing: after creating the snapshot volume, "# open"
>>> is 0, which is no surprise. After mounting the volume, "# open" is 2 -
>>> the same as for the other ocfs2 volume - which makes sense to me, as
>>> there are 2 nodes. But after unmounting the snapshot volume, the number
>>> only decreases to 1, not to 0, so LVM considers the volume still open.
>>>
>>> I also tried mounting read-only and/or adding "--fs-features=local" to
>>> tunefs.ocfs2, without success. At the moment I have to reboot the node
>>> to be able to remove the snapshot.
>>>
>>> So what am I doing wrong?
>>>
>>> Thanks a lot for any hint!
>>>
>>> Armin



