[Ocfs2-users] Another node is heartbeating in our slot! errors with LUN removal/addition
Brian Kroth
bpkroth at gmail.com
Sun Dec 21 14:46:46 PST 2008
Thanks for the info. I've just now finally gotten around to trying
this. Here's my setup:
A three node cluster and a separate backup node. The SAN is configured
to give the cluster access to just the live volume and the backup node
access to just the snapshots. The backup node is also given a different
OCFS2 cluster.conf file so that it doesn't try to communicate with the
other nodes. The kernel is based on Debian's 2.6.26 kernel. The FS
version is listed as 1.5.0 and the tools are 1.4.1.
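For reference, the backup node's separate cluster.conf just describes a
one-node cluster of its own, something along these lines (the node name,
IP address, and cluster name here are placeholders, not my real values):

cluster:
        node_count = 1
        name = backuponly

node:
        ip_port = 7777
        ip_address = 192.168.1.50
        number = 0
        name = backup1
        cluster = backuponly

With something like that in place the backup node runs its own o2cb
stack and never tries to talk to the live cluster nodes. The snapshot
procedure is then: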
1) Snapshot the volume on the SAN (EqualLogic).
2) Replay the journal:
   # fsck.ocfs2 -y $snapshot_dev
3) Clear the dirty bits:
   # mount $snapshot_dev /snapshots/$snapshot_datetime
   # sleep 300    # let the fs sit for a bit
   # umount $snapshot_dev
4) Change the fs uuid, label, and type:
   # yes | tunefs.ocfs2 -U -M local -L $snapshot_datetime $snapshot_dev
5) On the SAN, change the snapshot to readonly.
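Strung together, steps 2-4 on the backup node come out to roughly the
script below. This is a sketch rather than my exact script; the device
path and the datestamp handling in particular are just illustrative:

#!/bin/sh
# Sketch of steps 2-4 above. $snapshot_dev is whatever device the
# backup node sees the new snapshot as after logging into the SAN.
snapshot_dev=/dev/sdb1
snapshot_datetime=$(date +%Y-%m-%d_%H%M)

# 2) Replay the journal left over from the live cluster.
fsck.ocfs2 -y "$snapshot_dev"

# 3) Mount and unmount once to clear the dirty bits.
mkdir -p "/snapshots/$snapshot_datetime"
mount "$snapshot_dev" "/snapshots/$snapshot_datetime"
sleep 300    # let the fs sit for a bit
umount "/snapshots/$snapshot_datetime"

# 4) New uuid, local mount type, and a label naming the snapshot.
yes | tunefs.ocfs2 -U -M local -L "$snapshot_datetime" "$snapshot_dev"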
Now, if I try to mount that snapshot with "-o ro", the mount succeeds,
but I get strange behavior and eventually a kernel oops. Some example
output is below.
If I leave off step 5 and mount it ro, everything seems to be ok, but
I'd rather have the SAN enforce readonly access. Is there anything to
be done about this?
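For concreteness, the read-only mount attempt looks like this (the
blockdev call is just how I'd confirm the SAN-side flag took effect; the
device name is an example):

# blockdev --getro /dev/sdb1    # prints 1 once the SAN marks the snapshot readonly
# mount -t ocfs2 -o ro /dev/sdb1 /snapshots/$snapshot_datetime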
Thanks,
Brian
# ls -la /snapshots/$snapshot_datetime/
ls: cannot access lost+found: Read-only file system
ls: cannot access mail: Read-only file system
ls: cannot access .shareddiskfs-ha_monitor-only.mercury: Read-only file system
ls: cannot access .shareddiskfs-ha_monitor-only.iris: Read-only file system
ls: cannot access .shareddiskfs-ha_monitor-only.hermes: Read-only file system
ls: cannot access tmp: Read-only file system
total 8
drwxr-xr-x 5 root root 4096 2008-12-11 10:29 ./
drwxr-xr-x 3 root root 4096 2008-12-21 14:07 ../
d????????? ? ? ? ? ? lost+found/
d????????? ? ? ? ? ? mail/
-????????? ? ? ? ? ? .shareddiskfs-ha_monitor-only.hermes
-????????? ? ? ? ? ? .shareddiskfs-ha_monitor-only.iris
-????????? ? ? ? ? ? .shareddiskfs-ha_monitor-only.mercury
d????????? ? ? ? ? ? tmp/
Some output from dmesg:
[ 747.120562] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 796.331467] OCFS2 1.5.0
[ 796.381325] Readonly device detected. No cluster services will be utilized for this mount. Recovery will be skipped.
[ 796.381325] ocfs2: Mounting device (8,17) on (node local, slot -1) with ordered data mode.
[ 814.890455] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 814.890455] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 814.940692] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 814.940745] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 814.951784] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 814.951784] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 814.982553] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 814.982610] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 814.997255] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 814.997308] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 815.014511] (2720,0):ocfs2_dentry_attach_lock:290 ERROR: status = -30
[ 815.014575] (2720,0):ocfs2_lookup:168 ERROR: status = -30
[ 818.490805] BUG: unable to handle kernel NULL pointer dereference at 00000018
[ 818.490805] IP: [<e0c98a04>] :ocfs2:ocfs2_statfs+0x1a5/0x2d3
[ 818.490805] *pde = 00000000
[ 818.490805] Oops: 0000 [#1] SMP
[ 818.490805] Modules linked in: ocfs2 crc32c libcrc32c ocfs2_dlmfs
ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs
ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp
libiscsi scsi_transport_iscsi ipv6 dm_snapshot dm_mirror dm_log dm_mod
usbhid hid ff_memless uhci_hcd ohci_hcd ehci_hcd usbcore snd_pcm
snd_timer snd soundcore snd_page_alloc serio_raw parport_pc i2c_piix4
intel_agp shpchp pci_hotplug button ac container parport pcspkr psmouse
i2c_core agpgart evdev ext3 jbd mbcache sd_mod ide_cd_mod cdrom
ata_generic libata dock floppy pcnet32 mii mptspi mptscsih mptbase
scsi_transport_spi scsi_mod piix ide_pci_generic ide_core thermal
processor fan thermal_sys
[ 818.490805]
[ 818.490805] Pid: 2408, comm: collectd Not tainted
(2.6.26.vmwareguest-smp.081125 #1)
[ 818.490805] EIP: 0060:[<e0c98a04>] EFLAGS: 00010246 CPU: 0
[ 818.490805] EIP is at ocfs2_statfs+0x1a5/0x2d3 [ocfs2]
[ 818.490805] EAX: 00000000 EBX: df2d14b0 ECX: 00000000 EDX: 00004848
[ 818.490805] ESI: 00000000 EDI: 00000000 EBP: dee1e000 ESP: dee1fe68
[ 818.490805] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
[ 818.490805] Process collectd (pid: 2408, ti=dee1e000 task=df570040
task.ti=dee1e000)
[ 818.490805] Stack: 00000001 b7f6f000 00000000 dee1fea8 dea41000
df2d5790 00000000 dee1fea8
[ 818.490805] df2d14b0 dee1fefc dee1e000 c016f36c 00000000
dee1fea8 dee1ff08 c016f394
[ 818.490805] 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
[ 818.490805] Call Trace:
[ 818.490805] [<c016f36c>] vfs_statfs+0x43/0x5b
[ 818.490805] [<c016f394>] vfs_statfs64+0x10/0x21
[ 818.490805] [<c016f449>] sys_statfs64+0x44/0x7b
[ 818.490805] [<c0161ac8>] vma_adjust+0x310/0x397
[ 818.490805] [<c01604dc>] free_pgtables+0x84/0x93
[ 818.490805] [<c010edf8>] flush_tlb_mm+0x39/0x5d
[ 818.490805] [<c0161da3>] do_munmap+0x181/0x19b
[ 818.490805] [<c0161df6>] sys_munmap+0x39/0x3e
[ 818.490805] [<c010313f>] sysenter_past_esp+0x78/0xb1
[ 818.490805] =======================
[ 818.490805] Code: 68 40 04 00 00 68 b1 e2 c9 e0 50 ff b3 0c 01 00 00
68 2b 67 ca e0 e8 9a 88 48 df 83 c4 1c e9 21 01 00 00 8b 44 24 18 89 44
24 08 <8b> 40 18 8b 88 bc 00 00 00 89 cf 2b b8 b8 00 00 00 8b 44 24 0c
[ 818.490805] EIP: [<e0c98a04>] ocfs2_statfs+0x1a5/0x2d3 [ocfs2] SS:ESP
0068:dee1fe68
[ 818.490805] ---[ end trace ae323790ea69e92a ]---
Ulf Zimmermann <ulf at openlane.com> 2008-12-05 12:51:
> > -----Original Message-----
> > From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-
> > bounces at oss.oracle.com] On Behalf Of Brian Kroth
> > Sent: 12/05/2008 06:11
> > To: Daniel Keisling
> > Cc: ocfs2-users at oss.oracle.com; Joel Becker
> > Subject: Re: [Ocfs2-users] Another node is heartbeating in our slot!
> > errors with LUN removal/addition
> >
> > Just for clarity, can you post the proper sequence you're now using to
> > take SAN based snapshots? I'd like to try this on a new cluster I'm
> > setting up.
> >
> > Thanks,
> > Brian
> >
>
> Here is how we do backup and refresh of development databases from our
> production database. The SANs involved in this are 3Par E200 and S400
> using Rcopy and SnapClone.
>
> Production database gets put into backup mode
> Execute either Rcopy refresh or SnapClone Refresh on S400 (Rcopy for
> Dev, SnapClone for Backup)
> Take production database out of backup mode
>
> For backups we continue with:
>
> Running fsck to replay the journal
> Mounting and unmounting the volume on the backup server to clear the
> dirty flag; just running fsck will not do that.
> At this point we reset the UUID and label of the volume so we don't
> run into issues when we want to mount 2 different versions of the
> snapclone
> Running fsck one more time to ensure no errors
> Mount volume
> Recover database via log files
> Clean shutdown of database
> Backup Database in cold state
> Unmount volume
>
> For development database refresh:
>
> The Rcopy above refreshed a master volume on the E200 SAN
> We shut down development database X (we have several copies) and
> unmount the volume on all nodes in the RAC cluster
> Run SnapClone refresh command on SAN
> Run fsck from one node to replay journal
> Mount and unmount volume on one node to clear dirty flag
> Reset UUID and label
> Run fsck one more time
> Mount volume on all RAC nodes again
> Recover the database from logs and change its name to the development
> name
> At this point other scripts run to modify the contents of the database
> (email addresses, phone numbers, etc.)
> And voila, the database is ready for use by developers.
>
>
>
> Ulf Zimmermann | Senior System Architect
>
> OPENLANE
> 4600 Bohannon Drive, Suite 100
> Menlo Park, CA 94025
>
> O: 650-532-6382 M: (510) 396-1764 F: (510) 580-0929
>
> Email: ulf at openlane.com | Web: www.openlane.com