[Ocfs2-users] ls
Zhen Ren
zren at suse.com
Tue Apr 14 19:28:50 PDT 2015
--
Best regards,
Eric, Ren
HA team, SU
>>>
> leopoldo,
>
> why did you use DRBD to share only one storage?
+1
As far as I've seen, OCFS2 typically runs on DRBD in Primary/Secondary mode.
By the way, Primary/Primary seems to be used mainly in cases where KVM/Xen VMs are hosted.
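(For reference: if Primary/Primary is actually intended, i.e. both nodes mount the
OCFS2 volume at once, DRBD has to be told to allow it explicitly. A minimal sketch
of the relevant options in DRBD 8.3 syntax, reusing the resource name wwwdata from
the status output quoted below -- an illustration, not a verified config:)

```
resource wwwdata {
  startup {
    become-primary-on both;              # promote both nodes at startup
  }
  net {
    allow-two-primaries;                 # required for dual-primary operation
    after-sb-0pri discard-zero-changes;  # split-brain recovery policies --
    after-sb-1pri discard-secondary;     #   dual-primary makes these important
    after-sb-2pri disconnect;
  }
}
```

Without allow-two-primaries, promoting the second node fails and a dual mount
cannot work at all.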
>
> >>
> >> ub-ocfs:~# service drbd status
> >> drbd driver loaded OK; device status:
> >> version: 8.3.13 (api:88/proto:86-96)
> >> srcversion: 697DE8B1973B1D8914F04DB
> >> m:res cs ro ds p mounted
> fstype
> >> 1:wwwdata Connected Primary/Primary UpToDate/UpToDate C /opt ocfs2
> >
> > The ro "Primary/Primary" is the part in question.
> >
> > It seems that DRBD cannot work well when both nodes are primary.
> >
> > This doc may help you get some hint.
> >
> > https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_drbd_configure.html
> >
> > Also, improper configuration might lead to this issue.
> >
> >>
> >> ub-ocfs:~# ls /opt/
> >> local lost+found
> >>
> >> ub-ocfs:~# ls /opt/local
> >> Segmentation fault
> >
> > IIRC, I also hit a seg-fault problem once, for some unclear reason; it may
> > have been caused by a setup problem.
> >
> >>
> >> ub-ocfs:~# ps fa
> >> PID TTY STAT TIME COMMAND
> >> 2472 pts/2 Ss 0:00 -bash
> >> 2529 pts/2 R+ 0:00 \_ ps fa
> >> 2299 pts/1 Ss 0:00 -bash
> >> 2356 pts/1 S+ 0:00 \_ -bash
> >> 2361 pts/1 D+ 0:00 \_ ls -l /opt/local/etc /opt/local/games
> >> /opt/local/include /opt/local/leo1....
> >>
> >>
> >> that is the script's ls -l /opt/local/*
> >
> > You mean ls -l /opt/local goes bad, but ls -l /opt/local/* works well?
> >
> >>
> >> ub-ocfs:~# cat /opt/local/leo-ocfs6.sh
> >> date ; echo -n
> >> hostname
> >> ls -l
> >> ls -l /opt/local/le*
> >> ls -l /opt/local/*
> >>
> >>
> >> I can work normal
> >
> > What do you mean by "work normal"? ;-)
> > The other commands work, except ls -l /opt/local, even though the seg-fault
> > already happened?
> >
> >>
> >> ub-ocfs1:~# rm -r /opt/local/bin
> >> ub-ocfs1:~# mkdir /opt/local/etc/bin
> >> ub-ocfs1:~# ls /opt/local/leo*
> >> /opt/local/leo1 /opt/local/leo3 /opt/local/leo-ocfs
> /opt/local/leo-ocfs2
> >> /opt/local/leo-ocfs4 /opt/local/leo-ocfs6
> >> /opt/local/leo2 /opt/local/leo4 /opt/local/leo-ocfs1
> /opt/local/leo-ocfs3
> >> /opt/local/leo-ocfs5 /opt/local/leo-ocfs6.sh
> >>
> >>
> >> but the resource is busy if I want to restart or unload
> >>
> >
> > Don't worry about this.
> > Once the previous problem is fixed, this should go away.
> >
> >> ocfs:~# service o2cb unload
> >> Stopping ocfs2_controld.cman: /sbin/start-stop-daemon: warning: this system is
> >> not able to track process names longer than 15 characters, please use --exec
> >> instead of --name.
> >> Failed
> >> Unable to unload modules as the cluster is still online
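(A sketch of the usual teardown order once nothing is holding the filesystem,
assuming the stock o2cb init script; untested against this particular setup:)

```
umount /opt              # release the OCFS2 mount first
service o2cb offline     # take the cluster offline
service o2cb unload      # module unload should now succeed
service drbd stop        # finally stop DRBD
```

The unload fails here precisely because the mount is still held by the hung
process, so the cluster stack stays online.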
> >>
> >>
> >> because '2361 pts/1 D+ 0:00 \_ ls -l /opt/local/' is in 'D
> >> uninterruptible sleep (usually IO)'
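One way to confirm which tasks are in D state, and what kernel function they are
blocked in, using plain procps:

```shell
# Show the header plus any tasks in uninterruptible sleep (state D),
# with the kernel function they are waiting in (WCHAN).
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'
# For a specific PID (e.g. the hung ls, 2361), /proc/<pid>/stack
# (readable by root) shows the full kernel call chain it is stuck in.
```

If the wchan or stack points into ocfs2 or the DLM, the process stays unkillable
until that cluster I/O completes, which is why only a reboot frees it.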
> >>
> >> If I want control back, I need to reboot the host.
> >>
> >> Does anybody have another solution? :-(
> >
> > hm, what's your solution? ;-)
> >
>
>