[Ocfs2-users] OCFS2 - panic behaviour and cluster status

spgle spgle at wanadoo.fr
Wed Oct 19 12:51:27 CDT 2005


I upgraded to release 1.1.6, using ocfs2-tools revision 1100
from the Subversion repository.
I still have the same problem. The mounted FS:

/dev/sda1 on /mnt/OCFS2 type ocfs2 (rw,_netdev,heartbeat=local)
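
For reference, the matching /etc/fstab entry would look something like
this (a sketch, assuming the options shown above):

/dev/sda1   /mnt/OCFS2   ocfs2   _netdev,heartbeat=local   0 0

The _netdev option keeps the mount from being attempted before the
network, and hence the iSCSI session, is up.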

Context:
ocfs2: 1.1.6
ocfs2-tools: latest Subversion revision (1100)

Lines shown on the console at panic:

(6,0): o2hb_write_timeout:164 ERROR: Heartbeat write timeout to device 
sda1 after 12000 milliseconds
(6,0): o2hb_stop_all_regions:1727 ERROR: stopping heartbeat on all active 
regions.

Kernel panic - not syncing: ocfs2 is very sorry to be fencing this 
system by panicing.

/dev/sda1 is an iSCSI device.
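
The 12000 ms in the timeout message matches the default threshold: the
o2cb timeout is roughly (O2CB_HEARTBEAT_THRESHOLD - 1) * 2 seconds, so
the default of 7 gives 12 seconds. The change I describe in my earlier
mail quoted below goes into the o2cb configuration; a sketch (the file
path may differ per distribution):

# /etc/sysconfig/o2cb -- read by the o2cb init script
# Timeout is roughly (threshold - 1) * 2 seconds:
# 7 -> 12 s (the 12000 ms above), 3001 -> ~100 minutes.
O2CB_HEARTBEAT_THRESHOLD=3001

The cluster stack has to be restarted (unmount the FS, then
/etc/init.d/o2cb restart) before the new value takes effect.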

Sunil Mushran wrote:

> 1.1.2 is fairly old in the 1.1.x stream. Use 1.1.6.
> You are hitting a known bug.
>
> spgle wrote:
>
>> Hi,
>> I have a few questions about OCFS2 administration and features:
>>
>> - I have the following architecture (simple, as a first step: a one-node
>> cluster...):
>>   - an iSCSI server
>>   - a client using the iSCSI-exported device with an OCFS2 filesystem
>>
>>  ocfs2 is version 1.1.2, the Linux kernel is version 2.6.12.3.
>>
>>  When I run a benchmark (using iozone) on the client on the mounted
>> ocfs2 fs, the kernel panics when it reaches a large file:
>> =>    "ocfs2 is very sorry to be fencing this system by panicing".
>>
>> In order to finish the benchmark, I changed O2CB_HEARTBEAT_THRESHOLD
>> from 7 to 3001; this way the benchmark can complete
>> without problems.
>>
>> Is there a way/feature to avoid panicking and instead get I/O errors
>> on the system calls using the OCFS2 fs?
>>
>> How can I get information about all the nodes of the cluster (status
>> of each node, quorum, DLM usage)?
>>
>> thanks.



