[Ocfs2-users] Re: ceph osd mounting issue with ocfs2 file system

gjprabu gjprabu at zohocorp.com
Tue Sep 1 04:23:13 PDT 2015


Hi Team,



         We are going to use Ceph with OCFS2 in production. My doubt is whether 1 Gig networking will give enough performance and throughput for 12 clients, or whether we need to change things at the network level.



Regards

Prabu




 ---- On Thu, 30 Jul 2015 20:04:54 +0530 gjprabu <gjprabu at zohocorp.com> wrote ----




Thanks Guozhonghua. It's working.



Regards

Prabu


---- On Thu, 30 Jul 2015 16:29:18 +0530 Guozhonghua <guozhonghua at h3c.com> wrote ----




Hi, 

The node numbers should begin with 1 in cluster.conf; you can try that. I always start with 1. 

You should also check the directory: ls -al /sys/kernel/config/cluster/pool/node ; all three nodes should show the same node directory information. 

Otherwise, unmount all the OCFS2 filesystems, copy one /etc/ocfs2/cluster.conf to the other two nodes, then: 

service o2cb offline/unload ;  service o2cb online ;

Remount the disk and check the node info again. 

The cluster.conf file should be the same on every node. 
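For concreteness, the resync procedure described above might look like this on the nodes from this thread (hostnames, mount point, and device come from the original mail; the scp step and running everything as root are assumptions, and the /sys path uses the cluster name "ocfs2" from the conf below rather than "pool"):

```shell
# On the node whose /etc/ocfs2/cluster.conf is authoritative,
# after unmounting the OCFS2 filesystem on every node:
umount /soho/build/downloads
scp /etc/ocfs2/cluster.conf integ-soho:/etc/ocfs2/cluster.conf
scp /etc/ocfs2/cluster.conf integ-hm2:/etc/ocfs2/cluster.conf

# Then on each node: restart the o2cb stack and remount.
service o2cb offline
service o2cb unload
service o2cb online
mount /dev/rbd/rbd/integdownloads /soho/build/downloads

# Verify that all nodes show identical node directories:
ls -al /sys/kernel/config/cluster/ocfs2/node
```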



From: ocfs2-users-bounces at oss.oracle.com [mailto:ocfs2-users-bounces at oss.oracle.com] On Behalf Of gjprabu
Sent: 30 July 2015 17:57
To: ocfs2-users at oss.oracle.com
Subject: [Ocfs2-users] ceph osd mounting issue with ocfs2 file system




Hi All,





   We are using Ceph with two OSDs and three clients. The clients mount the volume with the OCFS2 file system. When I start mounting, only two clients can mount properly; the third client gives the errors below. Sometimes I am able to mount the third client, but data does not sync to it.








mount /dev/rbd/rbd/integdownloads /soho/build/downloads

mount.ocfs2: Invalid argument while mounting /dev/rbd0 on /soho/build/downloads. Check 'dmesg' for more information on this error.





dmesg

[1280548.676688] (mount.ocfs2,1807,4):dlm_send_nodeinfo:1294 ERROR: node mismatch -22, node 0
[1280548.676766] (mount.ocfs2,1807,4):dlm_try_to_join_domain:1681 ERROR: status = -22
[1280548.677278] (mount.ocfs2,1807,8):dlm_join_domain:1950 ERROR: status = -22
[1280548.677443] (mount.ocfs2,1807,8):dlm_register_domain:2210 ERROR: status = -22
[1280548.677541] (mount.ocfs2,1807,8):o2cb_cluster_connect:368 ERROR: status = -22
[1280548.677602] (mount.ocfs2,1807,8):ocfs2_dlm_init:2988 ERROR: status = -22
[1280548.677703] (mount.ocfs2,1807,8):ocfs2_mount_volume:1864 ERROR: status = -22
[1280548.677800] ocfs2: Unmounting device (252,0) on (node 0)
[1280548.677808] (mount.ocfs2,1807,8):ocfs2_fill_super:1238 ERROR: status = -22

OCFS2 configuration

cluster:
        node_count = 3
        heartbeat_mode = local
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.112.192
        number = 0
        name = integ-hm5
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.113.42
        number = 1
        name = integ-soho
        cluster = ocfs2

node:
        ip_port = 7778
        ip_address = 192.168.112.115
        number = 2
        name = integ-hm2
        cluster = ocfs2
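Guozhonghua's point that cluster.conf must be the same on every node can also be sanity-checked mechanically. A minimal sketch, with the assumptions that every node: stanza should share one ip_port and that node numbers must be unique (check_conf and the sample file path are illustrative names; note the conf in this thread mixes ports 7777 and 7778):

```shell
#!/bin/sh
# Sanity checks for an o2cb cluster.conf: all node: stanzas should
# share one ip_port, and node numbers should be unique.
check_conf() {
    conf="$1"
    # Collect all distinct ip_port values.
    ports=$(awk -F= '$1 ~ /ip_port/ { gsub(/[ \t]/, "", $2); print $2 }' "$conf" | sort -u)
    if [ "$(printf '%s\n' "$ports" | wc -l)" -gt 1 ]; then
        echo "WARNING: mixed ip_port values:" $ports
    else
        echo "OK: single ip_port:" $ports
    fi
    # Node numbers appearing more than once are a configuration error.
    dups=$(awk -F= '$1 ~ /number/ { gsub(/[ \t]/, "", $2); print $2 }' "$conf" | sort | uniq -d)
    if [ -n "$dups" ]; then
        echo "WARNING: duplicate node numbers:" $dups
    else
        echo "OK: node numbers unique"
    fi
}

# Demo on a sample mirroring the conf in this thread (7777 vs 7778):
cat > /tmp/cluster.conf.sample <<'EOF'
node:
        ip_port = 7777
        number = 0
node:
        ip_port = 7777
        number = 1
node:
        ip_port = 7778
        number = 2
EOF
check_conf /tmp/cluster.conf.sample
```

Run against /etc/ocfs2/cluster.conf on each node (and diff the files across nodes) before remounting.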





Version: o2cb 1.8.0

OS: CentOS 7 64-bit (kernel 3.18.16)








Regards

Prabu GJ
-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!





_______________________________________________ 

Ocfs2-users mailing list 

Ocfs2-users at oss.oracle.com 

https://oss.oracle.com/mailman/listinfo/ocfs2-users







