[Ocfs2-users] RBD with OCFS2

Srinivas Eeda srinivas.eeda at oracle.com
Thu Sep 24 09:02:24 PDT 2015



On 09/24/2015 04:27 AM, gjprabu wrote:
>
>     Hi All,
>
>        Can someone tell me what kind of error this is and what might cause it?
>
>     Regards
>     Prabu GJ
>
>     ---- On Wed, 23 Sep 2015 18:26:13 +0530 gjprabu
>     <gjprabu at zohocorp.com> wrote ----
>
>         Hi All,
>
>               We have also faced this issue on a local machine, but it
>         does not affect all clients; only two OCFS2 clients are hitting it.
>
>         Regards
>         Prabu GJ
>
>
>
>         ---- On Wed, 23 Sep 2015 17:49:51 +0530 gjprabu
>         <gjprabu at zohocorp.com> wrote ----
>
>
>
>             Hi All,
>
>                        We are using OCFS2 on an RBD device and
>             everything works fine, but when our scripts write or move
>             data, the error below appears after the writes complete.
>             Could anybody please help with this issue?
>
>
>
>             # ls -althr
>             ls: cannot access MICKEYLITE_3_0_M4_1_TEST: Input/output error
>             ls: cannot access MICKEYLITE_3_0_M4_1_OLD: Input/output error
>
The EIO (Input/output error) appears to be coming from your RBD device. Check
the messages file for any related errors and resolve them.
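
For example, something along these lines (a quick sketch, assuming a
syslog-based setup; the log may be /var/log/syslog on Debian/Ubuntu rather
than /var/log/messages):

    # Look for block-layer or Ceph client errors around the time of the EIO
    dmesg | grep -iE 'rbd|libceph|i/o error'
    grep -iE 'rbd|libceph' /var/log/messages | tail -n 50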

>             total 0
>             d?????????  ? ?     ?        ?  ? MICKEYLITE_3_0_M4_1_TEST
>             d?????????  ? ?     ?        ?  ? MICKEYLITE_3_0_M4_1_OLD
>
>             Partition details:
>
>             /dev/rbd0            ocfs2     9.6T  140G  9.5T   2%  /zoho/build/downloads
>
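
Since an EIO would originate below the filesystem, it may also be worth
confirming the RBD mapping and overall Ceph cluster health on the two
affected clients, for example:

    # Confirm the image is still mapped and check cluster health
    rbd showmapped
    ceph -s
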
>             /etc/ocfs2/cluster.conf
>             cluster:
>                     node_count = 7
>                     heartbeat_mode = local
>                     name = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.50
>                     number = 1
>                     name = integ-hm5
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.51
>                     number = 2
>                     name = integ-hm9
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.52
>                     number = 3
>                     name = integ-hm2
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.53
>                     number = 4
>                     name = integ-ci-1
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.54
>                     number = 5
>                     name = integ-cm2
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.55
>                     number = 6
>                     name = integ-cm1
>                     cluster = ocfs2
>
>             node:
>                     ip_port = 7777
>                     ip_address = 10.1.1.56
>                     number = 7
>                     name = integ-hm8
>                     cluster = ocfs2
>
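With node_count = 7, every node must be able to reach every other node on the
o2net port. A quick reachability sketch to run from each of the two affected
clients (node list taken from the cluster.conf above; assumes nc is
installed):

    # Verify TCP connectivity to each peer on the o2net port (7777)
    for ip in 10.1.1.50 10.1.1.51 10.1.1.52 10.1.1.53 10.1.1.54 10.1.1.55 10.1.1.56; do
        nc -z -w 2 "$ip" 7777 && echo "$ip reachable" || echo "$ip UNREACHABLE"
    done
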
>
>             Errors in dmesg:
>
>
>             [516421.342393] (dlm_thread,51005,25):dlm_flush_asts:599 ERROR: status = -112
>             [517119.689992] (httpd,64399,31):dlm_do_master_request:1383 ERROR: link to 1 went down!
>             [517119.690003] (dlm_thread,51005,25):dlm_send_proxy_ast_msg:482 ERROR: A895BC216BE641A8A7E20AA89D57E051: res S000000000000000000000200000000, error -112 send AST to node 1
>             [517119.690028] (dlm_thread,51005,25):dlm_flush_asts:599 ERROR: status = -112
>             [517119.690034] (dlm_thread,51005,25):dlm_send_proxy_ast_msg:482 ERROR: A895BC216BE641A8A7E20AA89D57E051: res S000000000000000000000200000000, error -107 send AST to node 1
>             [517119.690036] (dlm_thread,51005,25):dlm_flush_asts:599 ERROR: status = -107
>             [517119.700425] (httpd,64399,31):dlm_get_lock_resource:968 ERROR: status = -112
>             [517517.894949] (dlm_thread,51005,25):dlm_send_proxy_ast_msg:482 ERROR: A895BC216BE641A8A7E20AA89D57E051: res S000000000000000000000200000000, error -112 send AST to node 1
>             [517517.899640] (dlm_thread,51005,25):dlm_flush_asts:599 ERROR: status = -112
>
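
For reference, those negative status codes are Linux errno values: -112 is
EHOSTDOWN ("Host is down") and -107 is ENOTCONN ("Transport endpoint is not
connected"), i.e. o2net lost its TCP connection to node 1. If Python is
available, you can decode such codes yourself:

    # Print the error messages for errno 112 and 107
    python -c 'import os; print(os.strerror(112)); print(os.strerror(107))'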
>
>             Regards
>             Prabu GJ