[Ocfs2-users] Recreated FS - getting "no free slots available" message when trying to mount
Sunil Mushran
Sunil.Mushran at oracle.com
Tue Aug 21 23:40:25 PDT 2007
Do:
# debugfs.ocfs2 -R "slotmap" /dev/sdX
The slot map maintains the slot-to-node-number mappings.
If the above shows that not all slots are in use, then we'll have to do
some tracing.
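As a rough sketch of how to read that dump, the snippet below counts the mapping rows in a captured slotmap listing. The sample dump is an assumed illustration (the exact format can differ between ocfs2-tools versions), not output from the reporter's system; on a live volume you would pipe `debugfs.ocfs2 -R "slotmap" /dev/sdX` into it instead.

```shell
# Assumed sample slotmap dump -- illustrative only, format may vary
# by ocfs2-tools version. Three slots mapped to nodes 0, 1 and 2.
slotmap_dump='Slot#   Node#
0       0
1       1
2       2'

# Count the mapping rows, i.e. everything after the header line.
used=$(printf '%s\n' "$slotmap_dump" | tail -n +2 | wc -l)
echo "slots in use: $used"
```

If the count here is lower than the number of nodes trying to mount, that matches the "no free slots" symptom and the tracing below becomes relevant.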
# debugfs.ocfs2 -l SUPER ENTRY EXIT allow
# mount -t ocfs2 /dev/sde1 /mntpnt
# debugfs.ocfs2 -l SUPER off ENTRY EXIT deny
Preferably file a bugzilla with the traces (/var/log/messages) and also
the information you have already posted.
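One quick sanity check that can be scripted: the system directory should hold one journal per node slot. The sketch below compares the journal count against "Max Node Slots" (6, per the stats output quoted below). The trimmed listing is copied from the `ls -l //` output already posted; on a live volume you would substitute `debugfs.ocfs2 -R "ls -l //" /dev/sde1` for the here-string.

```shell
# Trimmed listing copied from the reporter's "ls -l //" output below.
listing='58 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0000
59 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0001
60 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0002
61 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0003
62 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0004
63 -rw-r--r-- 1 0 0 268435456 20-Aug-2007 15:43 journal:0005'

max_slots=6   # "Max Node Slots" from the stats output quoted below
journals=$(printf '%s\n' "$listing" | grep -c 'journal:')

if [ "$journals" -eq "$max_slots" ]; then
    echo "OK: $journals journals for $max_slots slots"
else
    echo "MISMATCH: $journals journals, $max_slots slots"
fi
```

A mismatch here would point at the on-disk layout rather than the slot map itself.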
On Tue, Aug 21, 2007 at 06:49:30AM -0400, Richard Bollinger wrote:
> On 8/21/07, Sunil Mushran <Sunil.Mushran at oracle.com> wrote:
> > Do:
> > # debugfs.ocfs2 -R "stats" /dev/sdX
> > Ensure the Number of Slots is what it is supposed to be.
> # debugfs.ocfs2 -R "stats" /dev/sde1
> Revision: 0.90
> Mount Count: 0 Max Mount Count: 20
> State: 0 Errors: 0
> Check Interval: 0 Last Check: Mon Aug 20 15:42:45 2007
> Creator OS: 0
> Feature Compat: 1 BackupSuper
> Feature Incompat: 0 None
> Feature RO compat: 0 None
> Root Blknum: 33 System Dir Blknum: 34
> First Cluster Group Blknum: 16
> Block Size Bits: 12 Cluster Size Bits: 16
> Max Node Slots: 6
> Label: je111
> UUID: 8E7583C03D1E4D5DA87F76A1D9B8F4B7
> Inode: 2 Mode: 00 Generation: 1855991134 (0x6ea02d5e)
> FS Generation: 1855991134 (0x6ea02d5e)
> Type: Unknown Attr: 0x0 Flags: Valid System Superblock
> User: 0 (root) Group: 0 (root) Size: 0
> Links: 0 Clusters: 28674016
> ctime: 0x46c9eeb5 -- Mon Aug 20 15:42:45 2007
> atime: 0x0 -- Wed Dec 31 19:00:00 1969
> mtime: 0x46c9eeb5 -- Mon Aug 20 15:42:45 2007
> dtime: 0x0 -- Wed Dec 31 19:00:00 1969
> ctime_nsec: 0x00000000 -- 0
> atime_nsec: 0x00000000 -- 0
> mtime_nsec: 0x00000000 -- 0
> Last Extblk: 0
> Sub Alloc Slot: Global Sub Alloc Bit: 65535
> > Also, do:
> > # debugfs.ocfs2 -R "ls -l //" /dev/sdX
> > Ensure you see the journals for all the above slots.
> # debugfs.ocfs2 -R "ls -l //" /dev/sde1
> 34 drwxr-xr-x 8 0 0          4096 20-Aug-2007 15:42 .
> 34 drwxr-xr-x 8 0 0          4096 20-Aug-2007 15:42 ..
> 35 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 bad_blocks
> 36 -rw-r--r-- 1 0 0        851968 20-Aug-2007 15:42 global_inode_alloc
> 37 -rw-r--r-- 1 0 0         65536 20-Aug-2007 15:42 slot_map
> 38 -rw-r--r-- 1 0 0       1048576 20-Aug-2007 15:42 heartbeat
> 39 -rw-r--r-- 1 0 0 1879180312576 20-Aug-2007 15:42 global_bitmap
> 40 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0000
> 41 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0001
> 42 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0002
> 43 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0003
> 44 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0004
> 45 drwxr-xr-x 2 0 0          4096 20-Aug-2007 15:42 orphan_dir:0005
> 46 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0000
> 47 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0001
> 48 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0002
> 49 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0003
> 50 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0004
> 51 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 extent_alloc:0005
> 52 -rw-r--r-- 1 0 0       4194304 20-Aug-2007 15:42 inode_alloc:0000
> 53 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 inode_alloc:0001
> 54 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 inode_alloc:0002
> 55 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 inode_alloc:0003
> 56 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 inode_alloc:0004
> 57 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 inode_alloc:0005
> 58 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0000
> 59 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0001
> 60 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0002
> 61 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0003
> 62 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0004
> 63 -rw-r--r-- 1 0 0     268435456 20-Aug-2007 15:43 journal:0005
> 64 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0000
> 65 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0001
> 66 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0002
> 67 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0003
> 68 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0004
> 69 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 local_alloc:0005
> 70 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0000
> 71 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0001
> 72 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0002
> 73 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0003
> 74 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0004
> 75 -rw-r--r-- 1 0 0             0 20-Aug-2007 15:42 truncate_log:0005
>
> _______________________________________________
> Ocfs2-users mailing list
> Ocfs2-users at oss.oracle.com
> http://oss.oracle.com/mailman/listinfo/ocfs2-users