[Ocfs2-test-devel] [PATCH 1/1] Ocfs2-tools: Fix a bug to let 'dump' of debugfs.ocfs2 correctly handle LARGEFILE.
Tao Ma
tao.ma at oracle.com
Mon Feb 23 01:10:57 PST 2009
Thank you, Tristan.
By the way, please send it to ocfs2-tools-devel at oss.oracle.com next time.
Signed-off-by: Tao Ma <tao.ma at oracle.com>
Tristan Ye wrote:
> We always failed to dump a large file (more than 2G) on the i386 arch with debugfs.ocfs2,
> stopping right at the last byte before 2G.
>
> That is simply because the 'off_t' data type on an i386 machine is 32 bits, so its largest
> supported value is (2G - 1); that is exactly why we always failed when writing the
> last byte.
>
> To fix this issue on the i386 arch, we need to turn the O_LARGEFILE flag on when opening
> the target files.
>
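For readers hitting this on a 32-bit build, here is a minimal, standalone sketch of the
failure mode. It is illustration only, not part of the patch: the output path
/tmp/large_dump_test and the 1M/3G sizes are made up, and it assumes a glibc build
without -D_FILE_OFFSET_BITS=64. Without O_LARGEFILE the write() loop below stops with
EFBIG at the 2G - 1 boundary; with O_LARGEFILE the sequential writes carry on past 2G
even though off_t is still 32 bits:

#define _GNU_SOURCE		/* exposes O_LARGEFILE in <fcntl.h> on glibc */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	static char buf[1 << 20];	/* one 1M chunk of zeroes per write() */
	long long total = 0;
	int fd;

	fd = open("/tmp/large_dump_test",
		  O_CREAT | O_WRONLY | O_TRUNC | O_LARGEFILE, 0666);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Try to write a bit past 2G.  Drop O_LARGEFILE from the open()
	 * flags above and, on a 32-bit build, this loop stops with EFBIG
	 * once the file reaches 2G - 1.
	 */
	while (total < (3LL << 30)) {
		ssize_t ret = write(fd, buf, sizeof(buf));

		if (ret < 0) {
			fprintf(stderr, "write failed at %lld bytes: %s\n",
				total, strerror(errno));
			break;
		}
		total += ret;
	}

	close(fd);
	return 0;
}

Note that this only covers the sequential-write case the dump path uses; seeking past 2G
with a 32-bit off_t would still need the 64-bit lseek interfaces.
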
> Signed-off-by: Tristan Ye <tristan.ye at oracle.com>
> ---
> debugfs.ocfs2/commands.c | 2 +-
> debugfs.ocfs2/utils.c | 3 ++-
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/debugfs.ocfs2/commands.c b/debugfs.ocfs2/commands.c
> index 7f2e4bc..5758c89 100644
> --- a/debugfs.ocfs2/commands.c
> +++ b/debugfs.ocfs2/commands.c
> @@ -1129,7 +1129,7 @@ static void do_dump (char **args)
> return ;
> }
>
> - fd = open(out_fn, O_CREAT | O_WRONLY | O_TRUNC, 0666);
> + fd = open(out_fn, O_CREAT | O_WRONLY | O_TRUNC | O_LARGEFILE, 0666);
> if (fd < 0) {
> com_err(args[0], errno, "'%s'", out_fn);
> return ;
> diff --git a/debugfs.ocfs2/utils.c b/debugfs.ocfs2/utils.c
> index 3a876d4..1075745 100644
> --- a/debugfs.ocfs2/utils.c
> +++ b/debugfs.ocfs2/utils.c
> @@ -725,7 +725,8 @@ errcode_t rdump_inode(ocfs2_filesys *fs, uint64_t blkno, const char *name,
> } else if (S_ISREG(di->i_mode)) {
> if (verbose)
> fprintf(stdout, "%s\n", fullname);
> - fd = open(fullname, O_WRONLY | O_CREAT | O_TRUNC, S_IRWXU);
> + fd = open(fullname, O_WRONLY | O_CREAT | O_TRUNC | O_LARGEFILE,
> + S_IRWXU);
> if (fd == -1) {
> com_err(gbls.cmd, errno, "while opening file %s",
> fullname);