[Ocfs2-users] OCFS2 benchmark slow concurrent write

Philipp Wehrheim wehrheim at glue.ch
Wed Jun 27 05:11:00 PDT 2007


Hi everybody, hi Sunil,

I'm still trying to get OCFS2 and DRBD to work in a performant way when
writing to a shared file.
But so far there are still some strange issues.

I did some more benchmarks writing 99 chars (one line) into a file,
but the time this takes varies between 40 µs and 1 s, so this is way too
variable for our applications.
The average for writing one line is 175 ms, which is OK.
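
Roughly, this is how I time one open/write/close cycle; just a minimal sketch,
the path and message text below are placeholders, not the real ones:

---
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define LOGFILE "/mnt/ocfs2/test.log"   /* placeholder path */

int main(void)
{
  const char *logMSG = "....99 chars of payload....\n";  /* placeholder line */
  size_t logLEN = strlen(logMSG);
  struct timespec t0, t1;
  int logFD;

  clock_gettime(CLOCK_MONOTONIC, &t0);
  if ((logFD = open(LOGFILE, O_WRONLY|O_CREAT|O_APPEND, 0666)) != -1) {
    write(logFD, logMSG, logLEN);
    close(logFD);
  }
  clock_gettime(CLOCK_MONOTONIC, &t1);

  /* one open/write/close cycle, printed in microseconds */
  printf("%.1f us\n",
         (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3);
  return 0;
}
---
(compiled with gcc -lrt on older glibc)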

The benchmark was done with different kernels:

- with and without preemption
- with 100 and 1000Hz clock frequency
- etc

I also mailed the drbd list and the response was that it is
probably a DLM issue (DLM = bottleneck).
Furthermore, the recommendation was that I should neither write
concurrently into one file nor into two files in one directory.

Is this right?
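
If that advice is right, the layout I would switch to is one log file per node,
each in its own directory; just a sketch, the mount point, directory and file
names are placeholders:

---
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* one directory and one log file per node, so the DLM never has to
 * pass the file/directory locks between nodes */
int open_node_local_log(void)
{
  char host[HOST_NAME_MAX + 1];
  char dir[PATH_MAX], path[PATH_MAX + 16];

  gethostname(host, sizeof(host));
  snprintf(dir, sizeof(dir), "/mnt/ocfs2/%s", host);  /* placeholder mount point */
  mkdir(dir, 0777);                                   /* EEXIST is fine */
  snprintf(path, sizeof(path), "%s/app.log", dir);

  return open(path, O_WRONLY|O_CREAT|O_APPEND, 0666);
}
---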


bye
Philipp

On Friday, 2007-06-15 at 08:32 -0700, Sunil Mushran wrote:
> It's not the bandwidth but the latency that is the issue.
> Try the same with gige.
> 
> Philipp Wehrheim wrote:
> > On Thursday, 2007-06-14 at 16:08 -0700, Sunil Mushran wrote:
> >   
> >> Did you try using a gige interconnect?
> >>     
> >
> > No, not yet.
> > Do you think that the network is the bottleneck?
> > I tried to monitor the network bandwidth used with nettop
> > and the bandwidth used was never more than ~4 Mbit.
> >
> > Is there a way to compile the ocfs2 module with special small
> > file support (e.g. right now I guess it's more optimized for big files)?
> > Or can the dlm be tweaked somehow?
> >
> > I'm mounting the partition with noatime now -> but no speed improvement at
> > all.
> >
> >   
> >> Philipp Wehrheim wrote:
> >>     
> >>> Hi everybody,
> >>>
> >>> I've just created a test setup for my company with two PCs running DRBD (0.8.3)
> >>> in primary/primary mode with OCFS2 on top of it. The PCs are running SUSE
> >>> 10.2 with kernel 2.6.21.2; the hardware is a ~1 GHz CPU, ~12 MB RAM and one NIC
> >>> (100 Mbit).
> >>>
> >>> The setup is working quite stably, so I started to run some
> >>> benchmarks and got some strange results:
> >>>
> >>> For the benchmark I used a small C program, "iotest", that, in a loop, opens a
> >>> file, writes one line and closes the file again.
> >>>
> >>> ---
> >>> #define LOOPCOUNT 1050620
> >>> ...
> >>>   for (i = 0; i < LOOPCOUNT; i++) {
> >>>     if ((logFD = open(LOGFILE, O_WRONLY|O_CREAT|O_APPEND, 0666)) != -1) {
> >>>       write(logFD, logMSG, logLEN);
> >>>       close(logFD);
> >>>     }
> >>>   }
> >>> ---
> >>>
> >>>
> >>> 1. write to an ext3 fs
> >>>
> >>>         | per line        | for 1050620 lines |
> >>> hobbes  | 44 microseconds | 47.1 seconds      |
> >>> struppi | 37 microseconds | 39.3 seconds      |
> >>>
> >>>
> >>> 2. write to an OCFS2 fs, single -> only one machine at a time
> >>>
> >>>         | per line        | for 1050620 lines |
> >>> hobbes  | 84 microseconds | 88.3 seconds      |
> >>> struppi | 47 microseconds | 50.1 seconds      |
> >>>
> >>>
> >>> 3. write to an OCFS2 fs into _ONE_ file -> concurrent writes from both PCs
> >>>
> >>>         | per line         | for 1050620 lines |
> >>> hobbes  | 113 milliseconds | 119764 seconds    | yes, over 33 h
> >>> struppi | 113 milliseconds | 119764 seconds    |
> >>>
> >>>
> >>> 4. write to an OCFS2 fs into two files -> each PC into its own file
> >>>
> >>>         | per line          | for 1050620 lines |
> >>> hobbes  | 93.6 microseconds | 98.6 seconds      |
> >>> struppi | 59.5 microseconds | 62.6 seconds      |
> >>>
> >>>
> >>> So as one can see OCFS2 does quite well, but when it comes to concurrent
> >>> writes into one file it is sloooooowwwwwwww.
> >>> The questions now are:
> >>>
> >>> - is there an error in my setup?
> >>> - how can I tune OCFS2?
> >>> - how can I tune the DLM?
> >>> - WHERE IS THE OCFS2 DOCUMENTATION??? (no, I'm not talking about colorful PowerPoint slides)
> >>>
> >>> Thanks in advance
> >>> Philipp
> >>>
> >>>
> >>> _______________________________________________
> >>> Ocfs2-users mailing list
> >>> Ocfs2-users at oss.oracle.com
> >>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
> >>>   
> >>>       
> >
> >   
> 



