RE: RE: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage

Wim Coekaerts wim.coekaerts at oracle.com
Wed Jun 2 03:20:29 CDT 2004


Well, the real problem is that we have a heartbeat per mountpoint, which is
ridiculous, but it's inherited and we'll have to live with it for a while
still. At some point we need to get rid of that.

I don't think there is anything to read up on yet; the change to the
heartbeat is very minimal.

Wim

On Wed, Jun 02, 2004 at 11:09:08AM +0200, Magnus Lubeck wrote:
> Ok,
> 
> So in Jeram's case he still has 1000 I/Os per second due to heartbeat (5
> nodes times 4 I/Os per second per node times 51 mounts). This will not
> exhaust the FC infrastructure, but probably the disk array, as stated
> before.
> 
> If I'm not mistaken, you usually cannot push a single disk beyond 30-50
> I/Os per second (depending on many things, sometimes up to 120 I/Os, but
> not much more). Then again, the controller is probably configured for
> write-behind caching, so it is not actually touching the disks all the
> time and should probably be able to handle it. Also, some vendors turn off
> the on-disk write cache (e.g. Sun) for safety reasons, which keeps the
> per-disk I/O (at least for writes) to a minimum.
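> 
> To make the arithmetic above concrete, here is a quick Python sketch (the
> 4 I/Os per node per second per LUN figure is the one Wim quoted earlier,
> the 30-50 I/Os per spindle is the rough budget above, and the 40 I/Os
> midpoint is just an assumption for illustration):
> 
>     nodes = 5                  # RAC nodes in Jeram's cluster
>     luns = 51                  # OCFS volumes / mountpoints
>     io_per_node_per_lun = 4    # read/write transactions per second (Wim's figure)
> 
>     heartbeat_iops = nodes * luns * io_per_node_per_lun
>     print(heartbeat_iops)      # 1020, i.e. roughly 1000 I/Os per second
> 
>     # if one spindle sustains ~30-50 random I/Os per second, this load
>     # alone would keep on the order of 20-35 disks busy
>     disks_busy = heartbeat_iops / 40.0   # 40 I/Os per second as a midpoint guess
>     print(round(disks_busy))             # ~26 disks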
> 
> Hmmm... feels like I'm wandering off from the topic.
> 
> The problem with heartbeating is that one would probably like to keep it
> down to a minimum while still ensuring availability. Also, health checking
> a larger number of volumes/LUNs which reside on the same physical storage
> is redundant, but inevitable since the underlying hardware structure is
> hidden from the OS.
> 
> I imagine it will be interesting to see the (configurable) solution for
> this in the upcoming version. Or are there pointers to where I can do some
> reading on this already (source code or similar, in the worst case)?
> 
> As such, 51 mountpoints is not That many in a large environment, especially
> given the limitations that do exist when using OCFS.
> 
> Thanks,
> //magnus
> 
> -----Original Message-----
> From: Wim Coekaerts [mailto:wim.coekaerts at oracle.com] 
> Sent: Wednesday, June 2, 2004 10:31
> To: Magnus Lubeck
> Cc: ocfs-users at oss.oracle.com
> Subject: Re: RE: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
> 
> Hi Magnus
> 
> Actually no, it's not That bad.
> It's 1 read and 1 write; we do one 16KB read, not 32 separate reads.
> So you have 4 read/write transactions per node per second per LUN.
> 
> EVA is the replacement (new version) of the MSA1000.
> 
> Wim
> 
> On Wed, Jun 02, 2004 at 10:09:58AM +0200, Magnus Lubeck wrote:
> > Hi all,
> > 
> > Sorry to break in, but I find this thread a bit interesting.
> > 
> > Jeram: I'm not very familiar with HP storage and cannot find too much info
> > on the EVA 6000 array. Is it related to the EVA 5000 somehow, or is it a
> > NAS array?
> > 
> > In any case, how is the array configured? If the algorithm for heartbeat
> > is as described earlier (36 sector reads and one write per second, per
> > host?), then you have some 37 I/Os per second per volume, which in your
> > case is close to 2000 I/Os per second PER box, and could easily be close
> > to 10k I/Os per second if the heartbeat is per node.
> > 
> > Am I right in this assumption? In Jeram's case, with 5 nodes and 51
> > mounts, would the heartbeat generate 2k or 10k I/Os?
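> > 
> > Spelling out the arithmetic behind that question (a quick Python sketch
> > under the 36-reads-plus-one-write-per-second assumption above; whether
> > the total is ~2k or ~10k depends on whether the heartbeat runs once per
> > box or once per node per volume):
> > 
> >     io_per_volume = 36 + 1          # 36 sector reads + 1 write per second
> >     volumes = 51
> >     nodes = 5
> > 
> >     per_box = io_per_volume * volumes    # 1887, close to 2000 I/Os per second
> >     per_cluster = per_box * nodes        # 9435, close to 10k if per node
> >     print(per_box, per_cluster)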
> > 
> > In any case, if I'm correct, this would then (as Wim states) be somewhat
> > exhausting for most parts of the storage system. Even 2000 I/Os per
> > second could easily exhaust a LUN group in most arrays.
> > 
> > Thanks for an interesting discussion.
> > 
> > //magnus
> > 
> > -----Original Message-----
> > From: ocfs-users-bounces at oss.oracle.com
> > [mailto:ocfs-users-bounces at oss.oracle.com] On Behalf Of Jeram
> > Sent: Wednesday, June 2, 2004 03:53
> > To: Wim Coekaerts
> > Cc: Sunil Mushran; ocfs-users at oss.oracle.com
> > Subject: RE: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
> > 
> > Hi Wim...
> > 
> > OK then... I will try .11 first and await .12; meanwhile I am waiting to
> > hear from the HP engineers whether they have any good ideas from the EVA
> > 6000 point of view.
> > 
> > Thanks a lot for the information.
> > Rgds/Jeram
> > 
> > -----Original Message-----
> > From: Wim Coekaerts [mailto:wim.coekaerts at oracle.com]
> > Sent: Wednesday, June 02, 2004 8:41 AM
> > To: Jeram
> > Cc: Sunil Mushran; ocfs-users at oss.oracle.com
> > Subject: Re: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
> > 
> > 
> > 1.0.11 won't change the amount of I/O. If you already have I/O problems
> > you have to use .12, which should be out any day.
> > 
> > 
> > On Wed, Jun 02, 2004 at 08:35:49AM +0700, Jeram wrote:
> > > Hi Sunil...
> > > 
> > > Thanks for your response. I will try 1.0.11 and observe the
> > > performance...
> > > 
> > > Rgds/Jeram
> > > 
> > > -----Original Message-----
> > > From: Sunil Mushran [mailto:Sunil.Mushran at oracle.com]
> > > Sent: Wednesday, June 02, 2004 8:30 AM
> > > To: Jeram
> > > Cc: ocfs-users at oss.oracle.com; ocfs-devel at oss.oracle.com
> > > Subject: Re: [Ocfs-users] OCFS 1.0.9-6 performance with EVA 6000 Storage
> > > 
> > > 
> > > Heartbeating in OCFS is currently per volume. The nmthread reads 36
> > > sectors and writes 1 sector every second or so. The I/O you see in
> > > vmstat is due to the heartbeat.
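> > > 
> > > Roughly, the per-volume scheme described above looks like the sketch
> > > below (a user-space Python illustration only; the FakeVolume class and
> > > the names are hypothetical, and the real nmthread is in-kernel code,
> > > not this):
> > > 
> > >     import time
> > > 
> > >     SECTOR = 512          # bytes
> > >     HB_SECTORS = 36       # sectors read per beat
> > > 
> > >     class FakeVolume:
> > >         """Stand-in for a shared OCFS volume (illustration only)."""
> > >         def read(self, offset, length):
> > >             pass   # would issue one bulk read covering every node's slot
> > >         def write(self, offset, length):
> > >             pass   # would write this node's own heartbeat sector
> > > 
> > >     def heartbeat_loop(volume, hb_offset, node_slot, beats=3):
> > >         for _ in range(beats):
> > >             volume.read(hb_offset, HB_SECTORS * SECTOR)           # 1 read
> > >             volume.write(hb_offset + node_slot * SECTOR, SECTOR)  # 1 write
> > >             time.sleep(1)                                         # ~1 second
> > > 
> > >     heartbeat_loop(FakeVolume(), hb_offset=0, node_slot=2)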
> > > 
> > > As far as the mount is concerned, the mount thread waits for the
> > > nmthread to stabilize, 10 seconds or so.
> > > 
> > > We are working on making the heartbeat configurable. 1.0.12 will have
> > > some support for that... heartbeat and timeout values. It will not be
> > > activated by default; we are still working out the details. That will
> > > reduce the heartbeat-related I/O.
> > > 
> > > If you want to use 51 mounts, make sure your hardware can handle the
> > > I/O. For example, if you see OCFS messages like "Removing nodes" and
> > > "Adding nodes" without any node performing a mount/umount, you have a
> > > problem. In any case, you should use 1.0.11 at the least. In 1.0.10, we
> > > doubled the timeout from 1.0.9.
> > > 
> > > Hope this helps.
> > > Sunil
> > > 
> > > On Tue, 2004-06-01 at 18:04, Jeram wrote:
> > > > Dear All...
> > > > 
> > > > I need some information regarding OCFS performance on my Linux boxes;
> > > > here are my environment details:
> > > > 1. We are using RHAS 2.1 with kernel 2.4.9-e.27 Enterprise
> > > > 2. OCFS version: 2.4.9-e-enterprise-1.0.9-6
> > > > 3. Oracle RDBMS: 9.2.0.4 RAC with 5 nodes
> > > > 4. Storage: EVA 6000, 8 TB in size
> > > > 5. We have 1 disk group and 51 LUNs configured in the EVA 6000.
> > > > My questions are:
> > > > 1. It takes around 15 minutes to mount around 51 OCFS file systems; is
> > > > this a normal situation?
> > > > 2. I monitor the OS using vmstat without starting the RAC server, and
> > > > the I/O columns (bo and bi) continuously show three-digit values. Then
> > > > I unmount all the OCFS filesystems and monitor the I/O with vmstat
> > > > again, and the same columns show only one-digit values. Any idea why
> > > > this happens?
> > > > I have raised this issue with the HP engineers who provided the
> > > > hardware, but have not got an answer yet.
> > > > Thanks in advance
> > > > Rgds/Jeram  
> > > >  
> > > > 
> > > > 
> > > > 