[Tmem-devel] Implementing Transcendent Memory on Lguest

Dan Magenheimer dan.magenheimer at oracle.com
Tue Feb 9 10:53:18 PST 2010


Hi Gaurav -

 

Yes, if Nitin is correct and you set up virtio correctly, using virtio for tmem should work for both lguest and KVM.  There may be a small performance cost, but just getting it to work would be very interesting.


Thanks,

Dan

 

From: Gaurav Kukreja [mailto:mailme.gaurav at gmail.com] 
Sent: Tuesday, February 09, 2010 11:27 AM
To: Nitin Gupta
Cc: rusty at rustcorp.com.au; tmem-devel at oss.oracle.com
Subject: Re: [Tmem-devel] Implementing Transcendent Memory on Lguest

 

I am still trying to figure out all that you people are saying :-p.

 

In the meantime, I would like to know whether it would be beneficial, in any way, to take this direction. For my project I am aiming at an implementation in lguest, but I would like to extend it to KVM too.

 

On Tue, Feb 9, 2010 at 8:58 PM, Dan Magenheimer <dan.magenheimer at oracle.com> wrote:

> From: Nitin Gupta [mailto:ngupta at vflare.org]
>

> On Tue, Feb 9, 2010 at 4:49 AM, Dan Magenheimer
> <dan.magenheimer at oracle.com> wrote:
> >> However, I could not understand your coherence issue: this vswap
> >> driver simply sends the swap page to the hypervisor synchronously.
> >
> > I am asking if anything in the virtio "layer" buffers
> > the data, making it appear to the client that the data
> > has been copied from/to the hypervisor (and all of tmem's
> > data structures updated in the hypervisor) but it really
> > hasn't happened yet. If so, bad things can happen.
> >
>
> I think we can set "virtqueue" size to 1 *and* make it work
> synchronously (this is what vswap does). This should avoid
> any buffering problem you are concerned with.
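
(A rough sketch of what that might look like in the guest driver -- not
code from this thread; it uses the stock virtio helpers sg_init_one(),
virtqueue_add_outbuf(), virtqueue_kick() and virtqueue_get_buf(), and the
tmem_put_req layout and tmem_virtio_put() name are purely hypothetical:)

#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include <linux/mm.h>           /* PAGE_SIZE */

/* Hypothetical wire format for a tmem "put" carried over virtio. */
struct tmem_put_req {
        u64 pool_id;
        u64 object_id;
        u64 index;
        u8  page[PAGE_SIZE];
};

/*
 * With a virtqueue of size 1 only one request can ever be in flight,
 * and spinning until the host returns the buffer makes the put fully
 * synchronous: the guest never sees a put as complete before the
 * hypervisor has actually processed it.
 */
static int tmem_virtio_put(struct virtqueue *vq, struct tmem_put_req *req)
{
        struct scatterlist sg;
        unsigned int len;
        int err;

        sg_init_one(&sg, req, sizeof(*req));

        err = virtqueue_add_outbuf(vq, &sg, 1, req, GFP_ATOMIC);
        if (err)
                return err;

        virtqueue_kick(vq);

        /* Busy-wait for the host to mark the buffer used. */
        while (!virtqueue_get_buf(vq, &len))
                cpu_relax();

        return 0;
}

That keeps the guest's view and the hypervisor's tmem data structures in
step, which is exactly the coherence property being asked about above.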

Excellent!


> On a side note, I think such simplistic behavior can result
> in poor performance, but handling the swapping case always
> brings surprises, so anything fancier -- multiple large queues
> working asynchronously -- may be difficult to implement.

The tmem implementation in Xen has good concurrency
support, so multiple processors swapping to different
places in the same "swaptype" can proceed concurrently
if this is desired.

See "Swizzling" in the tmem 2.6.32 patch... up to
16 guest vcpus can be swapping simultaneously to
frontswap.
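
(As an illustration of the idea only -- the constant and helper names
below are mine, not lifted from the patch -- the low bits of the swap
offset can be used to pick one of 2^4 = 16 tmem objects per swap type,
so puts of consecutive pages land on different objects and different
vcpus can proceed in parallel:)

#define SWIZ_BITS 4                       /* 16 objects per swap type */
#define SWIZ_MASK ((1 << SWIZ_BITS) - 1)

/* Object id: swap type plus the low bits of the page offset. */
static inline unsigned long oswiz(unsigned int type, unsigned long offset)
{
        return ((unsigned long)type << SWIZ_BITS) | (offset & SWIZ_MASK);
}

/* Index within that object: the remaining high bits of the offset. */
static inline unsigned long iswiz(unsigned long offset)
{
        return offset >> SWIZ_BITS;
}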

So I think for swapping (at least in Linux) there might be more
flexibility for asynchrony, but I don't think that
applies to cleancache.

Dan




-- 
Gaurav Kukreja

+91 997 030 1257