[Tmem-devel] Implementing Transcendent Memory on Lguest

Dan Magenheimer dan.magenheimer at oracle.com
Tue Feb 9 07:28:48 PST 2010


> From: Nitin Gupta [mailto:ngupta at vflare.org]
> 
> On Tue, Feb 9, 2010 at 4:49 AM, Dan Magenheimer
> <dan.magenheimer at oracle.com> wrote:
> >> However, I could not understand your coherence issue: this vswap
> >> driver simply sends swap page to hypervisor synchronously.
> >
> > I am asking whether anything in the virtio "layer" buffers
> > the data, making it appear to the client that the data
> > has been copied from/to the hypervisor (and all of tmem's
> > data structures updated in the hypervisor) when it really
> > hasn't happened yet. If so, bad things can happen.
> >
> 
> I think we can set the "virtqueue" size to 1 *and* make it work
> synchronously (this is what vswap does). This should avoid
> the buffering problem you are concerned about.

Excellent!
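
To make the synchronous behavior concrete, here is a rough sketch
of what a guest-side put could look like. The request layout and
the names (struct tmem_request, tmem_virtio_put) are made up for
illustration -- nothing here is from a posted patch -- and the
virtqueue operations are the 2.6.32-era vq_ops interface:

#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include <linux/types.h>
#include <linux/mm.h>

struct tmem_request {			/* hypothetical wire format */
	u32 pool_id;
	u64 object_id;
	u32 index;
	char page[PAGE_SIZE];
};

static int tmem_virtio_put(struct virtqueue *vq, struct tmem_request *req)
{
	struct scatterlist sg;
	unsigned int len;

	sg_init_one(&sg, req, sizeof(*req));

	/* Queue depth is 1: either the single slot is free or we
	 * report busy; nothing ever queues behind another request. */
	if (vq->vq_ops->add_buf(vq, &sg, 1, 0, req) < 0)
		return -EBUSY;

	vq->vq_ops->kick(vq);

	/* Spin until the host consumes the buffer, so this call does
	 * not return before the hypervisor has actually copied the
	 * page and updated its tmem data structures. */
	while (vq->vq_ops->get_buf(vq, &len) == NULL)
		cpu_relax();

	return 0;
}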
 
> On a side note, I think such simplistic behavior can result
> in poor performance, but handling the swapping case always
> brings surprises, so anything fancier -- multiple queues
> with large sizes, working asynchronously -- may be difficult
> to implement.

The tmem implementation in Xen has good concurrency
support, so multiple processors swapping to different
places in the same "swaptype" can proceed concurrently
if this is desired.

See "Swizzling" in the tmem 2.6.32 patch... up to
16 guest vcpus can be swapping simultaneously to
frontswap.
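
Roughly, the swizzling folds the low bits of the page index into
the tmem object id, so neighboring swap offsets map to different
objects (and therefore different locks). Something like the
following, modeled loosely on the patch (treat the exact names
and constants as approximate):

#define SWIZ_BITS	4		/* 2^4 = 16 concurrent streams */
#define SWIZ_MASK	((1 << SWIZ_BITS) - 1)

/* The object id mixes the swap type with the low bits of the
 * page offset, so adjacent offsets land in different objects... */
#define oswiz(type, ind)	(((type) << SWIZ_BITS) | ((ind) & SWIZ_MASK))

/* ...and the index within that object is the remaining high bits. */
#define iswiz(ind)		((ind) >> SWIZ_BITS)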

So I think for swapping (at least in Linux) there might be more
flexibility for asynchronicity, but I don't think that applies to
cleancache: a cleancache get may immediately follow a put, so the
put must be fully complete in the hypervisor before the guest
moves on, or coherency is lost.

Dan


