[DTrace-devel] [PATCH] Adjust dynvarsize to avoid hitting BPF size limits
Nick Alcock
nick.alcock at oracle.com
Thu Dec 9 13:40:19 UTC 2021
On 9 Dec 2021, Kris Van Hees via DTrace-devel told this:
> The number of entries in the dvars BPF map is calculated as the
> dynvarsize divided by the size of the value type, which is 1. It
> turns out that there is a limit on the number of entries and this
> test hits that limit.
... presumably because of the number of CPUs?
This sounds rather limiting: you get failures if you store even a single
single-byte thing in a per-thread array? (And presumably even shorts are
getting close to the limit, and ints aren't that far off.)
That sounds like a feature users will not expect at all, and which will
bite them by default... though I guess it's rare to store single chars
anywhere, so maybe this isn't that problematic.
> This patch sets dynvarsize to 1024 to force a lower limit so that
> this issue is not triggered anymore.
Perhaps "to force an upper bound" or "to forcibly reduce the limit"?
It's certainly not adding a *lower limit* (i.e. a lower bound) to
anything.
Really I think the code in dt_bpf.c should be bounding these things so
that failure doesn't happen. (But maybe that's for the next release.)
> Signed-off-by: Kris Van Hees <kris.van.hees at oracle.com>
With the caveat that this really shouldn't be necessary at all and
should be removed as soon as something better is done,
Reviewed-by: Nick Alcock <nick.alcock at oracle.com>
on the basis that it does fix the bug, but ew.