[DTrace-devel] [PATCH CORRECTED] Implement normalize() and denormalize()

Kris Van Hees kris.van.hees at oracle.com
Thu Jan 14 20:35:28 PST 2021


On Thu, Jan 14, 2021 at 04:09:58PM -0800, Eugene Loh wrote:
> Copyright on new files should not have 2006.
> 
> Copyright on touched files should have 2021.

Thanks.

> I guess err.D_NORMALIZE_SCALAR.bad.d has become 
> err.D_PROTO_ARG.norm_non_scalar_normal.d .  The test got cleaned up in 
> the process;  that's fine.  What I'm curious about is that the error has 
> changed from DTv1 (from NORMALIZE_SCALAR to PROTO_ARG).  I haven't 
> looked closely at this issue, but given that it's a change in 
> user-visible behavior should there at least be a comment on this 
> change?  (And that's assuming the change is intentional.)  FWIW:
> 
>      DTv1
>          # dtrace -n 'BEGIN { @a = count(); normalize(@a, "a"); exit(0) }'
>          dtrace: invalid probe specifier BEGIN { @a = count(); 
> normalize(@a, "a"); exit(0) }:
>              normalize( ) argument #2 must be of scalar type
> 
>      DTv2
>          # dtrace -n 'BEGIN { @a = count(); normalize(@a, "a"); exit(0) }'
>          dtrace: invalid probe specifier BEGIN { @a = count(); 
> normalize(@a, "a"); exit(0) }:
>              in action list: normalize( ) argument #2 is incompatible 
> with prototype:
>           prototype: uint64_t
>            argument: string

Yes, that is a change that is not backwards compatible, and one that predates
this patch.  The cause is that, for unknown reasons, DTrace v1 defines
normalize() with a prototype that accepts any datatype for the 2nd argument,
combined with explicit type checking on that argument in the compiler.

That's not really ideal: the parser already provides type checking on action
arguments, so there is no need to declare the argument with the "any" datatype
and do explicit checking in the compiler.  Furthermore, the DTrace v1 behaviour
is confusing because it reports that the argument needs to be a scalar, which
covers integers, enums, and pointers.  It really ought to be an integer, given
that it is used as a numeric divisor for the aggregation values.

Finally, all other actions report argument datatype issues as PROTO_ARG errors,
so it makes sense for normalize() not to deviate from that.

That is why I did not revert to the DTrace v1 behaviour.
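
For illustration, with the current prototype the 2nd argument is expected to
be an integer divisor, so something along these lines compiles cleanly (a
hypothetical one-liner, not one of the new tests):

    # dtrace -n 'BEGIN { @a = count(); normalize(@a, 1000); exit(0) }'

whereas passing a string, as in the example above, is rejected with the
PROTO_ARG error.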

> It's nice that we have more err.* test coverage, but the tst.* coverage 
> still seems scant.  In particular, we document "renormalization" 
> behavior:  if there are multiple normalize() calls, the last one wins.  
> How about at least a simple test in that respect?  I have no reason to 
> believe it will fail, but it's easy enough to add.

Yes, good point.  That is actually implicitly tested with the denormalize
test, but that is circumstantial and implementation-dependent, so I agree
that an explicit test is worthwhile here.  I'll add it.
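
Something along these lines should do (a sketch, not the final test):

    # dtrace -qn 'BEGIN { @a = sum(100); normalize(@a, 2); normalize(@a, 10); exit(0) }'

Output formatting aside, only the last normalization should be reflected,
i.e. the aggregation should print as 10 (100 / 10) rather than 50 (100 / 2).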

> I don't know one way or the other, but in dt_consume_one() is the 
> following comment still relevant?
> 
>      /*
>       * FIXME: This code is temporary.
>       */

Hm, good point.  I'm going to leave it in for this patch, but I do need to
revisit that.  The idea was that the code it refers to might best be placed
in a function, but then again, that may not really be worth doing since the
code is only used in one place.

> In dt_normalize(), it seems pointless to define "act = rec->dtrd_arg" 
> for a single use of act.

Yes, but I do that to indicate (especially to myself) that the dtrd_arg (an
opaque value as far as data records are concerned) is actually an action
value.
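
For reference, this is the sort of pattern I mean, as a minimal sketch with
made-up types and names (not the actual dt_normalize() code):

    #include <stdint.h>

    #define DT_ACT_NORMALIZE    1   /* illustrative values only */
    #define DT_ACT_DENORMALIZE  2

    struct rec {
            uint64_t dtrd_arg;      /* opaque as far as the record goes */
    };

    static void
    handle(struct rec *rec, uint64_t *normal)
    {
            uint64_t act = rec->dtrd_arg;   /* ...but here it is an action value */

            if (act == DT_ACT_DENORMALIZE)
                    *normal = 1;            /* clear the normalization factor */
    }

Without the local, a reader of "if (rec->dtrd_arg == DT_ACT_DENORMALIZE)" has
to know that dtrd_arg doubles as an action identifier at this point.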

> Reviewed-by: Eugene Loh <eugene.loh at oracle.com>

Thanks.


