[DTrace-devel] [PATCH 4/4] spec: correct semantics of specsize overflow

Eugene Loh eugene.loh at oracle.com
Fri May 12 15:18:07 UTC 2023


On 5/11/23 19:05, Kris Van Hees wrote:

> On Thu, May 11, 2023 at 06:41:09PM -0400, Eugene Loh via DTrace-devel wrote:
>> On 5/11/23 16:30, Kris Van Hees wrote:
>>> On Thu, May 11, 2023 at 02:44:47PM -0400, Eugene Loh via DTrace-devel wrote:
>>>> On 5/9/23 03:19, Kris Van Hees via DTrace-devel wrote:
>>>>>     	 *		goto exit;
>>>>> -	 *	*((uint32_t *)&buf[4]) = spec;	// mov [%r9 + 4], %dn_reg
>>>>> +	 *	*((uint32_t *)&buf[DBUF_SPECID]) = specid;
>>>>> +	 *				// mov [%r9 + DBUF_SPECID], %dn_reg
>>>>> +	 *	dctx->dmst->specsize = spec->size;
>>>>>     	 *	exit:			// nop
>>>>>     	 */
>>>>>
>>>>> diff --git a/test/unittest/speculation/tst.SpecSizeVariations4.d b/test/unittest/speculation/tst.SpecSizeVariations4.d
>>>> Good, but:
>>>>
>>>> 1)  The test (fwiw) was passing.  Then in the last patch it failed. Now it's
>>>> passing again (this time properly).  From a testing perspective, there was a
>>>> regression in this test in the previous patch.  Ideally, that would not be
>>>> so.  Maybe xfail this test in the previous patch, even if that xfail is
>>>> short-lived?
>>> The expected results file was wrong.  The test was passing because it was
>>> looking for the wrong output.  This patch provides the correct implementation
>>> *and* ensures that the test is looking for the correct output.
>> Yes, but what I'm saying is that the previous patch would introduce a new
>> test failure.  So from a tester's viewpoint, the previous patch introduced a
>> regression.  It's preferable to have all tests pass with each patch.  If
>> that previous patch exposed a test problem, then it should mark it with
>> xfail.  Then this patch resolves the xfail.
> Yes, but again this test was not even valid to begin with due to the entirely
> incorrect expected results.

No disagreement there.  I'm just imagining someone in the future hunting 
down a bug, getting to the previous patch, and seeing a test failure 
that's new with the patch.  So that patch can reflect the understanding 
we have today by marking that test as an xfail. Patches aren't supposed 
to make the number of failures increase.

>>>> 3)  The test is weak in that it checks only speculations in a single
>>>> clause.  How about spreading the speculative data over multiple clauses?
>>> But the whole point of the test *is* to exercise the semantics of an overflow
>>> occurring in a clause.
>>>
>>> But yes, we do need more (and better) tests for the various speculation buffer
>>> sizing effects vs the amount of data recorded in a speculation.  It seems like
>>> they used to exist or were planned to be written (see the comment block in the
>>> err.BufSizeVariations1.d test).
>>>
>>> This particular patch is addressing a specific problem - but I do agree we need
>>> to add more tests.
>> Maybe I was mistaken about something.  Let's say multiple clauses all
>> contribute to the same speculation.  Individually, each one is below the
>> specsize limit, but combined they exceed the limit.  Is there a problem?
>> What should happen?
> That will be handled (and there is actually code for that both on the producer
> and consumer side).  This is why the running count of data stored in the
> speculation is stored in dctx->dmst->specsize.

So where is the test for that?  There need to be tests for the things we 
claim to support.
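
For concreteness, a test for that case (multiple clauses each recording under
the specsize limit, but exceeding it in combination) might look something like
the sketch below.  The option value and the amounts of data recorded are
illustrative only, not taken from an existing test, and the exact per-record
overhead would need to be accounted for when picking the numbers:

	#pragma D option quiet
	#pragma D option specsize=40

	BEGIN
	{
		self->spec = speculation();
		speculate(self->spec);
		/* First clause: records well under the 40-byte limit. */
		printf("%d", (int)1);
	}

	BEGIN
	{
		speculate(self->spec);
		/*
		 * Second clause: individually small, but the running total
		 * for this speculation now exceeds specsize, so this clause's
		 * data should be dropped (or the speculation discarded,
		 * depending on the intended semantics).
		 */
		printf("%d %d %d %d", (int)2, (int)3, (int)4, (int)5);
	}

	BEGIN
	{
		commit(self->spec);
		exit(0);
	}

The expected-results file would then pin down which of the two clauses'
output survives the commit.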
