<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<div>Reviewed-by: Eugene Loh <eugene.loh@oracle.com></div>
<div><br>
</div>
<div class="elementToProof">We have three copies of aggregation data: the BPF map, the snapshot, and the user-space copy. So, in dt_aggregate_go(), how about</div>
<div>- /* Allocate a buffer to hold the aggregation data for a CPU. */</div>
<div>+ /* Allocate a buffer to hold the snapshot data for a CPU. */</div>
<div><br>
</div>
<div class="elementToProof">In gmap_create_aggs(), you call create_gmap_of_maps() with an osize of dtp->dt_conf.num_online_cpus. Is that right? This limits the "outer" keys to [0:dtp->dt_conf.num_online_cpus). But then</div>
<div> for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {</div>
<div> int cpu = dtp->dt_conf.cpus[i].cpu_id;</div>
<div> int fd = ...;</div>
<div><br>
</div>
<div> dt_bpf_map_update(dtp->dt_aggmap_fd, &cpu, &fd);</div>
<div> }</div>
<div class="elementToProof">Will cpu always fit inside the given range?</div>
<div><br>
</div>
<div class="elementToProof">In dt_cg_tramp_prologue_act(), I continue to maintain that the details in the big comment block are obfuscating: they are no clearer than the code itself, so they end up confusing. I keep checking the comments against the
code rather than the other way around. Whatever. In any case, the comment for the result of "call bpf_get_smp_processor_id" is given as</div>
<div> (%r0 = 'aggs' BPF map value)</div>
<div class="elementToProof">That is apparently a cut-and-paste error. And the final instruction is described as</div>
<div> dctx.aggs = rc; // stdw [%r9 + offset], %r0</div>
But there is no longer an "offset"... it should probably be s/offset/DCTX_AGG/.<br>
</div>
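<div class="elementToProof">Concretely, with both fixes the affected comment lines would presumably read something like this (just a sketch mirroring the patch's annotation style; the "current CPU id" wording is my guess at the intent):</div>

```c
/*
 *	key = bpf_get_smp_processor_id()
 *				// call bpf_get_smp_processor_id
 *				//     (%r1 ... %r5 clobbered)
 *				//     (%r0 = current CPU id)
 *				// stw [%r9 + DCTX_AGG], %r0
 *	...
 *	dctx.aggs = rc;		// stdw [%r9 + DCTX_AGG], %r0
 */
```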
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Kris Van Hees via DTrace-devel <dtrace-devel@oss.oracle.com><br>
<b>Sent:</b> Tuesday, August 23, 2022 2:49 PM<br>
<b>To:</b> dtrace-devel@oss.oracle.com <dtrace-devel@oss.oracle.com><br>
<b>Subject:</b> [DTrace-devel] [PATCH v2 5/5] Use array-of-maps as storage for aggregations</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">In preparation for indexed aggregations and the clear() and trunc()<br>
operations, the storage for aggregations is moving from a per-CPU<br>
array map to an array of maps, indexed by CPU id.<br>
<br>
The existing storage solution for aggregations stored all data in a<br>
singleton map value, i.e. all CPUs were writing to their own portion<br>
of a block of memory that the consumer retrieved in its entirety in<br>
a single system call.<br>
<br>
The new storage solution allocates a memory block for each CPU so<br>
that data retrieval by the consumer can be done per CPU. This sets<br>
the stage for future development where the consumer may need to<br>
update the aggregation buffers.<br>
<br>
Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com><br>
---<br>
libdtrace/dt_aggregate.c | 95 ++++++++++++++--------------------------<br>
libdtrace/dt_bpf.c | 69 ++++++++++++++++++-----------<br>
libdtrace/dt_cg.c | 53 +++++++++++++++++++++-<br>
libdtrace/dt_impl.h | 1 -<br>
4 files changed, 129 insertions(+), 89 deletions(-)<br>
<br>
diff --git a/libdtrace/dt_aggregate.c b/libdtrace/dt_aggregate.c<br>
index 44896fd2..14d16da6 100644<br>
--- a/libdtrace/dt_aggregate.c<br>
+++ b/libdtrace/dt_aggregate.c<br>
@@ -412,8 +412,6 @@ typedef void (*agg_cpu_f)(dt_ident_t *aid, int64_t *dst, int64_t *src,<br>
typedef struct dt_snapstate {<br>
dtrace_hdl_t *dtp;<br>
processorid_t cpu;<br>
- char *buf;<br>
- dt_aggregate_t *agp;<br>
} dt_snapstate_t;<br>
<br>
static void<br>
@@ -444,7 +442,9 @@ dt_agg_one_agg(dt_ident_t *aid, int64_t *dst, int64_t *src, uint_t datasz)<br>
static int<br>
dt_aggregate_snap_one(dt_idhash_t *dhp, dt_ident_t *aid, dt_snapstate_t *st)<br>
{<br>
- dt_ahash_t *agh = &st->agp->dtat_hash;<br>
+ dtrace_hdl_t *dtp = st->dtp;<br>
+ dt_aggregate_t *agp = &dtp->dt_aggregate;<br>
+ dt_ahash_t *agh = &agp->dtat_hash;<br>
dt_ahashent_t *h;<br>
dtrace_aggdesc_t *agg;<br>
dtrace_aggdata_t *agd;<br>
@@ -454,12 +454,12 @@ dt_aggregate_snap_one(dt_idhash_t *dhp, dt_ident_t *aid, dt_snapstate_t *st)<br>
uint_t i, datasz;<br>
int64_t *src;<br>
<br>
- rval = dt_aggid_lookup(st->dtp, aid->di_id, &agg);<br>
+ rval = dt_aggid_lookup(dtp, aid->di_id, &agg);<br>
if (rval != 0)<br>
return rval;<br>
<br>
/* point to the data counter */<br>
- src = (int64_t *)(st->buf + aid->di_offset);<br>
+ src = (int64_t *)(agp->dtat_buf + aid->di_offset);<br>
<br>
/* skip it if data counter is 0 */<br>
if (*src == 0)<br>
@@ -487,46 +487,45 @@ dt_aggregate_snap_one(dt_idhash_t *dhp, dt_ident_t *aid, dt_snapstate_t *st)<br>
}<br>
<br>
/* add it to the hash table */<br>
- h = dt_zalloc(st->dtp, sizeof(dt_ahashent_t));<br>
+ h = dt_zalloc(dtp, sizeof(dt_ahashent_t));<br>
if (h == NULL)<br>
- return dt_set_errno(st->dtp, EDT_NOMEM);<br>
+ return dt_set_errno(dtp, EDT_NOMEM);<br>
<br>
agd = &h->dtahe_data;<br>
- agd->dtada_data = dt_alloc(st->dtp, datasz);<br>
+ agd->dtada_data = dt_alloc(dtp, datasz);<br>
if (agd->dtada_data == NULL) {<br>
- dt_free(st->dtp, h);<br>
- return dt_set_errno(st->dtp, EDT_NOMEM);<br>
+ dt_free(dtp, h);<br>
+ return dt_set_errno(dtp, EDT_NOMEM);<br>
}<br>
<br>
memcpy(agd->dtada_data, src, datasz);<br>
agd->dtada_size = datasz;<br>
agd->dtada_desc = agg;<br>
- agd->dtada_hdl = st->dtp;<br>
+ agd->dtada_hdl = dtp;<br>
<br>
h->dtahe_hval = hval;<br>
h->dtahe_size = datasz;<br>
<br>
- if (st->agp->dtat_flags & DTRACE_A_PERCPU) {<br>
- char **percpu = dt_calloc(st->dtp,<br>
- st->dtp->dt_conf.max_cpuid + 1,<br>
+ if (agp->dtat_flags & DTRACE_A_PERCPU) {<br>
+ char **percpu = dt_calloc(dtp, dtp->dt_conf.max_cpuid + 1,<br>
sizeof(char *));<br>
<br>
if (percpu == NULL) {<br>
- dt_free(st->dtp, agd->dtada_data);<br>
- dt_free(st->dtp, h);<br>
+ dt_free(dtp, agd->dtada_data);<br>
+ dt_free(dtp, h);<br>
<br>
- dt_set_errno(st->dtp, EDT_NOMEM);<br>
+ dt_set_errno(dtp, EDT_NOMEM);<br>
}<br>
<br>
- for (i = 0; i <= st->dtp->dt_conf.max_cpuid; i++) {<br>
- percpu[i] = dt_zalloc(st->dtp, datasz);<br>
+ for (i = 0; i <= dtp->dt_conf.max_cpuid; i++) {<br>
+ percpu[i] = dt_zalloc(dtp, datasz);<br>
if (percpu[i] == NULL) {<br>
while (--i >= 0)<br>
- dt_free(st->dtp, percpu[i]);<br>
- dt_free(st->dtp, agd->dtada_data);<br>
- dt_free(st->dtp, h);<br>
+ dt_free(dtp, percpu[i]);<br>
+ dt_free(dtp, agd->dtada_data);<br>
+ dt_free(dtp, h);<br>
<br>
- dt_set_errno(st->dtp, EDT_NOMEM);<br>
+ dt_set_errno(dtp, EDT_NOMEM);<br>
}<br>
}<br>
<br>
@@ -553,14 +552,15 @@ dt_aggregate_snap_one(dt_idhash_t *dhp, dt_ident_t *aid, dt_snapstate_t *st)<br>
static int<br>
dt_aggregate_snap_cpu(dtrace_hdl_t *dtp, processorid_t cpu)<br>
{<br>
- dt_aggregate_t *agp = &dtp->dt_aggregate;<br>
- char *buf = agp->dtat_cpu_buf[cpu];<br>
dt_snapstate_t st;<br>
+ uint32_t key = 0;<br>
<br>
st.dtp = dtp;<br>
st.cpu = cpu;<br>
- st.buf = buf;<br>
- st.agp = agp;<br>
+<br>
+ if (dt_bpf_map_lookup_inner(dtp->dt_aggmap_fd, &cpu, &key,<br>
+ dtp->dt_aggregate.dtat_buf) == -1)<br>
+ return 0;<br>
<br>
return dt_idhash_iter(dtp->dt_aggs,<br>
(dt_idhash_f *)dt_aggregate_snap_one, &st);<br>
@@ -573,22 +573,17 @@ int<br>
dtrace_aggregate_snap(dtrace_hdl_t *dtp)<br>
{<br>
dt_aggregate_t *agp = &dtp->dt_aggregate;<br>
- uint32_t key = 0;<br>
int i, rval;<br>
<br>
/*<br>
* If we do not have a buffer initialized, we will not be processing<br>
* aggregations, so there is nothing to be done here.<br>
*/<br>
- if (agp->dtat_cpu_buf == NULL)<br>
+ if (agp->dtat_buf == NULL)<br>
return 0;<br>
<br>
dtrace_aggregate_clear(dtp);<br>
<br>
- rval = dt_bpf_map_lookup(dtp->dt_aggmap_fd, &key, agp->dtat_buf);<br>
- if (rval != 0)<br>
- return dt_set_errno(dtp, -rval);<br>
-<br>
for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {<br>
rval = dt_aggregate_snap_cpu(dtp, dtp->dt_conf.cpus[i].cpu_id);<br>
if (rval != 0)<br>
@@ -999,41 +994,22 @@ dt_aggregate_go(dtrace_hdl_t *dtp)<br>
dt_aggregate_t *agp = &dtp->dt_aggregate;<br>
dt_ahash_t *agh = &agp->dtat_hash;<br>
int aggsz, i;<br>
- uint32_t key = 0;<br>
<br>
/* If there are no aggregations there is nothing to do. */<br>
aggsz = dt_idhash_datasize(dtp->dt_aggs);<br>
if (aggsz <= 0)<br>
return 0;<br>
<br>
- /*<br>
- * Allocate a buffer to hold the aggregation data for all possible<br>
- * CPUs, and initialize the per-CPU data pointers for CPUs that are<br>
- * currently enabled.<br>
- */<br>
- agp->dtat_buf = dt_zalloc(dtp, dtp->dt_conf.num_possible_cpus * aggsz);<br>
+ /* Allocate a buffer to hold the aggregation data for a CPU. */<br>
+ agp->dtat_buf = dt_zalloc(dtp, aggsz);<br>
if (agp->dtat_buf == NULL)<br>
return dt_set_errno(dtp, EDT_NOMEM);<br>
<br>
- agp->dtat_cpu_buf = dt_calloc(dtp, dtp->dt_conf.max_cpuid + 1,<br>
- sizeof(char *));<br>
- if (agp->dtat_cpu_buf == NULL) {<br>
- dt_free(dtp, agp->dtat_buf);<br>
- return dt_set_errno(dtp, EDT_NOMEM);<br>
- }<br>
-<br>
- for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {<br>
- int cpu = dtp->dt_conf.cpus[i].cpu_id;<br>
-<br>
- agp->dtat_cpu_buf[cpu] = agp->dtat_buf + cpu * aggsz;<br>
- }<br>
-<br>
/* Create the aggregation hash. */<br>
agh->dtah_size = DTRACE_AHASHSIZE;<br>
agh->dtah_hash = dt_zalloc(dtp,<br>
agh->dtah_size * sizeof(dt_ahashent_t *));<br>
if (agh->dtah_hash == NULL) {<br>
- dt_free(dtp, agp->dtat_cpu_buf);<br>
dt_free(dtp, agp->dtat_buf);<br>
return dt_set_errno(dtp, EDT_NOMEM);<br>
}<br>
@@ -1045,15 +1021,13 @@ dt_aggregate_go(dtrace_hdl_t *dtp)<br>
return 0;<br>
*(int64_t *)agp->dtat_buf = 0; /* clear the flag */<br>
for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {<br>
- int cpu = dtp->dt_conf.cpus[i].cpu_id;<br>
+ int cpu = dtp->dt_conf.cpus[i].cpu_id;<br>
+ uint32_t key = 0;<br>
<br>
- /* Data for CPU 0 was populated, so skip it. */<br>
- if (cpu == 0)<br>
+ if (dt_bpf_map_update_inner(dtp->dt_aggmap_fd, &cpu, &key,<br>
+ dtp->dt_aggregate.dtat_buf) == -1)<br>
continue;<br>
-<br>
- memcpy(agp->dtat_cpu_buf[cpu], agp->dtat_buf, aggsz);<br>
}<br>
- dt_bpf_map_update(dtp->dt_aggmap_fd, &key, agp->dtat_buf);<br>
<br>
return 0;<br>
}<br>
@@ -1820,6 +1794,5 @@ dt_aggregate_destroy(dtrace_hdl_t *dtp)<br>
hash->dtah_size = 0;<br>
}<br>
<br>
- dt_free(dtp, agp->dtat_cpu_buf);<br>
dt_free(dtp, agp->dtat_buf);<br>
}<br>
diff --git a/libdtrace/dt_bpf.c b/libdtrace/dt_bpf.c<br>
index 7dea7179..a31ddf95 100644<br>
--- a/libdtrace/dt_bpf.c<br>
+++ b/libdtrace/dt_bpf.c<br>
@@ -351,6 +351,22 @@ dt_bpf_init_helpers(dtrace_hdl_t *dtp)<br>
#undef BPF_HELPER_MAP<br>
}<br>
<br>
+static int<br>
+map_create_error(dtrace_hdl_t *dtp, const char *name, int err)<br>
+{<br>
+ char msg[64];<br>
+<br>
+ snprintf(msg, sizeof(msg),<br>
+ "failed to create BPF map '%s'", name);<br>
+<br>
+ if (err == E2BIG)<br>
+ return dt_bpf_error(dtp, "%s: Too big\n", msg);<br>
+ if (err == EPERM)<br>
+ return dt_bpf_lockmem_error(dtp, msg);<br>
+<br>
+ return dt_bpf_error(dtp, "%s: %s\n", msg, strerror(err));<br>
+}<br>
+<br>
static int<br>
create_gmap(dtrace_hdl_t *dtp, const char *name, enum bpf_map_type type,<br>
size_t ksz, size_t vsz, size_t size)<br>
@@ -369,17 +385,8 @@ create_gmap(dtrace_hdl_t *dtp, const char *name, enum bpf_map_type type,<br>
err = errno;<br>
}<br>
<br>
- if (fd < 0) {<br>
- char msg[64];<br>
-<br>
- snprintf(msg, sizeof(msg),<br>
- "failed to create BPF map '%s'", name);<br>
- if (err == E2BIG)<br>
- return dt_bpf_error(dtp, "%s: Too big\n", msg);<br>
- if (err == EPERM)<br>
- return dt_bpf_lockmem_error(dtp, msg);<br>
- return dt_bpf_error(dtp, "%s: %s\n", msg, strerror(err));<br>
- }<br>
+ if (fd < 0)<br>
+ return map_create_error(dtp, name, err);<br>
<br>
dt_dprintf("BPF map '%s' is FD %d\n", name, fd);<br>
<br>
@@ -421,17 +428,8 @@ create_gmap_of_maps(dtrace_hdl_t *dtp, const char *name,<br>
err = errno;<br>
}<br>
<br>
- if (fd < 0) {<br>
- char msg[64];<br>
-<br>
- snprintf(msg, sizeof(msg),<br>
- "failed to create BPF map '%s'", name);<br>
- if (err == E2BIG)<br>
- return dt_bpf_error(dtp, "%s: Too big\n", msg);<br>
- if (err == EPERM)<br>
- return dt_bpf_lockmem_error(dtp, msg);<br>
- return dt_bpf_error(dtp, "%s: %s\n", msg, strerror(err));<br>
- }<br>
+ if (fd < 0)<br>
+ return map_create_error(dtp, name, err);<br>
<br>
dt_dprintf("BPF map '%s' is FD %d\n", name, fd);<br>
<br>
@@ -470,19 +468,40 @@ gmap_create_state(dtrace_hdl_t *dtp)<br>
* Create the 'aggs' BPF map.<br>
*<br>
* Aggregation data buffer map, associated with each CPU. The map is<br>
- * implemented as a global per-CPU map with a singleton element (key 0).<br>
+ * implemented as a global array-of-maps indexed by CPU id. The associated<br>
+ * value is a map with a singleton element (key 0).<br>
*/<br>
static int<br>
gmap_create_aggs(dtrace_hdl_t *dtp)<br>
{<br>
size_t sz = dt_idhash_datasize(dtp->dt_aggs);<br>
+ int i;<br>
<br>
/* Only create the map if it is used. */<br>
if (sz == 0)<br>
return 0;<br>
<br>
- dtp->dt_aggmap_fd = create_gmap(dtp, "aggs", BPF_MAP_TYPE_PERCPU_ARRAY,<br>
- sizeof(uint32_t), sz, 1);<br>
+ dtp->dt_aggmap_fd = create_gmap_of_maps(dtp, "aggs",<br>
+ BPF_MAP_TYPE_ARRAY_OF_MAPS,<br>
+ sizeof(uint32_t),<br>
+ dtp->dt_conf.num_online_cpus,<br>
+ BPF_MAP_TYPE_ARRAY,<br>
+ sizeof(uint32_t), sz, 1);<br>
+<br>
+ for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {<br>
+ int cpu = dtp->dt_conf.cpus[i].cpu_id;<br>
+ char name[16];<br>
+ int fd;<br>
+<br>
+ snprintf(name, 16, "aggs_%d", cpu);<br>
+ fd = dt_bpf_map_create(BPF_MAP_TYPE_ARRAY, name,<br>
+ sizeof(uint32_t), sz, 1, 0);<br>
+ if (fd < 0)<br>
+ return map_create_error(dtp, name, errno);<br>
+<br>
+ dt_bpf_map_update(dtp->dt_aggmap_fd, &cpu, &fd);<br>
+ }<br>
+<br>
<br>
return dtp->dt_aggmap_fd;<br>
}<br>
diff --git a/libdtrace/dt_cg.c b/libdtrace/dt_cg.c<br>
index 0963f202..157f4861 100644<br>
--- a/libdtrace/dt_cg.c<br>
+++ b/libdtrace/dt_cg.c<br>
@@ -52,11 +52,13 @@ dt_cg_tramp_prologue_act(dt_pcb_t *pcb, dt_activity_t act)<br>
{<br>
dtrace_hdl_t *dtp = pcb->pcb_hdl;<br>
dt_irlist_t *dlp = &pcb->pcb_ir;<br>
+ dt_ident_t *aggs = dt_dlib_get_map(dtp, "aggs");<br>
dt_ident_t *mem = dt_dlib_get_map(dtp, "mem");<br>
dt_ident_t *state = dt_dlib_get_map(dtp, "state");<br>
dt_ident_t *prid = dt_dlib_get_var(pcb->pcb_hdl, "PRID");<br>
uint_t lbl_exit = pcb->pcb_exitlbl;<br>
<br>
+ assert(aggs != NULL);<br>
assert(mem != NULL);<br>
assert(state != NULL);<br>
assert(prid != NULL);<br>
@@ -206,13 +208,60 @@ dt_cg_tramp_prologue_act(dt_pcb_t *pcb, dt_activity_t act)<br>
DT_CG_STORE_MAP_PTR("strtab", DCTX_STRTAB);<br>
if (dtp->dt_options[DTRACEOPT_SCRATCHSIZE] > 0)<br>
DT_CG_STORE_MAP_PTR("scratchmem", DCTX_SCRATCHMEM);<br>
- if (dt_idhash_datasize(dtp->dt_aggs) > 0)<br>
- DT_CG_STORE_MAP_PTR("aggs", DCTX_AGG);<br>
if (dt_idhash_datasize(dtp->dt_globals) > 0)<br>
DT_CG_STORE_MAP_PTR("gvars", DCTX_GVARS);<br>
if (dtp->dt_maxlvaralloc > 0)<br>
DT_CG_STORE_MAP_PTR("lvars", DCTX_LVARS);<br>
#undef DT_CG_STORE_MAP_PTR<br>
+<br>
+ /*<br>
+ * Aggregation data is stored in a CPU-specific BPF map. Populate<br>
+ * dctx->agg with the map for the current CPU.<br>
+ *<br>
+ * key = bpf_get_smp_processor_id()<br>
+ * // call bpf_get_smp_processor_id<br>
+ * // (%r1 ... %r5 clobbered)<br>
+ * // (%r0 = 'aggs' BPF map value)<br>
+ * // stw [%r9 + DCTX_AGG], %r0<br>
+ * rc = bpf_map_lookup_elem(&aggs, &key);<br>
+ * // lddw %r1, &aggs<br>
+ * // mov %r2, %r9<br>
+ * // add %r2, DCTX_AGG<br>
+ * // call bpf_map_lookup_elem<br>
+ * // (%r1 ... %r5 clobbered)<br>
+ * // (%r0 = 'aggs' BPF map value)<br>
+ * if (rc == 0) // jeq %r0, 0, lbl_exit<br>
+ * goto exit;<br>
+ *<br>
+ * key = 0; // stw [%r9 + DCTX_AGG], 0<br>
+ * rc = bpf_map_lookup_elem(rc, &key);<br>
+ * // mov %r1, %r0<br>
+ * // mov %r2, %r9<br>
+ * // add %r2, DCTX_AGG<br>
+ * // call bpf_map_lookup_elem<br>
+ * // (%r1 ... %r5 clobbered)<br>
+ * // (%r0 = aggs[cpuid] BPF map value)<br>
+ * if (rc == 0) // jeq %r0, 0, lbl_exit<br>
+ * goto exit;<br>
+ *<br>
+ * dctx.aggs = rc; // stdw [%r9 + offset], %r0<br>
+ */<br>
+ if (dt_idhash_datasize(dtp->dt_aggs) > 0) {<br>
+ emit(dlp, BPF_CALL_HELPER(BPF_FUNC_get_smp_processor_id));<br>
+ emit(dlp, BPF_STORE(BPF_DW, BPF_REG_9, DCTX_AGG, BPF_REG_0));<br>
+ dt_cg_xsetx(dlp, aggs, DT_LBL_NONE, BPF_REG_1, aggs->di_id);<br>
+ emit(dlp, BPF_MOV_REG(BPF_REG_2, BPF_REG_9));<br>
+ emit(dlp, BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, DCTX_AGG));<br>
+ emit(dlp, BPF_CALL_HELPER(BPF_FUNC_map_lookup_elem));<br>
+ emit(dlp, BPF_BRANCH_IMM(BPF_JEQ, BPF_REG_0, 0, lbl_exit));<br>
+ emit(dlp, BPF_STORE_IMM(BPF_DW, BPF_REG_9, DCTX_AGG, 0));<br>
+ emit(dlp, BPF_MOV_REG(BPF_REG_1, BPF_REG_0));<br>
+ emit(dlp, BPF_MOV_REG(BPF_REG_2, BPF_REG_9));<br>
+ emit(dlp, BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, DCTX_AGG));<br>
+ emit(dlp, BPF_CALL_HELPER(BPF_FUNC_map_lookup_elem));<br>
+ emit(dlp, BPF_BRANCH_IMM(BPF_JEQ, BPF_REG_0, 0, lbl_exit));<br>
+ emit(dlp, BPF_STORE(BPF_DW, BPF_REG_9, DCTX_AGG, BPF_REG_0));<br>
+ }<br>
}<br>
<br>
void<br>
diff --git a/libdtrace/dt_impl.h b/libdtrace/dt_impl.h<br>
index 85a1e7c9..e9b949ca 100644<br>
--- a/libdtrace/dt_impl.h<br>
+++ b/libdtrace/dt_impl.h<br>
@@ -209,7 +209,6 @@ typedef struct dt_tstring {<br>
} dt_tstring_t;<br>
<br>
typedef struct dt_aggregate {<br>
- char **dtat_cpu_buf; /* per-CPU agg snapshot buffers */<br>
char *dtat_buf; /* aggregation snapshot buffer */<br>
int dtat_flags; /* aggregate flags */<br>
dt_ahash_t dtat_hash; /* aggregate hash table */<br>
-- <br>
2.34.1<br>
<br>
<br>
_______________________________________________<br>
DTrace-devel mailing list<br>
DTrace-devel@oss.oracle.com<br>
<a href="https://oss.oracle.com/mailman/listinfo/dtrace-devel">https://oss.oracle.com/mailman/listinfo/dtrace-devel</a><br>
</div>
</span></font></div>
</body>
</html>