POC 03 · Reproducible now

Ads Attribution Reconciliation

Shapley-value and Markov-chain attribution — bit-identical across every platform, with an optional deterministic differential-privacy noise layer.

  • 32/32 receipt hashes identical (cross-platform)
  • 6.1M real Criteo journeys tested
  • 199/675 float64 campaigns that drift (same machine)
  • ε-DP deterministic privacy overlay

The scenario

Set the picture

A top-200 advertiser looks at three measurement paths: Meta’s in-platform reporting, the advertiser’s Conversions API, and Marketing Mix Modeling. All three answer “how much credit does this ad get?” with different numbers. The deltas absorb thousands of account-team hours.

Not all of that disagreement is numerical; much is methodological, and SolvNum doesn't fix methodology differences. It fixes the arithmetic substrate: even with the same attribution method on the same data, the numbers drift across platforms because of libm disagreements and reduction-order dependence.

Cost today

DMA Article 5(9) requires advertisers to independently verify ad performance. DSA Article 40 requires researcher reproducibility. Both imply that given the same inputs, the reporting should reproduce. Today it does not.

Shapley attribution involves summing over permutations. Markov attribution involves repeated matrix-vector products. Both accumulate arithmetic drift across thousands of chains per user journey.
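
Why reduction order matters is easy to see in a few lines. A minimal illustration (hypothetical, not SolvNum code): the same credits summed in three different shapes give three different float64 results.

```python
# Minimal illustration (not SolvNum code): float64 addition is not
# associative, so regrouping the same credits changes the low bits.
import random

random.seed(2026)
credits = [random.uniform(0.0, 1.0) * 10.0 ** random.randint(-8, 8)
           for _ in range(10_000)]

forward = sum(credits)
backward = sum(reversed(credits))
# chunked reduction: the shape a different BLAS or thread count produces
chunked = sum(sum(credits[i:i + 64]) for i in range(0, len(credits), 64))

print(forward == backward, forward == chunked)   # typically: False False
print(f"max regrouping delta: {abs(forward - chunked):.3e}")
```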

What changes with SolvNum

On the public Criteo Attribution dataset (6,142,256 journeys, 675 campaigns): SolvNum-backed Shapley and Markov produce bit-identical receipts across Windows and Linux. 32/32 receipt hashes identical in every cross-platform comparison.
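
A receipt pins down the exact float64 bit pattern of every campaign's credit, serialized in a fixed order and hashed. A hypothetical sketch of the idea (the real format lives in bench.py; the field layout below is an assumption):

```python
# Hypothetical receipt-hash sketch (field layout is an assumption, not the
# demo's actual format). Bit-identical credits on two platforms give the
# same digest; a single flipped mantissa bit anywhere changes it entirely.
import hashlib
import struct

def receipt_hash(credits: dict[int, float]) -> str:
    h = hashlib.sha256()
    for campaign_id in sorted(credits):           # fixed serialization order
        h.update(struct.pack("<qd", campaign_id, credits[campaign_id]))
    return h.hexdigest()

print(receipt_hash({101: 0.3125, 202: 0.6875}))
```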

float64_naive Markov drifts on the same machine under the mv_chunked proxy (a BLAS-swap simulation): 199/675 campaigns disagree, max absolute Δ = 2×10⁻¹². The drift is small per campaign, but every disagreeing bit propagates into the receipt hash.
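
A minimal stand-in for that proxy, assuming mv_chunked means splitting the matrix-vector product into column blocks (an assumption about what the bench simulates):

```python
# Minimal mv_chunked-style proxy (the column-block split is an assumption
# about what the bench simulates): same product, two partial-sum shapes.
import numpy as np

rng = np.random.default_rng(4242)
P = rng.random((675, 675))
P /= P.sum(axis=1, keepdims=True)        # row-stochastic, transition-like
v = rng.random(675)

naive = P @ v
chunked = sum(P[:, i:i + 128] @ v[i:i + 128] for i in range(0, 675, 128))

diff = np.abs(naive - chunked)
# typically some rows differ at the ~1e-16 scale; repeated chain steps
# compound the disagreement
print(int((diff > 0).sum()), f"max |delta| = {diff.max():.3e}")
```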

Optional deterministic-DP overlay: Gaussian noise generated from a committed cryptographic key. SolvNum’s quantization absorbs cross-platform math-library divergence in the noise draw. Privacy review charter documented in full.
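
One way such an overlay can work, sketched under assumptions (keyed-hash Box-Muller, decimal-grid quantization; neither is claimed to be SolvNum's actual construction):

```python
# Hedged sketch (derivation and grid are assumptions, not SolvNum's actual
# construction): a Gaussian draw keyed to a committed secret, then snapped
# to a fixed grid so sub-grid libm differences in log/cos/sqrt cannot
# change the published value.
import hashlib
import math

def committed_gaussian(key: bytes, campaign_id: int, sigma: float) -> float:
    d = hashlib.sha256(key + campaign_id.to_bytes(8, "little")).digest()
    u1 = (int.from_bytes(d[:8], "little") + 1) / 2.0 ** 64   # (0, 1]
    u2 = int.from_bytes(d[8:16], "little") / 2.0 ** 64       # [0, 1)
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return round(z * sigma, 9)                               # quantize

print(committed_gaussian(b"published-commitment", campaign_id=42, sigma=0.5))
```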

Measurable outcome

What we claim — and how it survives review

Each line below maps to a captured number in the demo section. Every number is reproducible from the benchmark suite.

  • 32/32 receipt hashes identical across Windows-x64 and Linux-x64 (synth pilot + real Criteo).
  • 32/32 receipt hashes identical across WSL host and dedicated Linux server (real Criteo, 6.1M journeys).
  • Float64 drift surface: markov × float64_naive × mv_chunked = 199/675 campaigns differ on same machine.
  • Real cross-platform math-library divergence captured and absorbed by SolvNum quantization (noise draw in DP overlay).
  • Three-sided handshake demonstrated: platform issues, creator verifies, regulator re-derives from published artifact (sketched below).
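
A hedged sketch of that handshake, with illustrative names and a stand-in linear-credit rule in place of the real Shapley/Markov code:

```python
# Hedged sketch of the three-sided handshake (names are illustrative, not
# the demo's API). The loop only closes if attribute() is bit-deterministic
# on every party's machine, which is the property the receipts certify.
import hashlib
import json

def attribute(journeys):
    # stand-in linear-credit rule, not the real Shapley/Markov method
    credits = {}
    for j in journeys:
        share = 1.0 / len(j["touchpoints"])
        for c in j["touchpoints"]:
            credits[c] = credits.get(c, 0.0) + share
    return credits

def receipt(credits):
    blob = json.dumps(credits, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

journeys = [{"touchpoints": [1, 2]}, {"touchpoints": [2]}]

issued = receipt(attribute(journeys))      # 1. platform issues
verified = receipt(attribute(journeys))    # 2. creator re-runs and compares
re_derived = receipt(attribute(journeys))  # 3. regulator re-derives
assert issued == verified == re_derived
```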

The demo

What was tested. How. What the script printed.

32-receipt grid: 2 methods (Shapley, Markov) × 4 implementations (float64_naive, float64_fixed_order, solvnum_backed, solvnum_dp) × 4 platform-shape proxies (ref, shuffle_seed_2026, shuffle_seed_4242, mv_chunked).

Runs cover the synth pilot (1,500 journeys × 10 campaigns) and the full real Criteo set (6,142,256 journeys × 675 campaigns). Cross-platform verification uses verify_xplatform.py to compare receipt snapshots from the two hosts.
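
What that comparison reduces to (verify_xplatform.py is the real tool; the flat grid-cell-to-hash snapshot layout below is an assumption):

```python
# Hedged sketch of the cross-platform comparison (verify_xplatform.py is
# the real tool; this flat grid-cell -> hash snapshot layout is an
# assumption). 32/32 means this returns zero mismatches.
import json

def compare_snapshots(path_a: str, path_b: str) -> int:
    with open(path_a) as fa, open(path_b) as fb:
        a, b = json.load(fa), json.load(fb)
    mismatches = [k for k in sorted(set(a) | set(b)) if a.get(k) != b.get(k)]
    for key in mismatches:
        print(f"MISMATCH {key}: {a.get(key)} != {b.get(key)}")
    return len(mismatches)

# e.g. compare_snapshots("receipts_windows.json", "receipts_linux.json")
```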

Captured benchmark output

The numbers the script actually printed.

Attribution receipt grid — real Criteo (6.1M journeys, 675 campaigns)
Method    Implementation    ref   shuffle_2026   shuffle_4242   mv_chunked
shapley   float64_naive     ✓     ✓              ✓              ✓
shapley   solvnum_backed    ✓     ✓              ✓              ✓
shapley   solvnum_dp        ✓     ✓              ✓              ✓
markov    float64_naive     ✓     ✓              ✓              ✗ DRIFT
markov    solvnum_backed    ✓     ✓              ✓              ✓
markov    solvnum_dp        ✓     ✓              ✓              ✓

The single drifting cell (markov × float64_naive × mv_chunked) is the first in-tree drift on real data. SolvNum holds across every axis.

Evidence pointers

Where the claims live in the repo

These are the files a reviewer should run to re-derive every number on this page.

  • tools/solvnum/buyer_pocs/attribution_measurement/bench.py
  • tools/solvnum/buyer_pocs/attribution_measurement/verify_xplatform.py
  • tools/solvnum/buyer_pocs/reports/receipts/_attribution_criteo_full/
  • tools/solvnum/buyer_pocs/reports/receipts/_attribution_criteo_linux/
  • docs/poc/03_attribution_reconciliation.md
  • docs/poc/03_attribution_reconciliation_xplat_evidence.md

Want to see these receipts on your pipeline?

Run the benchmark against your actual decision pipeline.

Two weeks, $25K, fully credited. No production integration, no data leaving your premises. Every claim above traces back to a script you can run locally.

Talk to us