The First-Party B2B Intent Data Measurement Reference 2026

By Dale Brett, Founder & CEO, FL0. April 2026.

Most B2B intent programs are measured the wrong way. The common instinct is to grade first-party data by the same yardsticks used for third-party bidstream: volume of accounts surfaced, surge counts, a vendor-supplied "accuracy" percentage on the dashboard. Those metrics look reassuring, and they mislead. First-party signals come from a narrower, deeper funnel, sit under a different regulatory regime, and resolve identity on the brand's own tag, not on a network. Measuring them looks more like product analytics: coverage, identity, latency, consented volume, and downstream lift against closed-won. At FL0 we see revenue teams install a polished tag, declare victory, and then quietly underreport pipeline for two quarters because no one defined what "good" looked like. This reference documents the measurement model we recommend, the KPIs that survive scrutiny, the published benchmarks that hold up, and the parts of the space where honest numbers do not yet exist.

Methodology

This reference covers how to measure a first-party B2B intent program: the KPIs, the accuracy math, the identity and latency metrics, the attribution models, and the regulatory exposures that bound what can be measured at all. Every factual claim is traced to a primary source, a regulator, a published vendor document, or reputable trade press confirming a primary source. Vendor-published statistics are labeled inline as vendor-published, in line with the Foundry first-party, second-party, third-party framing and IAB data definitions. Anything that surfaced in research but could not be verified against a primary source, including commonly-cited Forrester productivity percentages and "X% of marketers" surveys that trace only to vendor blogs, was dropped rather than hedged. Pricing, headcount, ARR, funding totals, and G2 review counts are omitted because they move faster than a blog can be updated. Second-party review-site intent, primarily G2 signal measured by Dreamdata, is included where teams commonly activate it alongside owned signals. The goal is a reference useful to a head of RevOps in 2026 and still factually defensible in 2027.

What first-party B2B intent data measurement actually measures

First-party B2B intent data is the set of behavioral signals a company collects on properties it owns: its marketing site, its product, its docs, its API reference, its email program, its community, its webinar tooling (Foundry; IAB). Measuring that program means answering five questions, and a plan that skips any one of them is incomplete.

First, identity: of the sessions landing on owned properties, what share is resolved to a known account or contact, and what is the provenance of each resolution step (Forrester; Segment). Second, coverage: of the account universe the revenue team cares about, what share produced any first-party signal in the window. Third, latency: the time from signal to sales action, which is the difference between a program that drives pipeline and one that generates reports (Hightouch). Fourth, consent-adjusted volume: raw event counts mean nothing if the consent platform drops half of them before they reach the warehouse, and GDPR plus CCPA guarantee that some share will (Usercentrics; ICO; Perkins Coie). Fifth, downstream lift: pipeline and win rate attributable to the program, separated from the baseline the company would have hit without it (Dreamdata). Anything else, including surge counts and composite "intent scores", is an intermediate artifact, not a KPI.

Reporting only surge counts is the equivalent of a sales team reporting only on activity. It is measurable, it moves, and it does not tell anyone whether the program is working. FL0's position is that first-party measurement should look more like product analytics and less like a media buy.

How to measure first-party B2B intent data accuracy

Accuracy is the most abused number in this space, and almost every vendor quotes it differently. For first-party programs it decomposes into four separable metrics, and the plan should report all four rather than collapse them into one percentage.

Event delivery fidelity. Of events the tag fires, what share arrives in the warehouse, deduplicated, with no data loss. Snowplow's canonical event is the reference spec; Segment's tracking spec and GA4 events converge on the same primitives. Report as a percentage, not a count, because counts hide regressions.
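
The arithmetic is simple; the discipline is capturing both sides. A minimal sketch, assuming each event carries a unique event ID recorded in both the tag logs and the warehouse (the identifiers and function name here are illustrative, not any vendor's spec):

```python
# Sketch: event delivery fidelity as a percentage. Assumes each event
# carries a unique event_id visible on both the tag side and the
# warehouse side; field names are illustrative.
def delivery_fidelity(fired_ids, warehoused_ids):
    """Share of fired events that arrived in the warehouse, deduplicated."""
    fired = set(fired_ids)          # dedupe tag-side retries
    arrived = set(warehoused_ids)   # dedupe warehouse-side duplicates
    if not fired:
        return 0.0
    return 100.0 * len(fired & arrived) / len(fired)

# 4 distinct events fired, 3 arrived (one of them duplicated) -> 75.0
print(delivery_fidelity(["a", "b", "c", "d"], ["a", "b", "b", "c"]))
```

Reporting the ratio rather than raw counts is what makes a regression visible: a count can keep growing while the percentage quietly falls.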

Identity resolution accuracy. What share of sessions is resolved to an account or contact, and what share of those resolutions is correct. The second half is what most vendors do not answer. Forrester positions it as a core B2B CDP capability; Segment frames it as unifying fragmented identifiers. Warmly's own blog reports average match rates of 15% for individuals and 65% for companies (Warmly); that is vendor-published, and not a statement about correctness. The gap between "we matched this IP to this account" and "that account is the one in-market" is where the real debate hides.

Attribute accuracy post-enrichment. After resolution, records are enriched with firmographics, technographics, and contact data. Measuring attribute accuracy means auditing a sample against a ground-truth CRM and reporting match rate per attribute. Per IBM's data quality framework, accuracy is one of six dimensions alongside completeness, consistency, timeliness, validity, and uniqueness.
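
An audit of that shape can be sketched in a few lines, assuming a small ground-truth CRM sample keyed by account ID; the attribute names are illustrative, not a schema:

```python
# Sketch: per-attribute match rate of enriched records against a
# ground-truth CRM sample. Attribute and field names are illustrative.
def attribute_match_rates(enriched, ground_truth, attributes):
    """Return {attribute: % of audited records where enrichment matches CRM}."""
    rates = {}
    for attr in attributes:
        audited = matched = 0
        for account_id, truth in ground_truth.items():
            record = enriched.get(account_id)
            if record is None or attr not in truth:
                continue
            audited += 1
            if record.get(attr) == truth[attr]:
                matched += 1
        rates[attr] = 100.0 * matched / audited if audited else None
    return rates

enriched = {"a1": {"industry": "SaaS", "employees": "51-200"},
            "a2": {"industry": "Fintech", "employees": "11-50"}}
truth =    {"a1": {"industry": "SaaS", "employees": "201-500"},
            "a2": {"industry": "Fintech", "employees": "11-50"}}
print(attribute_match_rates(enriched, truth, ["industry", "employees"]))
# industry matches 2/2, employees only 1/2
```

Reporting per attribute is the point: a blended "enrichment accuracy" number hides the one field sales actually routes on.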

Signal-to-outcome accuracy. Whether accounts classified as high-intent actually convert. If the top quartile by signal score does not outperform the bottom on pipeline and closed-won, the scoring model is wrong regardless of dashboard figures (Dreamdata).

FL0 treats those four as separately measurable, because a program can have high event delivery, low identity resolution, reasonable attribute accuracy, and broken signal-to-outcome alignment all at once.

The first-party B2B intent data KPIs reference

The KPI set below is the shortlist that holds up under audit. Every metric is observable in a warehouse-native stack and has a defensible definition.

Tag firing rate. Share of qualifying page views where the tag fired and delivered a valid event, measured against server logs. Below 95% on core pages is an operations issue before it is an intent issue.

Consent acceptance rate. Of sessions shown the CMP banner, what share granted consent, by geography. ICO guidance on legitimate interests is why sophisticated teams split consent from legitimate-interest processing rather than bundling both into cookie consent.
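
Computed per geography, the rate is a straightforward aggregation. A sketch, assuming CMP banner events of the illustrative form (geo, granted):

```python
# Sketch: consent acceptance rate by geography from CMP banner events.
# The (geo, granted) event shape is illustrative, not a CMP's API.
from collections import defaultdict

def consent_rate_by_geo(banner_events):
    shown = defaultdict(int)
    granted = defaultdict(int)
    for geo, was_granted in banner_events:
        shown[geo] += 1
        if was_granted:
            granted[geo] += 1
    return {geo: 100.0 * granted[geo] / shown[geo] for geo in shown}

events = [("DE", False), ("DE", True), ("US", True), ("US", True)]
print(consent_rate_by_geo(events))  # DE 50.0, US 100.0
```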

Identity resolution rate, by grade. For each session: first-party identifier (logged-in user, CRM cookie), enrichment (IP or device to account), self-identification (form fill, gated asset), or unresolved. Warmly's published averages of 15% for individuals and 65% for companies are a useful sanity check at the top of the enrichment grade (vendor-published).
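
A sketch of the grading, with illustrative grade names and session fields (no vendor's schema is implied):

```python
# Sketch: grade each session's identity resolution by provenance,
# then report the distribution. Grades and field names are illustrative.
GRADES = ["first_party_id", "self_identified", "enriched", "unresolved"]

def resolution_grade(session):
    if session.get("user_id"):      # logged-in user or CRM cookie
        return "first_party_id"
    if session.get("form_email"):   # form fill or gated asset
        return "self_identified"
    if session.get("ip_account"):   # IP or device matched to an account
        return "enriched"
    return "unresolved"

def grade_distribution(sessions):
    counts = {g: 0 for g in GRADES}
    for s in sessions:
        counts[resolution_grade(s)] += 1
    total = len(sessions) or 1
    return {g: 100.0 * n / total for g, n in counts.items()}

sessions = [{"user_id": "u1"}, {"form_email": "lead@acme.co"},
            {"ip_account": "acme"}, {}]
print(grade_distribution(sessions))  # each grade at 25.0
```

The ordering of the checks encodes the grade hierarchy: a logged-in match wins even when an IP match is also present.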

Account coverage rate. Of the tracked named-account universe, what share produced any first-party signal in the window. Tells demand whether paid spend is landing the right accounts.

Signal-weighted account score distribution. Not one number: scores by decile. A program where score deciles correlate monotonically with historical closed-won is working.
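
The decile table is a few lines of code once scores and outcomes sit in the same warehouse. A sketch, assuming (score, closed_won) pairs per account:

```python
# Sketch: score deciles vs historical closed-won rate. A healthy program
# shows the win rate rising monotonically with the decile.
def decile_table(accounts):
    """accounts: list of (score, closed_won_bool).
    Returns [(decile, closed_won_rate_pct), ...] from lowest to highest."""
    ranked = sorted(accounts, key=lambda a: a[0])
    n = len(ranked)
    table = []
    for d in range(10):
        chunk = ranked[d * n // 10:(d + 1) * n // 10]
        if not chunk:
            table.append((d + 1, None))
            continue
        wins = sum(1 for _, won in chunk if won)
        table.append((d + 1, 100.0 * wins / len(chunk)))
    return table

# Scores 1-20; only the top five accounts closed, so top deciles win more.
accounts = [(score, score > 15) for score in range(1, 21)]
print(decile_table(accounts)[-1])  # (10, 100.0)
```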

Latency from signal to action. Elapsed time from a qualifying signal to the first sales action. Koala and Hightouch both treat sub-minute as the design target. Over 24 hours is batch, not real-time.

Source-of-truth event coverage. Share of the canonical event dictionary firing in production on any given day. Ship features without tag coverage and you accumulate invisible gaps.

Closed-won lift vs. baseline. Compare win rate of accounts with high first-party signal density against matched accounts without, over a fixed window, controlling for firmographic fit. Dreamdata's G2 benchmark study is the nearest public analog, reporting that comparison-page sessions influenced nearly 15% of closed-won deals per session, over 3x more than Product profile signals and 5x more than Category signals. That is the shape of the reporting; the number has to be computed on the team's own data.

Compliance drop rate. Events dropped or redacted because of consent rules, retention, or regulatory classification. Reporting this separates a mature program from a theatrical one.
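
A sketch of the reporting, with illustrative reason codes:

```python
# Sketch: compliance drop rate, overall and by reason code.
# The reason codes and event shape are illustrative.
from collections import Counter

def drop_rate_by_reason(events):
    """events: list of dicts with an optional 'dropped_reason'.
    Returns overall drop rate plus a per-reason breakdown, as percentages."""
    total = len(events)
    reasons = Counter(e["dropped_reason"] for e in events if e.get("dropped_reason"))
    dropped = sum(reasons.values())
    return {
        "drop_rate": 100.0 * dropped / total if total else 0.0,
        "by_reason": {r: 100.0 * n / total for r, n in reasons.items()},
    }

events = [{}, {}, {"dropped_reason": "consent_denied"},
          {"dropped_reason": "retention_expired"}]
print(drop_rate_by_reason(events))  # half of events dropped; a quarter per reason
```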

Those nine KPIs are the reference set. FL0 recommends starting with exactly this list and adding a metric only when a specific business question cannot be answered by what is already there.

The canonical KPI table

The table below is the compact reference, sorted alphabetically by KPI.

| KPI | What it measures | How to compute | Primary source |
|---|---|---|---|
| Account coverage rate | Share of tracked account universe producing any first-party signal in window | Unique accounts with any event / total tracked accounts | Foundry first-party, second-party, third-party framing |
| Closed-won lift vs baseline | Conversion lift of high-signal accounts over matched baseline | Win rate of top-quartile signal accounts / win rate of matched no-signal accounts | Dreamdata G2 benchmark study methodology |
| Compliance drop rate | Events dropped or redacted because of consent or retention rules | Dropped events / total events, by reason code | Usercentrics on server-side tagging and consent; ICO on lawful basis |
| Consent acceptance rate | Share of CMP-banner sessions granting marketing and analytics consent | Consents granted / banner impressions, by geography | ICO guidance on legitimate interests |
| Event delivery fidelity | Share of fired events arriving complete in the warehouse | Warehoused events / tag-fired events | Snowplow canonical event; Segment track spec |
| Identity resolution rate | Share of sessions resolved to account or contact, by grade | Resolved sessions per grade / total sessions | Forrester on identity resolution; Segment on identity resolution |
| Latency, signal to action | Elapsed time from qualifying signal to first sales action | Action timestamp minus signal timestamp, p50 and p95 | Koala intent signals; Hightouch reverse ETL |
| Signal-weighted score distribution | Shape of account score distribution and correlation to closed-won | Decile table of scores vs. historical closed-won rate | Dreamdata |
| Tag firing rate | Share of qualifying page views where the tag fires and delivers a valid event | Tagged page views / total page views in server logs | Segment common spec; GA4 events |

The identity layer and why it dominates the error budget

Identity resolution is where first-party measurement most commonly fails, and the failure is almost always silent. A program can report a healthy tag firing rate and a strong consent acceptance rate and still miss half the addressable signal because identity resolution underperforms. Forrester frames real-time identity resolution as a core B2B CDP capability, not a nice-to-have (Forrester).

The mechanism is usually an identity graph stitching together logged-in user IDs, CRM cookie matches, email hashes from form fills, IP and device fingerprints, and enrichment providers' account graphs. Segment documents this as a core CDP function, and Twilio's 2020 acquisition of Segment positioned identity as a first-class platform capability (Diginomica). Warehouse-native CDPs like RudderStack push the graph into the warehouse itself so it can be interrogated and audited rather than hidden in a vendor profile store. IBM's data governance framing treats identity as master data management, the right lens for measurement: provenance per identifier, freshness per identifier, match confidence per pairwise stitch.

Report identity resolution as a distribution across grades, never as a single number. A session resolved because the user is logged in is a higher-grade resolution than one resolved by IP-to-account matching. Every grade has a different accuracy profile and a different downstream confidence in sales handoff. FL0 reports identity as a grade distribution because a single rate hides composition and therefore hides risk.

Latency: the metric most teams do not measure

Latency from signal to sales action is the metric most first-party programs underreport, and it is frequently the variable with the largest effect on outcomes. A visitor on the pricing page emailed 48 hours later sees a fraction of the conversion rate of a visitor engaged in minutes. Warmly frames itself around real-time identification and autonomous outbound, Koala ships intent signals routed directly to sales, and activation vendors like Hightouch and Census design for sub-minute syncs. Fivetran's 2025 agreement to acquire Census is explicitly framed around end-to-end movement for the AI era (Fivetran; TechTarget), and end-to-end latency is a large part of the rationale.

The metric to report is not a mean, it is a distribution: p50, p90, p95, p99 elapsed time from qualifying signal timestamp to the first sales action (task created, email sent, call logged, sequence started). Means hide long tails, and in latency work the long tail is where the money goes. Snowplow's canonical event model is the reference for distinguishing event-time from ingestion-time, which can differ by seconds to minutes in streaming pipelines and silently inflate reported latency. FL0's design target for the high-intent path is sub-minute. Teams that cannot measure p95 should fix measurement before they try to improve it.
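
The percentile computation itself can be sketched as follows, assuming event-time timestamps on both the qualifying signal and the first sales action (nearest-rank percentiles; epoch seconds are illustrative):

```python
# Sketch: latency distribution from qualifying signal to first sales
# action, using event-time on both sides. Nearest-rank percentiles.
def latency_percentiles(pairs, quantiles=(0.50, 0.90, 0.95, 0.99)):
    """pairs: list of (signal_ts, first_action_ts). Returns {pXX: seconds}."""
    deltas = sorted(action - signal for signal, action in pairs)
    out = {}
    for q in quantiles:
        idx = min(len(deltas) - 1, int(q * len(deltas)))
        out[f"p{int(q * 100)}"] = deltas[idx]
    return out

pairs = [(0, 10 * (i + 1)) for i in range(10)]   # deltas of 10s .. 100s
print(latency_percentiles(pairs))  # p50 sits at 60s; p95 and p99 at the 100s tail
```

The same few lines make the mean-vs-tail argument concrete: the mean of those deltas is 55 seconds, while p95 is nearly double that.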

Attribution: what closed-won lift actually requires

Attribution is the downstream end of the measurement model and where most first-party programs either underreport or overreport their contribution. Last-touch systematically undercounts first-party signals because B2B buying cycles span months and dozens of touchpoints, a point Dreamdata has documented. First-touch does the opposite, overcounting brand channels and undercounting late-stage signals that actually predict the deal.

The method that holds up for first-party measurement is matched-cohort comparison: take all accounts producing a defined signal pattern in a window, match them by firmographics, fit score, and prior engagement to a cohort that did not, and report the win-rate delta across a fixed period. This is methodologically closer to Dreamdata's G2 benchmark study than to how most vendor dashboards report lift.
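
A greedy one-to-one match is enough to sketch the method. The matching key below (industry plus size band) is deliberately simplified; a real program would also match on fit score and prior engagement, as described above:

```python
# Sketch: matched-cohort closed-won lift. Field names and the matching
# key are illustrative; real matching would include fit score and
# prior engagement.
def matched_cohort_lift(signal_accounts, baseline_accounts):
    """Each account: dict with 'industry', 'size_band', 'won' (bool).
    Returns win-rate delta in percentage points over matched pairs."""
    def key(a):
        return (a["industry"], a["size_band"])
    baseline_by_key = {}
    for a in baseline_accounts:
        baseline_by_key.setdefault(key(a), []).append(a)
    signal_wins = base_wins = matched = 0
    for a in signal_accounts:
        pool = baseline_by_key.get(key(a))
        if not pool:
            continue  # no firmographic match; exclude from the comparison
        match = pool.pop()
        matched += 1
        signal_wins += a["won"]
        base_wins += match["won"]
    if not matched:
        return None
    return 100.0 * (signal_wins - base_wins) / matched

signal = [{"industry": "saas", "size_band": "mid", "won": True},
          {"industry": "saas", "size_band": "mid", "won": True}]
baseline = [{"industry": "saas", "size_band": "mid", "won": False},
            {"industry": "saas", "size_band": "mid", "won": True}]
print(matched_cohort_lift(signal, baseline))  # +50 points: 100% vs 50% win rate
```

Excluding unmatched accounts, rather than comparing against the whole no-signal population, is what keeps the lift estimate from being a firmographic-fit artifact.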

A second defensible method is multi-touch attribution with a specified model (time-decay, U-shape, W-shape), provided the weights are disclosed. Opaque "AI attribution" that cannot be explained to a CFO is a governance problem dressed up as insight. FL0's recommendation is to run matched-cohort and multi-touch in parallel and reconcile, the same discipline a finance team applies to any revenue-adjacent number.
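
Disclosed weights are the point. A time-decay sketch with an illustrative 7-day half-life (the half-life is a parameter the team chooses and publishes, not a standard):

```python
# Sketch: time-decay multi-touch weights with a disclosed half-life.
# The 7-day default is illustrative, not a standard.
import math

def time_decay_weights(touch_days_before_close, half_life_days=7.0):
    """Credit per touchpoint: weight halves for every half-life of
    distance from close, normalized so the credits sum to 1."""
    raw = [math.pow(0.5, d / half_life_days) for d in touch_days_before_close]
    total = sum(raw)
    return [w / total for w in raw]

# A touch on close day gets exactly twice the credit of one 7 days out.
print(time_decay_weights([0, 7, 14]))
```

A CFO can re-derive every number in that table from the half-life and the touch dates, which is precisely what an opaque "AI attribution" score does not allow.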

Privacy and compliance context that bounds measurement

First-party B2B intent is not exempt from privacy regulation in any major jurisdiction, and compliance exposure belongs in the plan as a first-class metric, not a footnote.

GDPR and UK GDPR. On-domain cookie tracking is governed by ePrivacy and PECR and generally requires consent before the tag fires (Usercentrics). Downstream processing requires a lawful basis, typically consent or legitimate interest with a documented three-part test per the ICO. The regulatory text sits in Article 6 (six lawful bases) and Article 5 (principles). A plan that does not track consent acceptance, withdrawals, and lawful-basis provenance per activated record is not auditable.

CCPA and CPRA. Since 1 January 2023, B2B contact data in California has had full consumer rights after the B2B carve-out expired (Perkins Coie; California AG). Civil penalties run up to $7,500 per intentional violation, per consumer. That figure is the largest single reason US-facing teams pivoted measurement toward owned signals (CPPA). A plan should report on Californian contact volume, retention, and request-handling latency.

Third-party cookie status in Chrome. Google walked the deprecation back in July 2024 and confirmed in April 2025 it would not launch the user-choice prompt, reported across trade press (AdExchanger; Digiday). Safari and Firefox block third-party cookies by default (Bombora), so any plan relying on them for cross-site attribution is already broken on a material share of traffic.

US state privacy laws. The IAPP US state privacy tracker covers a moving fleet modeled on CPRA. Treating "compliant in California" as a ceiling is out of date the moment the next state ships.

Server-side tagging. Framed as a compliance upgrade because consent decisions and PII redaction can be enforced centrally rather than trusted to dozens of client-side tags (Usercentrics; Stape; Captain Compliance). Not a GDPR bypass. Requirements still apply, evidence improves.

Benchmarks that actually hold up

The space is overrun with unsourced "30% productivity uplift" and "80% of marketers agree" stats that trace back to vendor blogs without methodology. Four benchmarks hold up and are reusable as reference points.

Dreamdata's G2 finding. Comparison-page sessions on G2 influenced nearly 15% of closed-won deals per session, over 3x more than Product profile signals and 5x more than Category signals (Dreamdata). Second-party, not first-party, and a benchmark about signal-to-outcome lift, not accuracy.

Warmly's visitor de-anonymization averages. 15% of individuals and 65% of companies, per Warmly's own blog (Warmly). Vendor-published. Useful as a sanity check on the enrichment grade of identity resolution, not a neutral benchmark.

CCPA penalty ceiling. Civil penalties of up to $7,500 per intentional violation, per consumer, applied to B2B contact data in California since 1 January 2023 (Perkins Coie; California AG). The regulatory number that actually drives measurement discipline in California-exposed teams.

Safari and Firefox third-party cookie default. Both block by default (Bombora). Any plan that still assumes third-party cookie coverage is wrong on a meaningful share of B2B traffic before consent enters the picture.

Anything else framed as a benchmark in this space should be read as vendor-published.

Common failure modes in first-party intent measurement

First-party measurement programs fail in predictable ways.

Single-number accuracy reporting. A "92% accuracy" headline collapses event delivery, identity resolution, attribute accuracy, and signal-to-outcome into one figure, always the one that looks best on a slide. The fix is the four-dimension decomposition above and IBM's six-dimension data quality framework.

Mean-based latency reporting. Mean elapsed time hides the long tail where conversion damage happens. Report p50, p90, p95, p99 on every latency chart, treat p95 as the headline.

Ingestion-time vs event-time confusion. Streaming pipelines record two timestamps, source event-time and warehouse ingestion-time. Reporting on ingestion-time hides the ingestion delay itself. Snowplow's canonical event is explicit about distinguishing them.
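
Measuring the gap directly is cheap. A sketch, assuming both timestamps are carried on each warehoused event (field names illustrative):

```python
# Sketch: per-event ingestion delay, so pipeline lag can be reported
# separately from event-time latency. Field names are illustrative.
def ingestion_delay_seconds(events):
    """events: dicts with 'event_ts' and 'ingested_ts' (epoch seconds).
    Returns the sorted list of per-event delays for percentile reporting."""
    return sorted(e["ingested_ts"] - e["event_ts"] for e in events)

events = [{"event_ts": 100, "ingested_ts": 101},
          {"event_ts": 200, "ingested_ts": 260}]
print(ingestion_delay_seconds(events))  # [1, 60]
```

Once the delay is its own metric, a latency regression can be attributed to the pipeline or to the sales motion instead of being argued about.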

Consent-unaware pipelines. A CMP denial client-side that does not propagate server-side produces events that should not exist. Usercentrics calls out consent orchestration as the hardest part of server-side tagging, and the failure mode is uniformly under-reported because the events look real.

Bidstream leakage into the first-party profile. Teams commingle bought enrichment or bidstream feeds with owned signals in a single profile without tracking provenance. When California contacts land in that profile, the whole profile inherits CCPA exposure (Perkins Coie).

Last-touch attribution as the only model. B2B buying cycles span months and dozens of touchpoints, so last-touch systematically undercounts first-party signals (Dreamdata). Teams that report only last-touch see their program as weaker than it is, and leadership defunds it for the wrong reason.

Identity resolution reported as a single rate. A single match rate hides the fact that resolution is compositional, with different grades carrying different downstream confidence (Forrester; Segment).

No compliance drop rate. A plan that does not report how much data was dropped or redacted for compliance reasons is not an auditable plan, it is marketing. Programs that survive the next regulatory wave are the ones already reporting this number.

The vendor landscape around first-party measurement

Four categories matter: event collection and CDPs, warehouse substrates, activation and reverse ETL, and identification or intent platforms. Category ordering implies no endorsement. FL0 is listed in its natural category.

Event collection and CDPs

Twilio Segment, founded 2011 as a Y Combinator startup and acquired by Twilio in November 2020 (Diginomica), is the reference traditional CDP. Its tracking spec is one of the most widely implemented event models in B2B. RudderStack, founded 2019, positions as a warehouse-native composable CDP treating the warehouse as source of truth (RudderStack; on composable CDP). Snowplow is the open-source behavioral data platform with a widely-cited canonical event spec and a GitHub-hosted core.

Warehouse substrates

Snowflake, founded 2012, and Google BigQuery, generally available November 2011 (Wikipedia), are the two warehouses most commonly cited for warehouse-native programs. Databricks is the third leg on the lakehouse side. Snowflake publishes its own reference for a warehouse-native CDP.

Activation and reverse ETL

Hightouch, founded 2018 via Y Combinator (YC), is the reference reverse-ETL vendor, with its platform documentation the standard reference for warehouse-to-SaaS activation. Census, founded 2018, signed an agreement to be acquired by Fivetran in May 2025 (Fivetran; TechTarget). dbt sits underneath most warehouse-native measurement stacks as the modeling layer producing activated segments.

Identification, intent, and signal orchestration

FL0, based in Sydney, is the AI revenue engine for B2B teams that identifies in-market buyers from real-time intent signals. Warmly (YC) ships visitor identification and revenue agents. Koala pipes product and web signals to sales (Koala). June (YC) wires product events to sales-facing intent. Common Room aggregates community signals (Contrary Research). Dreamdata is the reference B2B attribution vendor. HockeyStack is the adjacent revenue analytics vendor (YC). Userled handles 1:1 on-site personalization (announcement). Vector identifies contact-level intent, with the Vector and Userled joint case study the reference for the identification-plus-personalization pattern.

How FL0 approaches first-party intent measurement

FL0 is the AI revenue engine for B2B teams. It identifies in-market buyers from real-time intent signals and acts on them automatically, and it sits in the identification and signal-orchestration category alongside Warmly, Vector, Koala, and Common Room.

Our stance is that the KPI set above is the reference, not an option: we ship every customer with tag firing rate, consent acceptance, identity resolution by grade, account coverage, signal-weighted score distribution, latency p50/p95, source-of-truth event coverage, closed-won lift vs baseline, and compliance drop rate already instrumented. A first-party program that cannot put those numbers in front of a CFO or a DPO is not a program. At FL0 we have watched two-quarter investigations into "why intent is not working" resolve in an afternoon once p95 latency was actually measured.

FL0 was founded in Sydney, Australia, named Sydney Young Startup of the Year 2021, and has been covered in the Australian Financial Review. The product is built for B2B revenue teams that want their intent signals to drive pipeline rather than sit in a dashboard. FL0 does not sell third-party lists or bidstream feeds, and the first-party-by-design posture aligns with the regulatory trajectory above. For teams rebuilding their stack around owned signals, FL0 is the measurement-first activation layer.

Limitations

This reference covers first-party B2B intent measurement as of April 2026. Several things are intentionally out of scope.

No pricing. Enterprise CDP, warehouse, reverse-ETL, and identification pricing is negotiated, moves constantly, and rarely appears on vendor pages. Pull pricing directly from vendors during an active evaluation.

No ARR, headcount, or funding totals. These move faster than a blog post can be updated, and stale figures mislead buyers. G2-style review counts are excluded for the same reason.

No independent accuracy benchmarks across identification vendors. Methodologically-published independent benchmarks for visitor-identification accuracy do not exist in public form in 2026. Averaging vendor self-reports would produce a misleading number. Warmly figures are labeled as vendor-published.

No field-test numbers on pipeline lift. Vendor-blog lift figures were dropped rather than hedged. The only pipeline-lift benchmark reproduced here is the Dreamdata G2 study, which publishes its methodology.

Non-English regulatory regimes. APAC-specific regimes (Singapore PDPA, Australian Privacy Act reforms, India DPDP) are not covered in depth. Teams operating across those geographies should run a local legal review.

Second-party intent is covered only through the Dreamdata study. G2 and TrustRadius intent are technically second-party and are included only where teams activate them alongside first-party signals.

Vendor list is not exhaustive. The landscape section is representative, not a complete market map. Inclusion is not endorsement.

Chrome third-party cookie status could change again. Google has reversed direction twice in as many years. Any durable plan should be robust to Google changing its mind one more time.

FAQ

What is first-party B2B intent data measurement? The discipline of quantifying an intent program built on signals the brand collects from its own site, product, docs, email, community, and webinars. The plan answers five questions in parallel: identity, coverage, latency, consent-adjusted volume, and downstream lift (Foundry; IAB).

How do you measure first-party B2B intent data accuracy? Decompose into four: event delivery fidelity (against server logs and Snowplow's canonical event), identity resolution accuracy (by grade, per Forrester), attribute accuracy post-enrichment (IBM's six dimensions), signal-to-outcome (matched-cohort against closed-won, per Dreamdata).

What are the canonical KPIs? Tag firing rate, consent acceptance, identity resolution by grade, account coverage, signal-weighted score distribution, latency p50/p95, source-of-truth event coverage, closed-won lift vs baseline, compliance drop rate.

Is first-party B2B data exempt from GDPR or CCPA? No. GDPR requires a lawful basis regardless of party, with Article 6 listing six and ICO guidance codifying the three-part test. CCPA B2B contact data has had full consumer rights since 1 January 2023 (Perkins Coie).

First-party vs second-party intent data? First-party is collected on owned properties. Second-party is collected by a partner (typically G2 or TrustRadius) and shared directly (Foundry).

Warehouse-native CDP, why does it matter? The warehouse is source of truth and CDP capabilities are unbundled (RudderStack; Snowflake). Every KPI becomes computable, versionable, and auditable in SQL.

What does good activation latency look like? p95 under one minute from signal to first sales action. Mean is not useful because the long tail dominates outcomes. Koala and Hightouch target sub-minute.

Why report a compliance drop rate? GDPR and CCPA guarantee some share of events will be dropped or redacted, and a plan that does not account for it is not auditable (Usercentrics).

Sources

1. Foundry, First, second, and third-party intent data, https://foundryco.com/blog/blog-first-party-second-party-third-party-intent-data-whats-the-difference/

2. IAB, Understanding the Language of Data, https://www.iab.com/blog/understanding-the-language-of-data/

3. Dreamdata, G2 intent data benchmark study, https://dreamdata.io/blog/benchmarks-report-measuring-g2-intent-data-impact-b2b-buyer-journeys

4. Snowplow, Canonical event model, https://docs.snowplow.io/docs/fundamentals/canonical-event/

5. Segment, Track spec, https://segment.com/docs/connections/spec/track/

6. Forrester, Master the identity graph, https://go.forrester.com/blogs/more-than-a-buzzword-master-the-identity-graph-to-unlock-the-value-of-identity-resolution/

7. ICO, When can we rely on legitimate interests, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/legitimate-interests/when-can-we-rely-on-legitimate-interests/

8. Perkins Coie, Compliance next steps: employment and B2B data in California, https://perkinscoie.com/insights/update/compliance-next-steps-employment-and-b2b-data-california

9. California Office of the Attorney General, CCPA, https://oag.ca.gov/privacy/ccpa

10. Usercentrics, Server-side tagging and consent, https://usercentrics.com/knowledge-hub/server-side-tagging-and-how-it-will-impact-consent/

11. IBM, Data quality, https://www.ibm.com/topics/data-quality

12. Google Privacy Sandbox, A new path for Privacy Sandbox on the web, https://privacysandbox.com/news/privacy-sandbox-update/