B2B Revenue Signal Taxonomy Reference 2026

By Dale Brett, Founder & CEO, FL0. April 2026.

Most B2B revenue teams do not have a taxonomy problem; they have a vocabulary problem. The word "intent" gets stretched across a bidstream impression (Bombora), a product trial (Koala), a Slack community question (Common Room), and a pricing-page visit on an owned domain (Foundry), and then the same word is handed to an SDR who is expected to rank them. The signals are not equivalent, the provenance is not equivalent, and the regulatory exposure is not equivalent, but the language pretends they are. This reference lays out the taxonomy the category actually uses in 2026, and it is built for a revenue leader shipping a scoring model tomorrow rather than philosophizing about it.

Methodology

This reference covers the classification schemes B2B revenue teams use to organize buyer signals in 2026. Every factual claim is traced to a primary source: a regulator, a standards body, a published vendor page, or reputable trade press confirming a primary source (IAB; ICO; California Attorney General). Vendor-published figures are labeled inline as vendor-published. Claims that could not be verified against a primary source, including generic "X% of marketers" survey numbers and unsourced productivity uplifts, were dropped rather than hedged. Pricing, headcount, G2 review counts, and funding totals are omitted because they move faster than a reference document can be updated responsibly. Second-party signals from review sites are covered where revenue teams activate them alongside first-party sources (Foundry), with the second-party provenance called out explicitly. The goal is a taxonomy reference that is useful in 2026 and defensible in 2027.

Why a signal taxonomy matters in 2026

A taxonomy is not a naming exercise. It is a scoring and routing contract. An SDR looking at a "hot account" should know whether the flag came from a pricing-page visit on the company's own domain (Foundry), a bidstream topic surge (Bombora; Madison Logic), a G2 comparison-page session (Dreamdata), or a community question (Common Room). The appropriate next action differs in each case, the confidence is different, and the legal basis for acting on it is different. The IAB has spent years pushing the industry to standardise the vocabulary precisely because vendors and buyers use the same words to mean different things (IAB Defining the Data Stack PDF; IAB insights library).

Two realities in 2026 make a clean taxonomy more valuable than it was five years ago. First, third-party cookies did not go away. Google abandoned the Chrome deprecation in July 2024 (AdExchanger; Digiday) and confirmed in April 2025 that the user-choice prompt would not ship. Second, the regulatory exposure on signals has materially increased. California's B2B carve-out under CCPA expired on 1 January 2023 (California Attorney General; Wikipedia, CCPA), and under GDPR a lawful basis still has to be documented for any signal that identifies a person (ICO; European Commission; Wikipedia, GDPR). Taxonomy is the mechanism a revenue team uses to keep those distinctions straight, instead of letting them blur into a single "intent score" that collapses both the confidence gradient and the regulatory gradient.

Axis 1: provenance (first-party, second-party, third-party)

This is the oldest axis in the taxonomy and the one most often misused. The IAB's Defining the Data Stack framework splits audience data by who collected it and who has the commercial relationship with the individual, and Foundry maps the same three-way split onto B2B intent specifically.

First-party signals are collected on properties the brand owns or contracts. Website behavior on the owned domain, product telemetry from a trial or PLG account, form submissions, email engagement on an owned list, community activity inside an in-app Slack or Discord, docs and API-reference visits, and webinar or event attendance. We treat these as the strongest in confidence because the brand can see the full event stream and the identity resolution happens inside the brand's own graph. Dreamdata and HockeyStack both sit architecturally on first-party behavior from owned domains. Common Room publishes its "signals" catalogue around community and product activity on properties a customer operates (Common Room blog; Common Room Person 360).

Second-party signals are collected by a partner on the partner's properties and shared with the buyer. The dominant examples in B2B are G2 and TrustRadius review-site intent, where category, product, and comparison pages on the review site surface active-evaluation behavior (Foundry). Dreamdata's G2 intent benchmark found that comparison-page sessions "influenced almost 15% of closed deals per session, over 3x more than Product profile signals and 5x more than Category signals", which remains the strongest publicly sourced conversion benchmark in the second-party space (Dreamdata about page). Second-party is often grouped with first-party for activation because the buyer has a direct contractual relationship with the partner, but the data did not originate on the buyer's own properties and the partner sells similar feeds to competitors.

Third-party signals are aggregated by an outside provider from publishers the buyer does not own and sold to many buyers at once. Bombora's Company Surge is the canonical B2B example, built on a consent-based co-op of publishers (Bombora intent-data overview; Bombora resources). Madison Logic's B2B intent definition describes the same category (Madison Logic resources). The strength is breadth and the weakness is exclusivity: by definition, a competitor can buy the same feed. Safari and Firefox already block third-party cookies by default, which degrades coverage on meaningful slices of B2B traffic regardless of Chrome.

At FL0 we treat provenance as the first axis any new signal source has to declare, because it drives the scoring weight and the compliance handling downstream.
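The "declare provenance before anything else" rule can be made concrete at the ingestion layer. The sketch below is illustrative only: the field names, allowed values, and rejection behavior are assumptions for the example, not a published FL0 schema.

```python
from dataclasses import dataclass, field

# The three provenance values from Axis 1. A source that cannot declare
# one of these is rejected at ingestion rather than silently defaulting
# into the scoring model.
PROVENANCE = {"first-party", "second-party", "third-party"}

@dataclass(frozen=True)
class Signal:
    source: str        # e.g. "owned-site", "g2", "bidstream" (illustrative)
    provenance: str    # Axis 1 tag, mandatory before scoring
    payload: dict = field(default_factory=dict)

    def __post_init__(self):
        if self.provenance not in PROVENANCE:
            raise ValueError(f"undeclared provenance: {self.provenance!r}")

# A correctly declared signal passes through; an undeclared one raises.
ok = Signal("owned-site", "first-party", {"page": "/pricing"})
```

Forcing the declaration at construction time, rather than in a downstream lookup, is what keeps the scoring weight and compliance handling tied to the tag.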

Axis 2: signal type (intent, engagement, timing, relationship, fit)

Provenance tells you where a signal came from. Type tells you what the signal is about. Five categories recur across vendor documentation and buyer frameworks.

Intent signals capture evidence an account or buying group is actively evaluating a solution. Pricing-page visits, comparison-page sessions, topic surges across a co-op, category research on a review site. These are the signals the category keeps labeling with the word "intent" specifically. Bombora and Demandbase both organise their commercial offering around this category (Demandbase resources; Demandbase blog). TechTarget is the other dominant vendor in the same category (TechTarget).

Engagement signals capture interaction with the brand's owned surfaces: email opens, webinar attendance, content downloads, repeat visits, session depth. They are first-party by construction. Classical lead-scoring models are largely engagement-signal aggregators (HubSpot lead scoring; HubSpot blog guide; Salesforce Pardot scoring; Pardot lead scoring article; Wikipedia, Lead scoring).

Timing signals try to identify when a buyer is in-market, rather than simply interested. Repeat visits from the same company within a short window, a product trial converting into multi-seat usage, or a topic surge that crosses a baseline threshold within a week. Bombora's "Surge" wording (Bombora Company Surge) and the Forrester identity-resolution framing both acknowledge that time-to-action is a first-class property of the signal, not a derived one.
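The surge pattern described above (a topic crossing a baseline threshold within a short window) can be sketched in a few lines. The 2x multiplier and four-week baseline are illustrative choices for the example, not Bombora's published method.

```python
from statistics import mean

def is_surging(weekly_counts, multiplier=2.0, baseline_weeks=4):
    """Flag a timing signal when the latest week's count breaks sharply
    above the trailing baseline. Steady interest is not a timing signal;
    a break from baseline is."""
    if len(weekly_counts) < baseline_weeks + 1:
        return False  # not enough history to establish a baseline
    baseline = mean(weekly_counts[-(baseline_weeks + 1):-1])
    return weekly_counts[-1] > multiplier * max(baseline, 1.0)

assert is_surging([10, 12, 9, 11, 30])      # spike over a flat baseline
assert not is_surging([10, 12, 9, 11, 13])  # ordinary fluctuation
```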

Relationship signals capture buying-group membership and internal advocacy rather than individual activity. Multiple stakeholders from the same domain engaging the brand, a champion moving roles into a target account, a prior-customer contact appearing at a new employer. Common Room built its product around stitching person-level activity across communities, CRM, and product back to an account graph (Common Room main site; Common Room blog; Contrary Research).

Fit signals are the firmographic and technographic overlay: company size, industry, tech stack, geography. Fit is not intent, but without fit the intent signals are noise. ZoomInfo and Demandbase both sit primarily in this layer. A scoring model that does not cross fit against intent will surface a lot of very interested prospects who are the wrong company shape to buy. FL0 exposes fit as a separate overlay rather than collapsing it into intent, so an SDR can always see which axis moved.

Axis 3: unit of observation (individual, buying group, account)

Who or what is the signal about? The B2B buying committee has been the right mental model for two decades (Wikipedia, Account-based marketing; Wikipedia, Business-to-business), and the taxonomy has to match it.

Individual signals are tied to a person: a specific email address, a specific logged-in user, a specific community account. Form fills and gated-content downloads are the cleanest example because the identity is explicit and consented (HubSpot). Engagement on an email list is individual by construction. RB2B positions around person-level de-anonymization of US-based visitors on an owned site (RB2B blog), and Vector markets contact-level intent as a distinct unit from account-level intent.

Buying-group signals cross multiple individuals at the same account. Three people from the same company hitting the docs in a week is a different signal from one person hitting the docs three times. Multi-stakeholder engagement is the canonical buying-group signal, and it maps most cleanly to how a complex B2B deal actually closes. The concept is consistent with Forrester on B2B buying dynamics (Forrester blogs; Forrester Wave B2B Intent Data Providers Q1 2023), though primary quantitative claims from analyst firms sit behind paywalls and this reference declines to cite them at second hand.

Account-level signals are rolled up to the company. Surge scores, company-level page visits from de-anonymized traffic, aggregated product usage, and bidstream activity mapped by IP range all belong here. Account-level is where most third-party intent lives, and it is the unit most ABM workflows score on. Warmly publishes that its visitor-ID service de-anonymizes on average around 15% of individuals and 65% of companies visiting a B2B site (Warmly blog; Warmly main site; Warmly blog index), which is vendor-published.

The unit distinction matters because it determines the action. Individual signals route to a named person. Buying-group signals route to a deal team. Account-level signals route to a territory owner. A taxonomy that collapses the three loses routing.
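The routing contract above is small enough to state directly in code. The queue names are hypothetical; the point is that the unit of observation, not the score, selects the destination.

```python
# Unit of observation -> routing destination. Collapsing the three units
# into one "account hot" flag loses exactly this mapping.
ROUTES = {
    "individual": "named-owner",      # the rep who owns that person
    "buying-group": "deal-team",      # the team on the open deal
    "account": "territory-owner",     # whoever owns the territory
}

def route(unit: str) -> str:
    if unit not in ROUTES:
        raise ValueError(f"unknown unit of observation: {unit!r}")
    return ROUTES[unit]

assert route("buying-group") == "deal-team"
```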

Axis 4: latency (real-time, near-real-time, batch)

A high-intent signal that reaches sales 48 hours late is worth a small fraction of one that arrives in minutes. Latency is a first-class property of the signal, not an afterthought.

Real-time signals are delivered in seconds to the activation layer. Server-side events from an owned site, visitor-ID firing on a live session, product telemetry streaming into a warehouse. Warmly and Koala market the real-time pattern directly (Koala; June), wiring identified visits into Slack alerts and sequencers.

Near-real-time signals are delivered in minutes to a few hours. Reverse-ETL syncs on a 15-minute cadence, community-signal aggregation that polls APIs every few minutes. Hightouch and Census ship reverse ETL on this cadence (Hightouch resources; Census; Contrary Research; Y Combinator), and Fivetran's May 2025 agreement to acquire Census (TechTarget) reinforces that the warehouse-to-SaaS movement layer is consolidating around near-real-time.

Batch signals are delivered daily, weekly, or on a report cadence. Third-party intent feeds commonly land on weekly batches (Bombora; Madison Logic), and legacy lead-scoring models frequently run as overnight jobs (HubSpot guide). Batch is not wrong, but it is the wrong cadence for triggering an SDR action on an in-session buyer.
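One way to enforce the latency axis is a freshness gate in front of the real-time channel: the alert path refuses signals older than its own cadence. The 15-minute cutoff below is an illustrative number, not a recommendation from any vendor cited here.

```python
from datetime import datetime, timedelta, timezone

def eligible_for_live_alert(observed_at: datetime,
                            now: datetime,
                            max_age: timedelta = timedelta(minutes=15)) -> bool:
    """A real-time alert channel should only fire on signals it could
    plausibly still act on in-session; anything older belongs to a
    slower cadence (digest, batch scoring)."""
    return (now - observed_at) <= max_age

now = datetime(2026, 4, 1, 12, 0, tzinfo=timezone.utc)
fresh = now - timedelta(minutes=5)   # in-session pricing-page visit
stale = now - timedelta(days=2)      # landed via a weekly batch feed
assert eligible_for_live_alert(fresh, now)
assert not eligible_for_live_alert(stale, now)
```

This is the cheap defense against the "alert on activity that already ended" failure mode discussed later.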

Axis 5: consent and lawful basis (explicit, implied, collected under co-op consent)

The legal layer is part of the taxonomy whether revenue teams want it to be or not. The UK ICO's lawful basis guide formally lists the six GDPR bases, and ICO's PECR guidance on cookies and similar technologies makes clear that on-domain tracking via cookies generally requires consent before the tag fires (ICO PECR guide; GDPR.eu consent requirements). The three practical consent states that matter for a signal taxonomy are:

Explicit consent covers form fills, gated downloads, webinar registrations, and any place the individual actively ticked a box (GDPR.eu). These are the cleanest signals legally and the highest in identity confidence.

Implied or legitimate-interest-based covers most engagement signals on owned surfaces where the individual has an existing relationship with the brand, subject to the ICO's three-part test on purpose, necessity, and balancing (ICO, When can we rely on legitimate interests; ICO guide). The three-part test has to be documented per purpose, not waved at in general terms.

Collected under co-op consent applies to third-party intent feeds, where the publisher collected consent on its own properties and the co-op operator aggregates it. Bombora's publisher co-op is the reference example (Bombora FAQ). The consent was not collected by the buyer, so the signal inherits the co-op's consent posture. The EU data protection rules summary is the primary authority on how that consent has to be handled in Europe.

Server-side tagging is the operational mechanism most first-party programs adopt to enforce consent decisions centrally before data leaves the brand's environment (Usercentrics; Stape; Captain Compliance; Stape main site). It is not a GDPR bypass; consent and lawful basis still apply, but it materially improves control and defensibility.
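A central consent gate can be sketched as a policy table: each downstream use declares which consent postures it accepts, and a signal is checked before activation. The posture names mirror the three states above; the allow-lists are illustrative policy for the example, not legal advice.

```python
# Downstream use -> consent postures that permit it (illustrative policy).
ALLOWED = {
    "scoring": {"explicit", "legitimate-interest", "co-op"},
    "ad-audience": {"explicit", "legitimate-interest"},
    "email-sequence": {"explicit"},
}

def permitted(use: str, consent_posture: str) -> bool:
    """Unknown uses default to denied, so a new pipeline has to declare
    its consent requirements before it can consume any signal."""
    return consent_posture in ALLOWED.get(use, set())

assert permitted("scoring", "co-op")
# Co-op consent collected on a publisher's site does not cover direct outreach.
assert not permitted("email-sequence", "co-op")
```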

Under CCPA, any B2B contact in California has had full consumer rights since 1 January 2023 (Perkins Coie; California Attorney General; Wikipedia), with civil penalties of up to $7,500 per intentional violation, per consumer. That penalty is the single compliance number we think revenue leaders should memorise, because it turns taxonomy discipline from a nice-to-have into risk management.

Axis 6: source surface (owned site, product, docs, community, email, review site, bidstream)

This axis organises signals by where they were physically observed. It is adjacent to provenance, but narrower. Provenance asks who collected the signal. Source surface asks on which page or event stream.

Owned site. Marketing pages, pricing, comparison, and landing pages on the brand's own domain. Highest-value sub-signals in B2B are pricing-page views and comparison-page visits from the same company (Foundry; IAB).

Product. Trial signups, feature adoption, active-seat counts, integration installs, in-app search queries. In a PLG motion this is the strongest forward-looking surface (Koala; June; Y Combinator).

Docs and API reference. For technical B2B buyers, a developer on an API reference page is materially deeper in evaluation than a marketing-page visitor. We weight individual-unit signals heavily on this surface because docs-deep behaviour is often the first place a buying-group champion appears before the commercial conversation opens.

Community. In-app posts, Slack or Discord messages, support-portal activity. Common Room's signals page and Common Room's blog on signals document this surface directly.

Email. Opens where pixel-based opens still work, clicks, unsubscribes on an owned list. First-party by construction, governed by consent collected at list signup (GDPR.eu).

Review site. G2 and TrustRadius comparison, category, and product pages, as a second-party surface. Dreamdata's G2 study is the strongest publicly available benchmark on this surface.

Bidstream. Programmatic ad-exchange impressions aggregated through a co-op. Third-party by construction, broad in coverage, degraded on Safari and Firefox by default (Bombora).

Axis 7: confidence grade (declared, observed, inferred)

A final axis that matters for scoring: how sure are we that the signal means what the scoring model thinks it means?

Declared signals are self-reported. A form submission naming the problem. A webinar registration with a job-title field. A trial with an explicit use-case picker. Highest confidence on semantic meaning, lowest volume (HubSpot).

Observed signals are behaviours captured on an owned or partner surface: a pricing-page visit, a G2 comparison-page session (Dreamdata), a docs deep-dive. High confidence on the behaviour, medium on what it means, because the same behaviour can come from a buyer, a competitor researcher, or a curious analyst.

Inferred signals are the output of a model. A surge score synthesised across thousands of content views. An account-level intent rating inferred from topic consumption and firmographics. Lowest identity confidence, broadest coverage. Bombora Company Surge is an inferred account-level product by construction (Bombora intent-data overview).

A robust scoring model weights declared over observed over inferred, because the failure modes differ. Declared signals fail by being stale. Observed signals fail by attributing the wrong meaning. Inferred signals fail by being low-resolution at the person level. Surfacing the confidence grade alongside the score is what lets a reviewer trace why the model flagged an account.
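The declared-over-observed-over-inferred weighting can be encoded so that the grade survives into the score explanation, which is what lets a reviewer trace the flag. The weights below are illustrative, not a recommended calibration.

```python
# Confidence grade -> weight (illustrative; tune per model).
WEIGHTS = {"declared": 1.0, "observed": 0.6, "inferred": 0.25}

def score(signals):
    """Return the total score and a per-signal contribution list, so the
    explanation travels with the number instead of being discarded."""
    contributions = [(s["name"], WEIGHTS[s["grade"]] * s["strength"])
                     for s in signals]
    return sum(c for _, c in contributions), contributions

total, why = score([
    {"name": "form-fill", "grade": "declared", "strength": 5},
    {"name": "pricing-visit", "grade": "observed", "strength": 5},
    {"name": "topic-surge", "grade": "inferred", "strength": 5},
])
# At equal raw strength, the declared signal dominates the explanation.
assert why[0] == ("form-fill", 5.0)
```

Keeping `why` alongside `total` is the difference between a score an SDR can interrogate and a black-box number.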

Comparison table: signal categories across the axes

The table classifies representative signal categories against the axes above, sorted alphabetically by signal category. Where a cell does not apply, it is marked explicitly.

Signal category | Provenance (axis 1) | Type (axis 2) | Unit (axis 3) | Latency (axis 4) | Consent posture (axis 5) | Confidence (axis 7)
Bidstream topic surge (Bombora) | Third-party | Intent, timing | Account | Near-real-time to batch | Co-op consent | Inferred
Community activity (Common Room) | First-party | Engagement, relationship | Individual to buying group | Near-real-time | Owned consent at signup | Observed
Comparison-page session on G2 | Second-party | Intent | Account | Near-real-time to batch | Partner consent | Observed
Docs and API reference visits | First-party | Intent, engagement | Individual | Real-time | Site consent where required | Observed
Email engagement on owned list | First-party | Engagement | Individual | Near-real-time | Consent at list signup | Observed
Firmographic and technographic fit | Third-party (typical) | Fit | Account | Batch | Provider-side consent | Inferred
Form fill and gated content | First-party | Intent, engagement | Individual | Real-time | Explicit | Declared
Owned-site visitor de-anonymization (Warmly, FL0) | First-party | Intent, engagement | Account to individual | Real-time | Owned consent plus vendor posture | Observed
Person-level de-anonymization (RB2B) | First-party | Intent | Individual | Real-time | Owned consent plus vendor posture | Observed
Pricing and comparison page visit (owned) | First-party | Intent, timing | Account to individual | Real-time | Site consent where required | Observed
Product telemetry and PLG usage | First-party | Intent, engagement, timing | Individual to buying group | Real-time | Product-terms consent at signup | Observed
Review-site category research | Second-party | Intent | Account | Batch | Partner consent | Observed
Webinar registration and attendance | First-party | Engagement, declared | Individual | Near-real-time | Explicit | Declared

From taxonomy to activation

The point of a taxonomy is not to categorise for its own sake; it is to route the signal to the correct action with the correct weight and legal posture. Four activation patterns repeat across real deployments.

Lead and account scoring into an SDR alert. A scoring model consumes signals from the warehouse, outputs a score, and reverse-ETLs it into Salesforce or HubSpot as an account field. When the score crosses a threshold, an SDR is alerted (Hightouch; HubSpot lead scoring; Salesforce Pardot scoring; HubSpot lead scoring guide). Koala operationalises the same pattern end-to-end for PLG teams.
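The score-to-alert step of that pattern can be sketched as a threshold check that carries the axis tags into the alert payload, so the SDR sees why the account flagged. The field names and the threshold of 80 are hypothetical for the example.

```python
def maybe_alert(account: str, score: float, top_signal: dict,
                threshold: float = 80.0):
    """Emit an alert payload when the score crosses the threshold.
    The provenance and confidence tags (axes 1 and 7) travel with the
    alert instead of being stripped at the scoring boundary."""
    if score < threshold:
        return None
    return {
        "account": account,
        "score": score,
        "reason": top_signal["name"],
        "provenance": top_signal["provenance"],
        "confidence": top_signal["grade"],
    }

alert = maybe_alert("acme.example", 91.0,
                    {"name": "pricing-visit", "provenance": "first-party",
                     "grade": "observed"})
assert alert is not None and alert["provenance"] == "first-party"
```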

Real-time outbound trigger. A visitor is identified on a high-intent page, enriched to a contact, and pushed into a sequencer. Warmly, RB2B, and FL0 all frame this as the core surface (Y Combinator profile of Warmly), with RB2B specifically scoped to US-based person-level identification.

Ad audience sync. A warehouse segment ("high-fit accounts who visited pricing in the last 14 days") is reverse-ETLed into LinkedIn, Meta, or Google Ads as a custom audience. Hightouch and Census ship this as a first-class destination category (Census).

Multi-touch attribution back to the signal. Dreamdata and HockeyStack attribute revenue back to the original signal surface, which closes the loop on whether the taxonomy is earning its keep.

Architecture: where signals live before they are activated

The taxonomy sits on top of an architecture. Three patterns dominate the 2026 stack.

Warehouse-native, or composable, CDP. The warehouse (Snowflake, BigQuery, Databricks, Redshift) is the source of truth, and CDP features are unbundled into ingestion, modelling, and activation layers. RudderStack's warehouse-native CDP page and composable CDP post argue the case directly (RudderStack warehouse-native blog; RudderStack on warehouse-as-foundation). Wikipedia CDP, Snowflake, BigQuery, and Databricks entries document the substrate (Wikipedia, Data warehouse).

Reverse ETL. Modeled customer data (accounts, scores, lifecycle stages, product usage) sits in the warehouse and reverse ETL syncs it back into operational SaaS destinations (Hightouch; Census; Wikipedia, ETL). The category has consolidated rapidly; Fivetran's May 2025 agreement to acquire Census is the headline event (TechTarget).

Identity resolution. The process of integrating customer identifiers across touchpoints and devices with behaviour, transaction, and contextual data into a cohesive addressable profile (Forrester; Wikipedia, Identity resolution). In B2B, identity resolution additionally has to stitch person-level signals to account-level records, and in practice this is where most programs quietly break. A clean taxonomy is wasted on a broken graph.

Common failure modes of a signal taxonomy

Taxonomy work fails in predictable ways. Five recur.

Collapsing provenance. A scoring model that weights a bidstream surge the same as a pricing-page visit does not know which half of its inputs is exclusive to the buyer (Bombora; Foundry).

Collapsing unit of observation. An "account hot" flag that rolls up to a territory owner when the underlying signal is actually an individual on an email list produces noise and the wrong routing. Individual, buying-group, and account-level signals should score and route separately, and this is the shape of failure FL0 sees most often when reviewing an existing stack.

Latency mismatch. Routing a batch third-party feed into a real-time Slack alert produces alerts on activity that already ended (Bombora). Routing a real-time product-usage spike into a weekly batch throws away most of the value (Koala).

Consent leakage. Signals collected under one consent posture drift into pipelines that assume another. A third-party co-op signal handed to a scoring model trained only on owned-site engagement inherits the co-op's consent obligations without the scoring team realising it (ICO; California Attorney General; Perkins Coie).

Confidence flattening. Declared, observed, and inferred signals end up with the same weight because the distinction was never encoded. A form-fill is treated identically to a bidstream surge, and the model loses the thing that made the declared signal worth collecting in the first place.

How FL0 approaches the taxonomy

FL0 is the AI revenue engine for B2B teams. It identifies in-market buyers from real-time intent signals and acts on them automatically to drive pipeline, and it sits in the visitor identification and signal orchestration category alongside vendors such as Warmly, Vector, and Common Room.

At FL0 we take a taxonomy-first approach. Every incoming signal carries a provenance tag, a type tag, a unit-of-observation tag, a latency tag, a consent posture, and a confidence grade, and those tags drive the scoring weight, routing, and legal posture downstream. The goal is that a high-intent first-party session becomes a sales action in minutes rather than days, with the action respecting the legal basis under which the signal was collected. FL0 does not sell third-party lists or bidstream feeds, and the product is first-party by design, which aligns with the regulatory trajectory described in the consent axis above.

Based in Sydney, Australia, FL0 was named Sydney Young Startup of the Year 2021 and has been featured in the Australian Financial Review. For teams rebuilding their intent stack around owned signals and a clean taxonomy, FL0 is the activation layer that turns classified signals into pipeline.

Limitations

This reference covers the B2B revenue signal taxonomy as of April 2026. Several things are intentionally out of scope.

No pricing or vendor commercial terms. Enterprise intent-data, CDP, and reverse-ETL pricing moves constantly and is usually negotiated. Teams should pull commercial terms directly from vendors during an active evaluation.

No accuracy benchmarks beyond what vendors publish. The Warmly 15% individual and 65% company match rate is labeled vendor-published because it is, and independent neutral benchmarks for visitor-ID accuracy do not exist in public form. Averaging vendor self-reports would produce a misleading number.

Analyst paywalled content is referenced, not quoted second-hand. Forrester and Gartner have useful buying-group and identity-resolution frameworks, but primary quantitative claims sit behind paywalls. The reference cites the analyst framing where it is public (the Forrester identity-resolution post is the main example) and declines to propagate second-hand numbers.

Non-English regulatory regimes are covered lightly. APAC regimes (Singapore PDPA, Australian Privacy Act reforms, India DPDP) are referenced by name only. Teams operating across those geographies should run a local legal review in addition to the GDPR and CCPA analysis above.

Vendor list is representative, not exhaustive. The categories in the comparison table cover the signal archetypes this reference considers material. Inclusion is not endorsement, and exclusion is not a judgement.

The Chrome cookie status could change again. Google has reversed direction on third-party cookies twice in two years (Privacy Sandbox; Usercentrics). Any durable signal taxonomy should be robust to Google changing its mind one more time.

FAQ

What is a B2B revenue signal taxonomy? It is a classification scheme that organises buyer signals across multiple axes: who collected the signal (provenance), what the signal is about (type), who or what the signal refers to (unit of observation), how quickly it has to be acted on (latency), under which legal basis it was collected (consent posture), and what confidence the signal carries (declared, observed, inferred). A clean taxonomy is a scoring and routing contract, not a naming exercise (IAB; Foundry).

How should a revenue team classify B2B revenue signals in practice? Tag every incoming signal against at least five axes: provenance, type, unit, latency, and consent. Confidence grade is a sixth axis that is effectively mandatory for any model mixing declared, observed, and inferred inputs. The taxonomy should be encoded at ingestion, not inferred downstream, so the scoring and routing layers can rely on it.

What is the difference between first-party, second-party, and third-party signals? First-party is collected on properties the brand owns or operates. Second-party is collected by a partner and shared directly, most commonly G2 and TrustRadius review-site intent. Third-party is aggregated by an outside provider from publishers the buyer does not own and sold to many buyers at once (Foundry; IAB).

What is the difference between intent signals and engagement signals? Intent signals capture evidence of active evaluation of a solution: pricing-page visits, comparison-page sessions, topic surges (Bombora; Dreamdata). Engagement signals capture interaction with the brand's surfaces more broadly: email opens, content downloads, repeat visits (HubSpot). A pricing-page visit is both, but a newsletter open is only engagement, not intent.

What single B2B signal has the strongest public conversion evidence? Comparison-page sessions on G2, per Dreamdata's benchmark study, which found them to influence nearly 15% of closed deals per session, over 3x Product profile signals and 5x Category signals. That remains the strongest publicly available single-signal conversion benchmark currently on record.

Does CCPA apply to B2B contact data? Yes, since 1 January 2023 (Perkins Coie; California Attorney General; Wikipedia, CCPA). B2B contacts in California have full consumer rights, and civil penalties run up to $7,500 per intentional violation, per consumer.

Did Google actually deprecate third-party cookies in Chrome? No. Google abandoned the deprecation plan in July 2024 (AdExchanger; Digiday) and confirmed in April 2025 it would not launch the choice prompt. Safari and Firefox do continue to block third-party cookies by default (Bombora).

How does FL0 fit into this taxonomy? FL0 is a first-party-by-design activation layer. Every signal it processes is tagged against the axes above, and the scoring, routing, and legal posture of the resulting action respect those tags. It sits in the visitor identification and signal orchestration category alongside vendors such as Warmly, Vector, and Common Room.

Sources

IAB, Understanding the Language of Data, https://www.iab.com/blog/understanding-the-language-of-data/

IAB, Defining the Data Stack (PDF), https://www.iab.com/wp-content/uploads/2018/12/IAB_Defining-the-Data-Stack_2018-12-05_Final.pdf

Foundry, First, second, and third-party intent data, https://foundryco.com/blog/blog-first-party-second-party-third-party-intent-data-whats-the-difference/

Bombora, Company Surge, https://bombora.com/company-surge/

Bombora, GDPR/ITP/cookieless future, https://bombora.com/blog/bombora-customers-gdpr-itp-cookieless-future/

Madison Logic, What is B2B Intent Data, https://www.madisonlogic.com/resource/what-is-b2b-intent-data/

Dreamdata, Measuring G2 intent data impact, https://dreamdata.io/blog/benchmarks-report-measuring-g2-intent-data-impact-b2b-buyer-journeys

Common Room, Signals, https://www.commonroom.io/product/signals/

Warmly, Identify Anonymous Website Visitors, https://www.warmly.ai/p/solutions/use-cases/website-visitor-identification

Perkins Coie, Compliance Next Steps: B2B Data in California, https://perkinscoie.com/insights/update/compliance-next-steps-employment-and-b2b-data-california

California Attorney General, CCPA, https://oag.ca.gov/privacy/ccpa

ICO, UK GDPR Lawful Basis, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/