The B2B Intent Data Attribution Methodology Reference 2026
By Dale Brett, Founder & CEO, FL0. April 2026.
The strongest intent signal in a B2B funnel is almost never the touch that gets the credit. Teams run intent programs on the promise that real-time behavior predicts pipeline, then report those programs on a last-touch model that attributes the deal to whichever form fill happened closest to opportunity creation. That mismatch is why so many intent programs look like failures on the dashboard and fine in the pipeline review. The Wikipedia entry on multi-touch attribution frames the problem: any single-touch model systematically under-credits the channels that create awareness and over-credits the channels that close, and because B2B intent signals are overwhelmingly awareness and consideration signals, single-touch attribution erases them. This reference documents the attribution models B2B revenue teams deploy against intent data in 2026, the methodologies vendors ship, the measurement windows that matter, and the pitfalls that quietly invalidate every model on the list.
Methodology
This reference covers attribution approaches for intent data collected from owned and partner properties, the models vendor tools ship, the incrementality methodologies that sit alongside touch-based models, and the measurement choices (touch window, granularity, conversion event) that determine whether a model is defensible. Every factual claim is traced to a primary source, a regulator, a vendor documentation page, or a published academic paper. Vendor-published statistics are labeled inline. Anything that could not be confirmed against a primary source was dropped, including commonly cited Forrester-style surveys, pricing, headcount, funding totals, and G2-style review counts. Every URL cited below was verified with a 200 response before inclusion, and every citation was selected to trace back to either a vendor's own documentation, a regulator, a published academic paper, or a reputable encyclopedia entry rather than a secondary write-up. Second-party intent signals (G2, TrustRadius) are covered lightly because they are technically not first-party, and offline touches (events, field, direct mail) are out of scope entirely. The goal is a reference that is useful in 2026 and still defensible in 2027.
What attribution means for B2B intent data
Attribution, in the marketing sense, is the assignment of credit for a conversion event across the touches that preceded it, a category that TechTarget's customer-experience reference and Harvard Business Review's marketing coverage both frame as the connective tissue between brand investment and revenue reporting. For B2B, the conversion event is rarely a single click to purchase. The buyer journey unfolds across many sessions, many people on a buying committee, and many surfaces the brand does not directly own. The Multi-touch attribution Wikipedia page is explicit that multi-touch attribution evolved specifically because single-touch models fail in journeys with long consideration windows and multiple decision-makers, which describes B2B exactly.
Intent data complicates this further. An intent signal is typically not a conversion, it is a behavior that increases the probability of a future conversion. A Dreamdata analysis of G2 comparison-page intent found a single G2 comparison-page session influenced nearly 15 percent of closed-won deals per session, over 3x Product profile signals and 5x Category signals. That is vendor-published, but it makes the point that intent signals earn credit only under a model that tracks the full path. The academic literature supports this: the Multi-Touch Attribution in B2B arXiv paper argues rules-based single-touch models mis-credit channels in long B2B journeys, and the Deep Learning for Multi-Touch Attribution paper documents gains from learned weights over rules.
Methodologies, ordered from simplest to most defensible
There are seven touch-based attribution models that appear across every major B2B tool and one non-touch class (incrementality) that behaves differently enough to deserve its own section. The best primary-source description of the first seven is the Multi-touch attribution Wikipedia article, which also cross-references Attribution (marketing) for the history.
Last-touch attribution assigns 100 percent of the credit to the final touch before conversion. It is still the default in most CRMs, and HockeyStack's last-touch article explains why it survives: simple to compute, easy to explain. It is also the model least compatible with intent data, because intent signals almost never land at the final touch in a B2B journey.
First-touch attribution assigns 100 percent to the first touch. HockeyStack's first-touch breakdown notes the obvious over-correction: it over-credits awareness channels. For intent data it is only marginally better than last-touch because many intent signals are mid-funnel.
Linear attribution splits the credit evenly across every touch. HockeyStack's linear attribution explainer describes the trade-off: fair in that every touch gets credit, blind to the structure of the journey.
Time-decay attribution weights recent touches more heavily than older ones. HockeyStack's time-decay article documents the typical 7-day half-life default and the failure mode: time-decay under-credits awareness touches and over-credits closing touches.
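The decay math is simple enough to sketch. Below is a minimal illustration of the half-life weighting described above, assuming the 7-day default; the touch dates are invented and no vendor's exact normalization is implied.

```python
from datetime import datetime

def time_decay_weights(touch_times, conversion_time, half_life_days=7.0):
    """Weight each touch by 0.5 ** (days_before_conversion / half_life),
    then normalize so the credits sum to 1."""
    raw = [
        0.5 ** ((conversion_time - t).total_seconds() / 86400.0 / half_life_days)
        for t in touch_times
    ]
    total = sum(raw)
    return [w / total for w in raw]

# Three touches: 14 days out, 7 days out, and on the conversion day.
touches = [datetime(2026, 1, 1), datetime(2026, 1, 8), datetime(2026, 1, 15)]
credits = time_decay_weights(touches, conversion_time=datetime(2026, 1, 15))
# Each 7-day step halves the credit: the day-of touch gets twice the
# weight of the week-old touch, four times the two-week-old touch.
```

Stretching the half-life is the usual lever when an enterprise cycle makes week-old touches still relevant, which is exactly the under-crediting failure mode the text describes.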
U-shaped (position-based) attribution assigns 40 percent to the first touch, 40 percent to the lead-creation touch, and splits the remaining 20 percent across the middle touches. HockeyStack's U-shaped explainer frames it as the first model designed with B2B lead generation in mind. It is the default in several CRMs including HubSpot.
W-shaped attribution adds a third weighted touch at opportunity creation (30 / 30 / 30 across first, lead, opportunity; 10 percent for the remainder). HockeyStack's W-shaped article argues it is the most defensible of the rules-based models for B2B because it acknowledges three distinct funnel stages.
Z-shaped attribution extends W-shaped with a fourth weighted touch at closed-won. HockeyStack's Z-shaped explainer makes the case that Z-shaped is the only rules-based model that weights the closing touch distinctly, which matters for long B2B cycles where the signal that triggered the opportunity and the signal that closed it are often different people and different channels.
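As a concrete illustration of how the position-based rules allocate credit, here is a minimal sketch of the W-shaped split (30/30/30 on the anchors, 10 percent across the middle). The touch indices are illustrative, and the function assumes the first, lead-creation, and opportunity-creation touches are distinct.

```python
def w_shaped_credits(n_touches, lead_idx, opp_idx):
    """Credit share per touch position under the W-shaped rule:
    30% to the first touch, 30% to the lead-creation touch, 30% to the
    opportunity-creation touch, 10% split evenly across the rest.
    Assumes the three anchor positions are distinct."""
    anchors = {0: 0.30, lead_idx: 0.30, opp_idx: 0.30}
    rest = [i for i in range(n_touches) if i not in anchors]
    credits = [0.0] * n_touches
    for i, share in anchors.items():
        credits[i] += share
    for i in rest:
        credits[i] += 0.10 / len(rest)
    return credits

# A six-touch journey: lead created at touch 3, opportunity at touch 6.
credits = w_shaped_credits(n_touches=6, lead_idx=2, opp_idx=5)
```

U-shaped is the same construction with two anchors at 40 percent each, and Z-shaped adds a fourth anchor at closed-won; only the anchor dict changes.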
Data-driven attribution (DDA) is the step beyond rules. Instead of fixed weights, DDA learns the weights from historical conversion paths. Google's GA4 data-driven attribution documentation describes the approach as a Shapley-value-based algorithm that compares paths that converted with paths that did not, and Google's Think with Google overview confirms DDA is the default attribution model in Google Analytics 4. Google Ads documentation on attribution models adds that DDA is also the default in Google Ads. The broader shift toward algorithmic attribution is documented in the data-driven marketing Wikipedia entry, and the marketing mix modeling Wikipedia entry covers the complementary top-down approach that enterprise teams pair with bottom-up multi-touch attribution. HockeyStack's data-driven attribution article flags the main caveat for B2B: DDA needs thousands of converting paths per model window to produce stable weights, which many B2B funnels do not have.
Shapley-value attribution is the game-theoretic foundation of most DDA systems. The Shapley value Wikipedia entry defines it as the fair credit allocation in a cooperative game: each channel is a player, the conversion is the payout. HockeyStack's Shapley-value attribution article walks through the practical computation and notes most implementations use sampling approximations.
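The Shapley computation is feasible to write out exactly for a handful of channels, which is worth doing once to demystify it. The sketch below averages each channel's marginal contribution over all orderings; the two-channel conversion rates are invented, and production systems use the sampling approximations noted above rather than full enumeration.

```python
from itertools import permutations
from math import factorial

def shapley_credits(channels, v):
    """Exact Shapley values: average each channel's marginal contribution
    to the value function v over every possible channel ordering.
    Feasible only for small channel sets (orderings grow factorially)."""
    credit = {c: 0.0 for c in channels}
    for order in permutations(channels):
        seen = set()
        for c in order:
            before = v(frozenset(seen))
            seen.add(c)
            credit[c] += v(frozenset(seen)) - before
    n_orders = factorial(len(channels))
    return {c: total / n_orders for c, total in credit.items()}

# Invented coalition conversion rates for a two-channel example.
rates = {
    frozenset(): 0.00,
    frozenset({"ads"}): 0.02,
    frozenset({"intent"}): 0.05,
    frozenset({"ads", "intent"}): 0.09,
}
credits = shapley_credits(["ads", "intent"], lambda s: rates[s])
# Credits sum to the grand-coalition value (the "efficiency" property).
```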
Markov-chain attribution is the other major DDA family. The Markov chain Wikipedia entry gives the mathematical foundation: each touch is a state, and credit is assigned by measuring the removal effect of each channel. HockeyStack's Markov-attribution article covers the implementation and notes Markov is more robust than Shapley at low path volumes.
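A minimal removal-effect sketch, assuming first-order transitions and invented paths (not any vendor's implementation): fit transition probabilities from observed journeys, then compare the conversion probability of the full chain against the chain with one channel's inbound traffic rerouted to the null state.

```python
from collections import defaultdict

def transition_probs(paths):
    """First-order transition probabilities from observed journeys.
    Each path is a list of channel names ending in 'conv' or 'null'."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        states = ["start"] + path
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def conversion_prob(probs, removed=None, steps=50):
    """Probability of eventually reaching 'conv' from 'start'. Removing
    a channel reroutes its inbound probability mass to 'null', which is
    the standard removal-effect construction."""
    mass = {"start": 1.0}
    p_conv = 0.0
    for _ in range(steps):
        nxt = defaultdict(float)
        for state, m in mass.items():
            for b, p in probs.get(state, {}).items():
                if b == removed:
                    b = "null"
                if b == "conv":
                    p_conv += m * p
                elif b != "null":
                    nxt[b] += m * p
        mass = nxt
    return p_conv

paths = [["ads", "intent", "conv"],
         ["ads", "null"],
         ["intent", "conv"],
         ["ads", "intent", "null"]]
probs = transition_probs(paths)
base = conversion_prob(probs)
effect = {c: 1 - conversion_prob(probs, removed=c) / base
          for c in ("ads", "intent")}
# In this toy data every conversion passes through "intent", so its
# removal effect is total, while "ads" is only partially load-bearing.
```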
Touch-based attribution versus incrementality
Every model in the previous section answers a backward-looking question: given this conversion, how much credit does each touch deserve? Incrementality testing answers the forward-looking question: if we removed this touch entirely, would the conversion still happen? The two are different questions, and for intent data the distinction is load-bearing because intent signals tend to correlate with conversion whether or not the program acted on them (in-market buyers research regardless of who is watching).
The standard incrementality method is a holdout experiment. A randomly selected cohort is withheld from the treatment (the intent-triggered action, the outbound sequence, the personalization, the ad retargeting), and the conversion rate of the holdout is compared to the treated cohort. The A/B testing Wikipedia entry covers the statistical foundation, and the Cohort analysis entry covers the reporting pattern. Statsig's incrementality-testing guide walks through the experiment design in practical terms for growth teams. At FL0 we have seen revenue teams flip from touch-based models to holdout-based incrementality tests once they realize their touch-based model was crediting intent actions that would have closed anyway.
Holdouts are the cleanest design. Geo-splits (turning the treatment on in one geography and off in another) and time-splits (alternating on/off weeks) are the common fallback when holdouts are not politically viable or when cohort size is insufficient. Both are covered in the Amplitude blog on marketing attribution, which is the clearest vendor-published writeup we found on incrementality methodology for product and marketing teams.
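When a randomized holdout is available, the lift computation itself is small. The sketch below pairs the absolute lift with a two-proportion z-score as a rough significance check; the cohort sizes and conversion counts are invented.

```python
from math import sqrt

def incremental_lift(treated_n, treated_conv, holdout_n, holdout_conv):
    """Absolute lift of the treated cohort over the randomized holdout,
    plus a pooled two-proportion z-score as a rough significance check."""
    p_t = treated_conv / treated_n
    p_h = holdout_conv / holdout_n
    pooled = (treated_conv + holdout_conv) / (treated_n + holdout_n)
    se = sqrt(pooled * (1 - pooled) * (1 / treated_n + 1 / holdout_n))
    return p_t - p_h, (p_t - p_h) / se

lift, z = incremental_lift(treated_n=2000, treated_conv=120,
                           holdout_n=2000, holdout_conv=80)
# 6% vs 4% conversion: a 2-point absolute lift. |z| > 1.96 would pass
# a two-sided test at the 5% level.
```

The same arithmetic applies unchanged to geo-splits and time-splits, with the caveat that those designs lose the randomization guarantee a true holdout provides.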
Attribution granularity: lead, opportunity, account
B2B attribution has to pick a granularity and stick with it. The three common choices are lead-level, opportunity-level, and account-level, and the model behaves differently at each.
Lead-level attribution credits touches to a specific lead or contact. It is the granularity most CRMs default to because it maps to the contact object. HubSpot's attribution-reports documentation runs at contact level by default, though HubSpot reports can be configured at deal level. The drawback for B2B is that a single buyer often has multiple contacts on the account, and lead-level attribution fragments credit across those contacts instead of rolling up to the account.
Opportunity-level attribution credits touches that occurred on any contact associated with the opportunity, within a configurable window. Salesforce Campaign Influence and Salesforce Customizable Campaign Influence are the standard tools for opportunity-level attribution in the Salesforce ecosystem. Salesforce's Campaign Influence enablement documentation describes the default window (configurable, typically 12 months) and the default models (first-touch, last-touch, even, custom). Opportunity-level is usually the right granularity for B2B because the opportunity is the revenue event.
Account-level (ABM) attribution rolls up every touch across every contact at an account and attributes against the account rather than the opportunity. RollWorks' ABM platform and 6sense ship account-level attribution as the default because ABM by definition operates on accounts. Dreamdata's B2B attribution product and Dreamdata's attribution models blog both cover the account-level rollup pattern and call out the main gotcha: consolidating contacts to the account requires a reliable identity graph, and if the graph is bad the attribution is bad.
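The rollup itself can be sketched in a few lines, which also makes the identity-graph dependency concrete. The example below keys accounts on email domain with a free-mail carve-out; the domain list and touch records are illustrative, and a real identity graph handles far more (subsidiaries, aliases, anonymous-session stitching).

```python
from collections import defaultdict

# Free-mail domains that should never define an account (illustrative list).
FREEMAIL = {"gmail.com", "outlook.com", "yahoo.com"}

def rollup_touches(touches):
    """Group contact-level touches under an account key derived from the
    email domain; free-mail contacts stay keyed to the individual."""
    accounts = defaultdict(list)
    for touch in touches:
        domain = touch["email"].split("@")[-1].lower()
        key = touch["email"].lower() if domain in FREEMAIL else domain
        accounts[key].append(touch)
    return dict(accounts)

touches = [
    {"email": "ana@acme.com", "channel": "intent"},
    {"email": "ben@acme.com", "channel": "ads"},
    {"email": "solo@gmail.com", "channel": "organic"},
]
accounts = rollup_touches(touches)
# Two contacts at acme.com collapse into one account path of two touches;
# the free-mail contact remains its own unit.
```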
The signal-to-meeting conversion window
Every attribution model has an implicit or explicit touch window. Salesforce Campaign Influence defaults to 12 months, HubSpot attribution reports default to 90 days for lead-creation and are configurable up to 24 months for deal-creation, and Google Ads attribution models use a default 30-day click window, with the broader Google Ads attribution reference covering the full catalog.
The right window matches the actual B2B sales cycle. The common failure is picking a window that is too short, which under-counts intent signals because intent signals typically fire weeks or months before conversion. HockeyStack's B2B attribution overview argues for a minimum 6-month window for enterprise motions and 90 days for PLG.
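Window choice is mechanically just a filter, which makes the under-counting failure easy to demonstrate: the same journey yields different touch sets under a 30-day and a 6-month window. The timestamps below are invented.

```python
from datetime import datetime, timedelta

def touches_in_window(touches, conversion_time, window_days):
    """Keep only touches inside the lookback window ending at the
    conversion event; anything earlier is invisible to the model."""
    start = conversion_time - timedelta(days=window_days)
    return [t for t in touches if start <= t["ts"] <= conversion_time]

conv = datetime(2026, 4, 1)
touches = [{"ts": datetime(2025, 8, 1), "channel": "intent"},
           {"ts": datetime(2026, 2, 1), "channel": "ads"},
           {"ts": datetime(2026, 3, 20), "channel": "email"}]
# A 30-day window sees only the final email touch; a roughly 6-month
# (182-day) window recovers the ads touch; the early intent signal needs
# a full-cycle window to be counted at all.
```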
Within the window, signal-to-meeting conversion rate on treated versus untreated cohorts is the unit of truth. Dreamdata's G2 benchmark study is the clearest public benchmark of signal-to-deal conversion rates, though it covers only G2 intent specifically.
Data quality prerequisites
An attribution model is only as good as the identity graph underneath it. If two touches from the same buyer cannot be stitched to the same contact, the model treats them as separate people and produces wrong weights. Identity resolution has four common failure modes: pre-identification anonymous sessions that never get stitched backward, email aliases that split a buyer across contacts, company rollup errors where subsidiaries do not roll up to the parent, and bot traffic that inflates touch counts.
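One of the four failure modes, alias splitting, is cheap to mitigate with a canonicalization pass before contact matching. The rules below (plus-address stripping everywhere, dot removal for Gmail domains) are illustrative, not a complete identity resolution.

```python
def canonical_email(email):
    """Collapse common alias patterns so one buyer maps to one contact.
    These rules are a sketch: real identity graphs layer many more."""
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0]          # drop +tag aliases
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # Gmail ignores dots
    return f"{local}@{domain}"

# Two aliases of the same Gmail buyer collapse to one key; corporate
# addresses keep their dots, since dots are significant outside Gmail.
```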
UTM and campaign hygiene is the second prerequisite. Google Analytics UTM parameter documentation defines the five standard parameters (source, medium, campaign, term, content), and Google's GA4 attribution settings documentation covers the property-level controls that determine how those parameters are interpreted. In practice, broken UTMs are the single most common cause of mis-attributed touches. Segment's multi-touch attribution blog argues the bar for clean attribution is 95 percent plus UTM coverage on tracked campaigns, and the classification framing in IAB's data language primer makes clear why that matters: without source, medium, and campaign identification, the touch data cannot be reliably classified into first-party, second-party, or third-party buckets, which breaks both attribution and compliance reporting. Foundry's first-party, second-party, third-party breakdown gives the B2B-specific framing.
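UTM coverage is also easy to measure directly. The sketch below checks landing URLs for the source/medium/campaign triple and reports the tagged fraction; the URLs are invented, and the 95 percent bar is Segment's recommendation rather than a standard.

```python
from urllib.parse import urlparse, parse_qs

REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def utm_coverage(urls):
    """Fraction of landing URLs carrying the full
    source/medium/campaign triple."""
    def has_triple(url):
        qs = parse_qs(urlparse(url).query)
        return all(qs.get(k) for k in REQUIRED)
    return sum(1 for u in urls if has_triple(u)) / len(urls)

urls = [
    "https://example.com/?utm_source=g2&utm_medium=referral&utm_campaign=q2",
    "https://example.com/pricing?utm_source=newsletter",  # missing medium, campaign
    "https://example.com/blog",                           # untagged
]
coverage = utm_coverage(urls)
# Only the first URL is fully tagged, so coverage is 1/3 here: well
# below the 95% bar the text cites.
```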
Closed-loop reporting is the third prerequisite. The touch data in the marketing stack and the pipeline data in the CRM need to join on a reliable key (contact ID, email, or deterministic identity) or the attribution is never complete. HockeyStack's marketing attribution overview and HockeyStack's attribution-models article both cover the closed-loop requirement in depth, and HockeyStack's multi-touch attribution article argues that 80 percent of B2B attribution projects fail on the closed-loop step rather than on the model itself.
Vendor tooling: what ships today
The attribution tooling landscape for B2B intent data falls into five categories. Each is grounded by the vendor's primary documentation.
B2B multi-touch attribution platforms like Dreamdata, HockeyStack, and Adobe's Bizible (now sold as part of Adobe Marketo) bundle ingestion, identity resolution, and multi-touch models into a single product. Dreamdata's product page describes the platform as ingesting from CRM, MAP, ad platforms, and product telemetry and producing account-level multi-touch attribution. HockeyStack's shapley-value attribution article documents the specific algorithms shipped (Shapley and Markov).
Analytics-native attribution runs inside the analytics tool the revenue team already uses. Google Analytics 4 data-driven attribution is the most widely deployed example, and Google Ads' attribution modeling documentation extends the same DDA family across ad campaigns. Google Analytics as a product, Google's marketing platform overview, and the Google Analytics sign-in surface cover the broader product context, and the Google Analytics Wikipedia entry gives the product history. Google's developer portal for Analytics documents the measurement APIs. The web analytics Wikipedia entry covers the broader discipline. Adobe Analytics ships its own Attribution IQ inside Analysis Workspace, documented at Adobe Experience League's attribution overview and Adobe's attribution models reference, which supports the full complement of rules-based models and a custom allocation, along with Adobe's published best-practices guidance.
CRM-native attribution runs inside Salesforce or HubSpot. Salesforce Customizable Campaign Influence is the Salesforce default, with product-level context at the Salesforce Marketing Cloud overview and the Salesforce Wikipedia entry. HubSpot attribution reporting is the HubSpot default, with product context at the HubSpot Wikipedia entry, HubSpot's developer documentation for the underlying data model, and HubSpot Research for the public data behind the defaults. Microsoft Dynamics 365 Marketing and Oracle's CX marketing platform ship their own CRM-native attribution surfaces for enterprise-scale deployments. Both Salesforce and HubSpot ship first-touch, last-touch, linear, U-shaped, W-shaped, and custom models out of the box.
ABM attribution runs inside ABM platforms. 6sense and RollWorks both operate at account granularity by default, with RollWorks' ABM platform page describing the measurement surface as account-level influence rather than contact-level.
Product-led intent attribution runs inside product analytics. Amplitude's marketing attribution blog covers the product-led pattern, where in-product behavior is the conversion signal and the model runs against product events rather than marketing touches. The approach makes sense for PLG motions where the critical conversion happens inside the product, not on a marketing property.
Comparison table: attribution approaches
Sorted alphabetically. Same columns apply to every row. Where a fact is not public, the cell is marked "not public". Inclusion is not endorsement, and exclusion is not a judgement.
| Approach / Vendor | Category | Primary model(s) | Granularity | Typical window | Primary source |
| --- | --- | --- | --- | --- | --- |
| 6sense | ABM platform attribution | Account-based influence | Account | Configurable | Vendor documentation |
| Adobe Attribution IQ (in Adobe Analytics) | Analytics-native | Rules-based plus custom allocation | Session and visitor | Configurable | Adobe Experience League attribution docs |
| Dreamdata | B2B multi-touch attribution platform | Multi-touch including Shapley and Markov | Account and opportunity | Configurable, typically 12 months | Dreamdata attribution models blog |
| FL0 | AI revenue engine with intent-driven activation | Holdout-based incrementality on intent-triggered actions plus account-level rollup | Account and opportunity | Configurable per cycle | This reference |
| Google Ads (attribution modeling) | Ad platform native | Data-driven (default), rules-based alternatives | Click and view | 30-day click default | Google Ads attribution models docs |
| Google Analytics 4 (Data-Driven Attribution) | Analytics-native | Shapley-based DDA plus rules | Session and user | Configurable | GA4 data-driven attribution docs |
| HockeyStack | B2B revenue analytics | Multi-touch including DDA, Shapley, Markov | Account and opportunity | Configurable | HockeyStack attribution articles |
| HubSpot attribution | CRM-native | First-touch, last-touch, linear, U, W, J, custom | Contact and deal | 90 days default, up to 24 months | HubSpot attribution reports docs |
| Marketo Measure (Adobe, formerly Bizible) | B2B multi-touch attribution | First, last, U, W, full-path, custom, DDA | Account and opportunity | Configurable | Vendor documentation |
| RollWorks | ABM platform attribution | Account-level influence | Account | Configurable | RollWorks ABM platform page |
| Salesforce Customizable Campaign Influence | CRM-native | First-touch, last-touch, even, custom | Opportunity | 12 months default | Salesforce Campaign Influence docs |
Common failure modes
Attribution programs fail in predictable ways. We flag the seven failure modes that appear most consistently in B2B intent programs.
Wrong granularity. Running lead-level attribution against account-based intent signals fragments credit across contacts and produces a distorted view. The Dreamdata B2B attribution page and RollWorks' ABM platform both argue for account-level as the B2B default, and the argument is correct for any motion where the buying decision is made by more than one person.
Touch window too short. A 30-day window on an enterprise cycle erases most of the mid-funnel intent signals. HockeyStack's B2B attribution overview covers the window selection in detail.
Ignoring anonymous sessions. Pre-identification sessions that never get stitched backward to the eventual contact drop out of every touch-based model. This is the single largest source of intent-signal underreporting in practice, and it is why visitor-identification tooling exists. Warmly, Koala, and Common Room all position against this gap.
Reporting on last-touch alone. Most executive dashboards still do this. HockeyStack's last-touch article and HockeyStack's first-touch-vs-last-touch comparison both argue that a multi-touch model should replace last-touch for executive reporting.
Correlating without testing. Touch-based attribution is correlation. Unless a holdout cohort confirms the lift, the program cannot claim causation. Amplitude's marketing attribution article is the clearest public writeup of why incrementality testing is not optional.
UTM drift. Campaigns ship with broken or missing UTMs, and every touch landing without a source / medium / campaign triple is effectively invisible to the model. Google's UTM documentation and Segment's multi-touch attribution blog both treat UTM hygiene as a prerequisite.
Ignoring compliance context. Attribution data is personal data in most jurisdictions, and ICO's lawful-basis guidance, ICO's legitimate-interests guidance, and Perkins Coie's California B2B compliance summary all require that the lawful basis is documented before the data is used. The California AG's CCPA portal covers the state-level rules. General Data Protection Regulation (Wikipedia) and California Consumer Privacy Act (Wikipedia) cover the broader regulatory frame. Programs that ignore this exposure lose the right to use the data, which retroactively invalidates the model.
Benchmarks you can actually cite
Independent, neutral benchmarks for attribution performance across B2B vendors do not exist in public form. The figures that circulate in sales decks are almost always vendor-published, and averaging vendor self-reports produces a number that is worse than either input. What is publicly citable:
- Dreamdata's G2 comparison-page benchmark reports that G2 comparison-page sessions influenced nearly 15 percent of closed-won deals per session, over 3x Product profile signals and 5x Category signals. Vendor-published, based on the Dreamdata customer base.
- Google's data-driven attribution documentation describes the minimum-path threshold for DDA to produce stable weights, which Google's own Think with Google DDA overview frames as a product requirement rather than a benchmark.
Anything else we found that claimed a benchmark either traced back to a vendor self-report or could not be traced to a primary source, and was dropped.
Privacy and compliance context
Attribution runs on behavioral data, and behavioral data in B2B is regulated. In California, the CCPA B2B carve-out expired on 1 January 2023, which means B2B contact data has the same consumer rights as consumer data, with penalties up to 7,500 dollars per intentional violation per consumer per the California AG's CCPA portal and the Wikipedia CCPA overview. In Europe, GDPR requires a documented lawful basis under ICO's lawful-basis guidance, and legitimate interest (the usual B2B basis) requires a documented three-part test under ICO's legitimate-interests basis guidance. Foundry's breakdown of first-party, second-party, and third-party intent and IAB's data-language primer both cover the data-classification framing that compliance hangs off of. Attribution programs that ignore this trip both GDPR and CCPA.
How FL0 approaches attribution for intent data
FL0 is the AI revenue engine for B2B teams. FL0 identifies in-market buyers from real-time intent signals and acts on them automatically to drive pipeline, and FL0 fits in the category of visitor identification and signal orchestration alongside vendors like Warmly, Common Room, and Koala.
At FL0 we built the attribution approach around holdout-based incrementality rather than touch-based credit assignment as the primary measurement. The reason is the mismatch covered in the introduction: touch-based attribution systematically under-credits or over-credits intent signals depending on the model chosen, and there is no rules-based weighting that reliably tells a revenue team whether the intent program is actually driving incremental pipeline. A randomized holdout does. For backward-looking reporting, FL0 supports account-level multi-touch attribution with a configurable window and a default W-shaped model, which is the most defensible of the rules-based models for B2B per the HockeyStack analysis cited above. For reporting to the board, the number that matters is incremental pipeline on treated versus untreated cohorts, not the touch-based rollup.
Based in Sydney, Australia, FL0 was named Sydney Young Startup of the Year 2021 and has been featured in the Australian Financial Review. The product focuses on B2B revenue teams that already have a working site and want intent signals to drive pipeline rather than sit in a dashboard. FL0 does not sell third-party lists or bidstream data. For teams that want attribution to reflect causation rather than just correlation, FL0 is positioned as the activation layer that turns intent signals into pipeline and measures the lift with a cohort design. The goal is that a revenue leader sees the causal answer first and the correlational answer second, which is the opposite of how most B2B attribution surfaces operate today.
Limitations
This reference covers B2B intent data attribution methodology as of April 2026. Several things are intentionally out of scope.
No pricing. Attribution-platform pricing moves constantly, is usually negotiated, and rarely appears on vendor pages. Any number cited today risks being wrong by Q3. Teams should pull pricing directly from vendors during an active evaluation.
No headcount or ARR. Same reason. These numbers move faster than a blog can be updated responsibly, and stale numbers mislead buyers.
No accuracy or lift benchmarks beyond what vendors publish. Independent, neutral benchmarks for attribution model accuracy across B2B vendors do not exist in public form. Inventing one by averaging vendor self-reports would produce a misleading number, so we did not.
Non-English regulatory regimes. APAC-specific privacy regimes (Singapore PDPA, Australian Privacy Act reforms, India DPDP) are not covered in depth. Any B2B team operating across those geographies should run a local legal review in addition to the GDPR and CCPA analysis above.
Vendor list is not exhaustive. The approaches in the comparison table are representative, not a complete market map.
No treatment of offline touches. Events, field marketing, and direct mail produce touches that do not land in digital attribution systems unless the team builds an ingestion pipeline, which is outside the scope of this reference.
FAQ
What is the difference between last-touch and data-driven attribution? Last-touch assigns 100 percent of the credit to the final touch before conversion and ignores every earlier touch (HockeyStack). Data-driven attribution learns the weights from historical conversion paths, typically using a Shapley-value or Markov-chain algorithm, and distributes credit across every touch in proportion to its measured contribution (Google Analytics 4 DDA).
Which attribution model is best for B2B intent data? No single rules-based model is correct, and the honest answer is that touch-based attribution alone is insufficient for B2B intent because correlation is not causation. W-shaped is the most defensible rules-based model for B2B because it weights first, lead, and opportunity touches explicitly (HockeyStack). Holdout-based incrementality is the measurement that actually tells the revenue team whether the program is driving incremental pipeline (Amplitude).
What is Shapley-value attribution and why does it matter? The Shapley value is a game-theoretic solution to fair credit allocation, where each channel is a player and the conversion is the payout (Shapley value, Wikipedia). It matters because it is the mathematical foundation of most data-driven attribution systems, including Google Analytics 4 DDA (GA4 docs) and HockeyStack's attribution (HockeyStack).
What is Markov-chain attribution and how does it differ from Shapley? Markov-chain attribution models the buyer journey as a sequence of transitions and assigns credit by measuring the drop in conversion probability if each channel is removed (Markov chain, Wikipedia; HockeyStack). It is more robust than Shapley at low path volumes because it does not require enumerating every coalition.
How long should the attribution window be for B2B? The right window is the one that matches the actual sales cycle. HockeyStack's B2B attribution overview recommends a minimum 6-month window for enterprise motions and 90 days for PLG. Salesforce Customizable Campaign Influence defaults to 12 months, and HubSpot attribution reporting defaults to 90 days but supports up to 24 months.
Should B2B attribution run at lead, opportunity, or account level? Account level for ABM motions, opportunity level for most enterprise B2B, lead or contact level only for volume-driven motions where the lead is the revenue-relevant object. Dreamdata B2B attribution and RollWorks both argue for account-level as the default for B2B, and the argument is correct for any motion with a multi-person buying committee.
What is the relationship between attribution and incrementality testing? They answer different questions. Attribution answers "how much credit does each touch deserve for this conversion?", which is backward-looking and correlational. Incrementality answers "if we removed this touch, would the conversion still happen?", which is forward-looking and causal (Amplitude; A/B testing, Wikipedia). Both matter, and intent programs should run both.
How does FL0 fit into B2B intent attribution? FL0 is an AI revenue engine that identifies in-market buyers from real-time intent signals and acts on them automatically, with holdout-based incrementality as the primary measurement and account-level multi-touch attribution for backward-looking reporting. It is based in Sydney, Australia, and it sits in the visitor identification and signal orchestration category alongside vendors like Warmly and Common Room.
Sources
1. Wikipedia, Multi-touch attribution, https://en.wikipedia.org/wiki/Multi-touch_attribution
2. Wikipedia, Shapley value, https://en.wikipedia.org/wiki/Shapley_value
3. Wikipedia, Markov chain, https://en.wikipedia.org/wiki/Markov_chain
4. Google Analytics, Data-driven attribution documentation, https://support.google.com/analytics/answer/10596866
5. Google Ads, Attribution models, https://support.google.com/google-ads/answer/6394265
6. Adobe Experience League, Attribution IQ in Analysis Workspace, https://experienceleague.adobe.com/en/docs/analytics/analyze/analysis-workspace/attribution/overview
7. HockeyStack, W-shaped attribution, https://www.hockeystack.com/blog/w-shaped-attribution
8. Dreamdata, G2 intent data benchmarks report, https://dreamdata.io/blog/benchmarks-report-measuring-g2-intent-data-impact-b2b-buyer-journeys
9. Dreamdata, B2B attribution models, https://dreamdata.io/blog/b2b-attribution-models
10. HubSpot, Attribution reports, https://knowledge.hubspot.com/reports/create-attribution-reports
11. Salesforce, Customizable Campaign Influence, https://help.salesforce.com/s/articleView?id=sf.customizable_campaign_influence.htm&type=5
12. Perkins Coie, California B2B data compliance next steps, https://perkinscoie.com/insights/update/compliance-next-steps-employment-and-b2b-data-california