Salesforce’s Spring ’26 release quietly introduced one of the most meaningful upgrades to Marketing Cloud Engagement in years: Bot Scoring with Einstein Metrics Guard.
If you work in email, CRM, or lifecycle automation, you already know the pain of bot‑inflated engagement. Opens that aren’t real. Clicks that never happened. Journeys that fire when they shouldn’t. Reporting that looks great—until you realise it’s all noise.
This update finally gives marketers a way to separate human engagement from machine activity, and it’s going to reshape how we measure success in Marketing Cloud.
In this post, I’ll break down what Bot Scoring actually is, why it matters, and how you can start using it to clean up your data and make smarter decisions.
Why Bot Scoring Matters (and why it’s long overdue)
For years, email marketers have been dealing with a growing problem: security filters and automated scanners trigger opens and clicks before a human ever sees the email.
This creates a cascade of issues:
- Inflated open and click rates
- Journeys that trigger based on false engagement
- A/B tests that become meaningless
- Lead scoring models that reward the wrong behaviour
- Deliverability decisions based on corrupted signals
Einstein Metrics Guard has existed for a while as a way to flag suspicious engagement. But Spring ’26 takes it a step further with Bot Scoring—a more intelligent, more granular way to classify activity.
What Bot Scoring Actually Does
Bot Scoring uses Einstein’s behavioural models to assign a probability that an open or click came from a bot rather than a human.
Instead of a simple “bot/not bot” flag, Salesforce now provides a score that reflects the likelihood of automation.
This means:
- You can filter reporting by human‑only engagement
- You can exclude bot activity from Journey Builder decisions
- You can build segmentation that ignores inflated metrics
- You can finally trust your click‑through rates again
It’s not just a filter—it’s a confidence model that gives you control over how strict or lenient you want to be.
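To make that concrete, here’s a rough sketch of what choosing a strictness threshold could look like once the score is queryable in SQL (more on data views later). The BotScore column and the 0.5 / 0.8 cut-offs are placeholders of mine, not confirmed Spring ’26 field names or values:

    SELECT
        SubscriberKey,
        JobID,
        EventDate,
        /* BotScore is a placeholder name for the bot-likelihood field */
        CASE
            WHEN BotScore >= 0.8 THEN 'Bot-likely'
            WHEN BotScore >= 0.5 THEN 'Uncertain'
            ELSE 'Human-likely'
        END AS EngagementClass
    FROM _Click

Tightening or loosening those cut-offs is exactly the strict-versus-lenient decision the confidence model hands back to you.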
How Einstein Determines Bot Activity
Salesforce hasn’t published the full model (and they shouldn’t), but Bot Scoring typically looks at patterns like:
- Instantaneous opens or clicks after send
- Multiple clicks on every link in the email
- Repeated engagement from the same IP range
- Activity that happens at machine‑level speed
- Engagement that doesn’t match historical behaviour
In other words, it’s not guessing—it’s pattern recognition at scale.
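To be clear, the query below isn’t Salesforce’s model; it’s just a rough way to see one of those signals in your own account today. It approximates the “instantaneous open” pattern by joining the standard _Sent and _Open data views, with a five-second window chosen purely for illustration:

    SELECT
        o.SubscriberKey,
        o.JobID,
        s.EventDate AS SentDate,
        o.EventDate AS OpenDate,
        DATEDIFF(SECOND, s.EventDate, o.EventDate) AS SecondsToOpen
    FROM _Open o
    INNER JOIN _Sent s
        ON s.SubscriberKey = o.SubscriberKey
        AND s.JobID = o.JobID
    WHERE DATEDIFF(SECOND, s.EventDate, o.EventDate) <= 5

Opens that land inside that window are a classic fingerprint of pre-fetching scanners rather than people.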
Where You’ll See Bot Scoring in Marketing Cloud
Bot Scoring surfaces in several key places:
1. Email Studio Reporting
You can now filter engagement metrics to show:
- All activity
- Human‑only activity
- Bot‑likely activity
This alone will change how teams report performance.
2. Journey Builder
This is the big one. You can now choose to evaluate decision splits against human engagement only, preventing journeys from firing on false positives from bot clicks.
3. Einstein Engagement Scoring
Bot activity is excluded from the scoring model, meaning your predictive segments become more accurate.
4. Data Views
New fields allow you to query bot‑likelihood directly in SQL, giving you full control over how you treat the data.
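As a sketch of what that could look like, here’s a human-only unique open count per send. BotScore is again my placeholder; swap in whatever column the release notes actually document:

    SELECT
        JobID,
        COUNT(DISTINCT SubscriberKey) AS HumanUniqueOpens
    FROM _Open
    WHERE IsUnique = 1
        /* BotScore is a placeholder; use the real bot-likelihood column name */
        AND BotScore < 0.5
    GROUP BY JobID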
What This Means for Your Reporting and Strategy
Cleaner dashboards
Expect your open and click rates to drop—but in a good way. You’ll finally be looking at real engagement.
More reliable A/B tests
No more “Version B won because a firewall clicked every link.”
Better deliverability decisions
You can spot contacts whose only “engagement” came from a scanner and make proper suppression and sunsetting decisions about them, instead of mailing dead addresses that quietly damage your sender reputation.
More accurate personalisation
Journey logic becomes more trustworthy when it’s based on human behaviour.
How I Recommend Using Bot Scoring
Here’s how I’m advising teams to adopt it:
1. Start with reporting
Switch your dashboards to human‑only metrics. Get a baseline. Expect a shock.
2. Update your journeys
Any journey that relies on opens or clicks should be reviewed. Use human‑only engagement for decision splits.
3. Clean your engagement segments
Remove bot‑likely activity from:
- “Engaged in last 90 days” segments (see the sketch after this list)
- Lead scoring models
- Re‑engagement suppression lists
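For the first of those, a human-only “engaged in the last 90 days” population could be rebuilt along these lines (BotScore is once more a placeholder for the real bot-likelihood field):

    SELECT SubscriberKey
    FROM _Open
    WHERE EventDate > DATEADD(DAY, -90, GETUTCDATE())
        AND BotScore < 0.5 /* placeholder field name and threshold */
    UNION
    SELECT SubscriberKey
    FROM _Click
    WHERE EventDate > DATEADD(DAY, -90, GETUTCDATE())
        AND BotScore < 0.5

That same WHERE clause is essentially what the next step asks for: take the queries feeding your lead scoring and suppression lists and add the bot-likelihood filter.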
4. Update your SQL queries
If you’re using _Open or _Click data views, incorporate the new bot‑likelihood fields.
5. Communicate the change internally
Stakeholders will see lower numbers. Explain why that’s a good thing.
Final Thoughts
Bot Scoring with Einstein Metrics Guard is one of those updates that doesn’t look flashy on the surface, but it solves a real, everyday problem for marketers.
For anyone working in Salesforce Marketing Cloud—especially in high‑security industries like finance, healthcare, or B2B SaaS—this is a genuine step forward in data quality.
It’s going to help teams make better decisions, build smarter journeys, and finally trust their engagement metrics again.
If you’re already using it or planning to roll it out, I’d love to hear how you’re approaching it. And if you want help integrating Bot Scoring into your journeys or reporting, feel free to reach out.
