What happens when the very systems designed to protect websites end up blocking the good guys?
Our browser automation experience tells us the reality is far more complex than most people realise. When detection systems mistakenly flag legitimate users, the resulting friction slows down real customers and, if handled carelessly, drives loyal ones away.
The Bot Detection Paradox
We live in an age where automated bot traffic has surpassed human-generated traffic for the first time in a decade, constituting 51% of all web traffic in 2024. Even more concerning, bad bots account for 37% of internet traffic.
So naturally, companies are scrambling to implement bot detection systems. But this creates a new problem: the same systems that catch malicious bots are increasingly catching legitimate automation tools:
- Research crawlers collecting public data for academic studies
- Price monitoring tools helping consumers find better deals
- SEO audit tools analysing website performance
- Accessibility tools helping disabled users navigate the web
- Business intelligence gathering for competitive analysis
This is why SEO teams working with third-party crawlers (like SEO audit tools that use Googlebot user agents) now coordinate with security teams in advance, to prevent false positives and ensure legitimate business activities aren't blocked.
The False Positive Nightmare
Let's paint a picture of what false positives look like in the real world.
Picture a traveller locked out of their bank account or their social media business account because a detection system decided they were a bot: their request frequency looked automated, or the requests came from an unfamiliar location.
But it's not just individual users. Legitimate businesses using automation for perfectly valid reasons are getting hammered:
- Market researchers trying to gather pricing data for competitive analysis
- Travel aggregators checking flight availability across multiple airlines
- News monitoring services tracking mentions for PR teams
- Academic researchers studying online discourse and social patterns
Why Good Actors Get Flagged
The technical reality is that bot detection systems look for patterns. And sometimes, legitimate automation exhibits patterns that look suspicious:
Speed and Efficiency
Power users who click, type, or switch pages rapidly mimic bot speed. If you're a marketer checking 20 product pages in a minute, you might trip the same alarms as a malicious scraper.
Network Infrastructure
VPNs route traffic through shared exit servers, and bots use those same servers. If your IP address is associated with heavy traffic or a bot-heavy region, you get flagged.
Browser Fingerprinting
Modern bot detection analyses 70+ signals including canvas rendering, WebGL properties, audio API responses, and hardware specifications. A significant fraction of fingerprinting attributes are specially crafted challenges designed to detect automation frameworks and headless browsers. The problem? Many legitimate tools are built on those same frameworks.
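To make this concrete, here's a minimal sketch of the kind of probes a fingerprinting script runs, written with Playwright. The probe set is illustrative of well-known signals, not any vendor's actual challenge suite:

```python
# A minimal sketch of common fingerprinting probes, run via Playwright
# (pip install playwright). Illustrative only.
from playwright.sync_api import sync_playwright

AUTOMATION_PROBES = {
    # Most automation frameworks set navigator.webdriver to true
    "webdriver_flag": "() => navigator.webdriver === true",
    # Headless Chrome historically exposed an empty plugin list
    "no_plugins": "() => navigator.plugins.length === 0",
    # A Chrome user agent without window.chrome is a red flag
    "missing_chrome_object": "() => /Chrome/.test(navigator.userAgent) && !window.chrome",
}

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    results = {name: page.evaluate(expr) for name, expr in AUTOMATION_PROBES.items()}
    print(results)  # an unmodified headless run typically trips several probes
    browser.close()
```

Run this with an off-the-shelf headless browser and several probes come back positive, which is exactly why legitimate tools built on these frameworks get swept up alongside malicious ones.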
Geographic Anomalies
Travelling abroad or using a VPN to access a site from a new region clashes with your usual profile, raising suspicion.
The Business Impact
This isn't just a technical problem. It's a business problem with real costs:
- Lost Revenue: When legitimate customers can't access services, conversion drops are measurable and immediate. DataDome's implementation of Device Check reduced false positives by 80% for one luxury brand, demonstrating that even industry leaders struggle with this balance.
- Compliance Issues: For marketing and sales teams dealing with lead fraud, TrustedForm Bot Detection delivers clear, measurable value by removing leads that may not meet TCPA consent requirements due to non-human submission. For lead generation companies, TCPA violations cost £400-£1,200 per incident. False negatives (missing bots) can result in compliance penalties, whilst false positives (blocking legitimate leads) directly reduce revenue.
- Operational Overhead: The resources required to investigate and manage false positives drain technical teams.
- Stifled Innovation: Companies hold back on automation projects for fear of being blocked, and progress slows with them.
The Arms Race Nobody Wins
Here's what's particularly frustrating: defenders face a no-win tradeoff. Aggressive blocking risks false positives against legitimate users, whilst permissive rules let sophisticated bots through.
We're stuck in this endless cycle:
- Bot detection gets more aggressive
- Legitimate tools get caught in the crossfire
- Good actors have to implement workarounds
- Those workarounds make them look more like bad actors
- Detection systems get even more aggressive
Why Good Actors Need Stealth Today
Here's the uncomfortable truth: in today's broken ecosystem, proving you're legitimate often makes you look like you're evading detection.
The Legitimacy Trap: Caught in the tradeoff described above, many vendors operate at 0.75% false positive rates (75 times higher than the 0.01% ideal) because tighter thresholds would block too many paying customers.
The paradox is stark: good actors need defensive capabilities not because they want to be deceptive, but because the current detection ecosystem gives them no choice.
At Notte, we provide anti-detection capabilities today because websites' broken detection systems require it. But we're also actively working towards a future where such capabilities won't be necessary.
The Path to Mutual Benefit
The reality is that both sides want the same thing:
- Websites want to protect their resources and users from malicious activity.
- Legitimate automation wants to provide value without causing harm.
The current approach of aggressive blocking hurts everyone. As noted above, many systems operate at false positive rates of around 0.75%, 75 times higher than the 0.01% ideal.
Real Solutions for Real Problems
Here's what actually works:
1. Intent-Based Detection
Instead of just looking at behaviour patterns, systems need to understand intent. A price comparison bot checking 100 products has very different intent than a bot trying to buy up limited inventory. DataDome and other leading vendors are pioneering intent-based detection approaches that analyse why automation is happening, not just that it's happening. This represents a fundamental shift from pattern matching to purpose understanding.
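What might that look like in code? Here's a deliberately simplified sketch of classifying intent from behavioural features; the feature names and thresholds are hypothetical, not a description of DataDome's or any other vendor's model:

```python
# Illustrative only: a toy intent classifier over behavioural features.
# Feature names and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SessionFeatures:
    distinct_products_viewed: int
    checkout_attempts: int
    inventory_hold_requests: int

def classify_intent(s: SessionFeatures) -> str:
    # A price-comparison crawler reads many product pages but never buys
    if s.distinct_products_viewed > 50 and s.checkout_attempts == 0:
        return "price_monitoring"
    # Inventory hoarding touches few pages but hammers checkout and holds
    if s.inventory_hold_requests > 5 or s.checkout_attempts > 3:
        return "inventory_abuse"
    return "undetermined"
```

The point is the shift in question: not "does this session look automated?" but "what is this automation trying to do?"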
2. Reputation Systems
Detection engines can maintain a reputation score for each user based on their history. Account age, activity patterns, and any previously observed bot activity all push that score up or down.
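A minimal sketch of such a score, assuming the three factors above (the weights are arbitrary placeholders, not a production model):

```python
# A minimal sketch of a reputation score in [0, 1]; higher means more trusted.
from dataclasses import dataclass

@dataclass
class UserHistory:
    account_age_days: int
    sessions_completed: int
    prior_bot_flags: int

def reputation_score(h: UserHistory) -> float:
    score = 0.2  # baseline trust for a brand-new account
    score += min(h.account_age_days / 365, 1.0) * 0.4    # age builds trust
    score += min(h.sessions_completed / 100, 1.0) * 0.4  # so does clean activity
    score -= min(h.prior_bot_flags * 0.3, 0.8)           # bot flags cost dearly
    return max(0.0, min(1.0, score))
```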
3. Adaptive Responses
Not every bot needs to be blocked. Some just need to be rate-limited or guided to use official APIs. Solutions like intelligent rate limiting and progressive friction provide graduated responses rather than binary block/allow decisions.
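Here's a sketch of what graduated responses can look like, combining a bot-probability estimate with the reputation score from the previous section; the thresholds are illustrative:

```python
# A sketch of progressive friction: the response escalates with risk
# instead of jumping straight to a block. Thresholds are illustrative.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    RATE_LIMIT = "rate_limit"  # slow the client down, don't lock it out
    CHALLENGE = "challenge"    # e.g. CAPTCHA: a human false positive can recover
    BLOCK = "block"

def graduated_response(bot_probability: float, reputation: float) -> Action:
    risk = bot_probability * (1.0 - reputation)  # good reputation buys leniency
    if risk < 0.2:
        return Action.ALLOW
    if risk < 0.5:
        return Action.RATE_LIMIT
    if risk < 0.8:
        return Action.CHALLENGE
    return Action.BLOCK
```

The key design choice is that a false positive costs a legitimate user a few seconds of friction rather than their access.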
4. Cryptographic Verification Standards
The most promising development is the emergence of cryptographic bot verification. OpenAI has begun signing all Operator requests using HTTP Message Signatures (RFC 9421), allowing site owners to verify that a request genuinely originates from OpenAI's infrastructure without relying on brittle IP allowlists.
This approach, co-developed by OpenAI and Cloudflare as the Web Bot Auth standard, uses cryptographic signatures to verify bot identity. Bot operators publish public keys at /.well-known/http-message-signatures-directory, and requests include signed headers that websites can verify.
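As a rough illustration of the signing flow (real RFC 9421 implementations follow the spec's exact component serialisation rules; this sketch only shows the core idea of binding a signature to request components and a timestamp):

```python
# Simplified sketch of RFC 9421-style request signing with Ed25519.
import base64
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, a long-lived bot key

def sign_request(method: str, authority: str, path: str) -> dict:
    created = int(time.time())
    # Binding method, host, path, and creation time means the signature
    # cannot be lifted onto a different request or replayed much later.
    base = (
        f'"@method": {method}\n'
        f'"@authority": {authority}\n'
        f'"@path": {path}\n'
        f'"@signature-params": ("@method" "@authority" "@path");created={created}'
    )
    sig = private_key.sign(base.encode())
    return {
        "Signature-Input": f'sig1=("@method" "@authority" "@path");created={created}',
        "Signature": f"sig1=:{base64.b64encode(sig).decode()}:",
    }

headers = sign_request("GET", "example.com", "/products")
# The receiving site fetches the operator's public key from
# /.well-known/http-message-signatures-directory, rebuilds the same
# signature base, and verifies the signature against it.
```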
The benefits over traditional IP-based allowlists are clear:
- No infrastructure dependencies: You don't need to maintain up-to-date lists of IP ranges.
- Tamper resistance: Signatures are bound to the request content and a timestamp, so they can't be lifted onto a different request or replayed later.
- Flexible deployment: Bot identity is verified at the application layer, even if requests are routed through proxies or CDNs.
Cloudflare has integrated this directly into their Verified Bots program, and the IETF has established a working group to develop these standards further.
5. Human Override Capabilities
Under GDPR Article 22, data subjects have the right not to be subject to solely automated decisions that produce legal or similarly significant effects. When bot detection systems automatically block account access without any human review, they may violate GDPR's requirement for human intervention rights. That makes aggressive bot detection without an override path a regulatory risk organisations must address.
The Future We're Building
At Notte we envision a future where:
- Legitimate automation can operate without fear of being blocked.
- Websites can protect themselves without collateral damage.
- Good actors can prove their legitimacy without compromising their capabilities.
- The internet remains open for innovation whilst being protected from abuse.
This isn't just about technology. It's about changing the conversation from "how do we block bots?" to "how do we enable good automation whilst stopping bad actors?"
Notte's Approach
We're not just building better anti-detection technology. We're building a better approach to the entire problem. Yes, we provide sophisticated stealth capabilities, but we recognise these are defensive measures necessitated by today's broken ecosystem.
Our anti-detection playbook helps legitimate automation work respectfully by:
- Implementing intelligent rate limiting to minimise server load
- Respecting robots.txt directives where appropriate
- Providing configuration options for self-identification
- Building retry logic that backs off rather than hammers servers (see the sketch below)
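A minimal sketch of several of these behaviours together; the bot identity string and retry limits here are hypothetical:

```python
# A sketch of polite automation: honour robots.txt, identify yourself,
# and back off on errors instead of hammering the server.
import random
import time
import urllib.robotparser
from urllib.parse import urlparse

import requests

USER_AGENT = "example-bot/1.0 (+https://example.com/bot)"  # hypothetical identity

def allowed_by_robots(url: str) -> bool:
    parsed = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(USER_AGENT, url)

def polite_get(url: str, max_retries: int = 5) -> requests.Response:
    if not allowed_by_robots(url):
        raise PermissionError(f"robots.txt disallows fetching {url}")
    for attempt in range(max_retries):
        resp = requests.get(url, headers={"User-Agent": USER_AGENT})
        if resp.status_code not in (429, 500, 502, 503):
            return resp
        # Honour a seconds-valued Retry-After header if the server sends one;
        # otherwise fall back to exponential backoff with jitter.
        retry_after = resp.headers.get("Retry-After", "")
        delay = float(retry_after) if retry_after.isdigit() else 2.0 ** attempt
        time.sleep(delay + random.uniform(0, 1))
    raise RuntimeError(f"gave up on {url} after {max_retries} attempts")
```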
As the ecosystem evolves towards intent-based detection and cryptographic verification standards like Web Bot Auth, Notte will enable our users to self-identify for legitimate use cases.
Moving Forward Together
The current state of bot detection is broken. But it doesn't have to stay that way.
The question isn't whether we need bot detection. We absolutely do. The question is how we implement it in a way that protects without destroying legitimate use cases.
The emergence of standards like Web Bot Auth shows the path forward. When OpenAI signs requests cryptographically, when Cloudflare verifies those signatures at the edge, when developers can implement transparent automation that proves its identity, we move closer to an ecosystem that enables the good whilst stopping the bad.
At Notte, we're providing the tools legitimate users need today whilst advocating for the standards the ecosystem needs tomorrow. Because at the end of the day, the internet should work for everyone: humans, good bots, and the legitimate automation that makes our digital lives better.
Learn more about our innovative approach to web automation and how our anti-detection playbook helps legitimate automation work respectfully.
