Anti-Detection for AI Agents: A Practical Guide
November 12, 2025


By Sam

Your agent works perfectly in testing. Then you deploy it against a real site and it's blocked within three requests. Sound familiar?

Web automation has an adversarial detection problem. Sites don't want bots, and they're getting better at spotting them. Most developers just want to toggle "stealth mode" and move on - and that's the right instinct. The problem? Under the hood, anti-detection isn't a single switch. It's a complex coordination of proxies, browser fingerprints, and behavioural patterns. Get one factor wrong and the whole thing falls apart.

Why Stealth Actually Matters

Sites detect automation through multi-signal fusion. They're not just checking whether you're using Selenium; they're correlating:

  • IP signals: Is this a datacenter IP? Has it made 1,000 requests today?
  • Browser fingerprints: Do the headers match? Are the fonts plausible for this OS?
  • Behavioural traces: Is the click cadence robotic? Are you navigating too fast?

Fix one signal without the others and you still fail. A residential proxy with a botched browser fingerprint gets flagged. Perfect fingerprinting from a datacenter IP gets blocked. This is why "just use a proxy" doesn't work.
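To make the fusion concrete, here's a toy scoring sketch. The signals, weights, and threshold are all invented for illustration; real detection vendors use far richer features and models:

def detection_score(signals: dict) -> float:
    """Toy multi-signal fusion: every weight here is made up."""
    score = 0.0
    if signals["datacenter_ip"]:
        score += 0.5   # datacenter ASNs are heavily penalised
    if signals["requests_today"] > 500:
        score += 0.3   # unusual volume from one IP
    if signals["ua_os"] != signals["font_os"]:
        score += 0.6   # impossible fingerprint combination
    if signals["avg_click_interval_ms"] < 100:
        score += 0.4   # robotic click cadence
    return score

# A residential IP alone doesn't save you: mismatched fonts plus
# robotic clicks still push the score well past a plausible threshold.
print(detection_score({
    "datacenter_ip": False,
    "requests_today": 40,
    "ua_os": "macOS",
    "font_os": "Windows",
    "avg_click_interval_ms": 80,
}))  # 1.0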

The Strategy: Coordinated Identity

The key insight is treating stealth as a persistent identity problem rather than a per-request problem.

Your agent shouldn't randomise everything on each session. That's detectable. Instead, each agent should maintain a consistent, plausible digital identity over time:

1. Sticky sessions per persona

One persona = one proxy + one browser profile + one set of cookies. That combination builds browsing history and looks human.
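A minimal sketch of that bookkeeping in plain Python. The Persona class is illustrative, not part of any SDK; only NotteProxy is borrowed from the example later in this post:

from dataclasses import dataclass, field

from notte_sdk.types import NotteProxy

@dataclass
class Persona:
    name: str
    proxy_country: str   # one proxy identity, reused every session
    user_agent: str      # one fingerprint, never rotated mid-persona
    cookies: dict = field(default_factory=dict)  # accrues real history

    def session_config(self) -> dict:
        # Derive session settings from the persona every time, so
        # consecutive sessions look like the same returning visitor.
        return {
            "proxies": [NotteProxy.from_country(self.proxy_country)],
            "user_agent": self.user_agent,
        }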

2. Aligned fingerprints

Your timezone, fonts, and platform strings must match your user agent. A macOS UA with Windows fonts is impossible. Sites know this.
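A pre-flight consistency check might look like this sketch. The token-to-platform table is a deliberately simplified assumption; real checks cover many more attributes (timezone, WebGL, canvas, and so on):

# Simplified mapping from a user-agent OS token to expected values.
EXPECTED_PLATFORM = {
    "Windows NT": ("Win32", "windows-fonts"),
    "Macintosh": ("MacIntel", "macos-fonts"),
    "X11; Linux": ("Linux x86_64", "linux-fonts"),
}

def fingerprint_is_consistent(user_agent: str, platform: str, font_set: str) -> bool:
    for token, (want_platform, want_fonts) in EXPECTED_PLATFORM.items():
        if token in user_agent:
            return platform == want_platform and font_set == want_fonts
    return False  # unrecognised OS token: treat as inconsistent

# The "impossible" combination from above fails immediately:
mac_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
print(fingerprint_is_consistent(mac_ua, "MacIntel", "windows-fonts"))  # False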

3. Behavioural realism

Add human-like delays. Respect rate limits. Navigation timing matters as much as headers.
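For instance, a minimal pacing helper, assuming your agent has an explicit action loop. Gaussian jitter around a base delay reads more naturally than fixed sleeps:

import random
import time

def human_pause(base: float = 1.2, jitter: float = 0.4) -> None:
    """Sleep for a randomised, human-plausible interval."""
    time.sleep(max(0.3, random.gauss(base, jitter)))  # floor: never robotically fast

for url in ["https://example.com/a", "https://example.com/b"]:
    # session.observe(url=url)  # your agent's navigation step goes here
    human_pause()               # pace actions instead of firing back-to-back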

4. Iterate per site

Different sites have different detection. Keep a catalog of working configs per target and A/B test changes when something breaks.
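One possible shape for that catalog, sketched with a local JSON file. The filename and helpers are hypothetical; store only primitive values (country code, user agent, viewport) so they serialise cleanly:

import json
from pathlib import Path

CATALOG = Path("stealth_catalog.json")  # illustrative filename

def load_config(domain: str) -> dict | None:
    """Return the last config that worked for this domain, if any."""
    if CATALOG.exists():
        return json.loads(CATALOG.read_text()).get(domain)
    return None

def save_config(domain: str, config: dict) -> None:
    """Record a config that succeeded so the next run starts from it."""
    catalog = json.loads(CATALOG.read_text()) if CATALOG.exists() else {}
    catalog[domain] = config
    CATALOG.write_text(json.dumps(catalog, indent=2))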

main.py
from notte_sdk import NotteClient
from notte_sdk.types import NotteProxy

notte = NotteClient()

# Example stealth configuration.
# A single static config like this is itself a fingerprint; vary these
# values across personas (not within a persona) to improve your odds.
stealth_config = {
    "solve_captchas": True,
    "proxies": [NotteProxy.from_country("us")],
    "browser_type": "chromium",  # must agree with the user agent below
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "viewport_width": 1920,
    "viewport_height": 1080,
}

# Try the stealth configuration
with notte.Session(**stealth_config) as session:
    result = session.observe(url="https://example.com")
    print("Success with fallback configuration")

What Notte Does

Notte handles the coordination problem. Instead of manually stitching together proxies, browser configs, and personas, you get:

Built-in residential proxies

US-based residential IPs out of the box. Need geo-targeting? NotteProxy.from_country("fr") switches to French IPs.

Browser fingerprint control

Choose your browser type (Chromium, Firefox, Chrome), set viewport dimensions, customise user agents. Toggle headless mode when you need visibility for debugging without sacrificing stealth.

Built-in CAPTCHA solving

Toggle solve_captchas=True and let the system handle it. (Note: not all CAPTCHA types are supported. Some complex CAPTCHAs may still require manual intervention.)

Persistent personas

Personas come with realistic emails and phone numbers: one persona, one identity, repeatable results.

The architecture forces you to think in the right order: identity first, then behaviour, then tooling. You're not configuring a headless browser; you're building a plausible digital human.

Looking Forward

As AI agents become more capable, the stealth problem will likely evolve in both directions. Detection systems will get more sophisticated, possibly analysing deeper behavioural patterns and session histories. But agents will also get better at genuinely human-like interaction: natural language form filling, realistic navigation paths, and adaptive timing.

The winners won't be the ones playing an arms race of evasion techniques. They'll be the ones building agents that operate more like humans naturally do, with persistent identities that accrue legitimate history, realistic interaction patterns, and respect for the sites they're accessing. Stealth becomes less about hiding and more about being a good actor on the web.

This is why getting the fundamentals right now matters. Building on a foundation of coordinated identity, proper fingerprinting, and behavioural realism positions you to scale as both detection and agent capabilities advance.

The Bottom Line

Anti-detection isn't a library you import. It's a discipline. Treat identity as a coordinated artifact: network, browser, storage, and behaviour all in sync. Log your sessions, iterate on failures, and build a catalog of working configs per site.

Get the fundamentals right: residential proxies, aligned fingerprints, persistent personas, and realistic behaviour. Layer them together, not separately. That's the difference between agents that work in demos and agents that run in production.