Documentation Index

Fetch the complete documentation index at: https://docs.notte.cc/llms.txt

Use this file to discover all available pages before exploring further.

How do you want to get started?

Browser Session: connect to a cloud browser via CDP
Web Agent: run an AI agent on the browser
Scraping: extract structured data from a page

Prerequisites

Before you start, you’ll need an API key and a local environment configured to use it.
1. Get an API key

Create an account on the Console to get your API key.
2. Export your API key

export NOTTE_API_KEY=your_api_key_here
3. Install the SDK

pip install notte-sdk playwright
The quickstarts below use the official Notte SDK (Python shown; a JavaScript SDK is also available). Start with Browser Session if you want the shortest path to a working integration.
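Since `NotteClient()` is constructed without arguments in the examples below, it presumably reads `NOTTE_API_KEY` from the environment. A minimal fail-fast sketch you can run before creating the client (the helper name is ours, not part of the SDK):

```python
import os

def require_api_key(name: str = "NOTTE_API_KEY") -> str:
    """Return the key from the environment, or raise a clear error if missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it before creating NotteClient()"
        )
    return key
```

Calling this once at startup turns a confusing downstream authentication failure into an immediate, actionable error.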

Browser Session

Create a cloud browser session and control it with Playwright over CDP.
from playwright.sync_api import sync_playwright
from notte_sdk import NotteClient

client = NotteClient()

with client.Session(open_viewer=True) as session:
    cdp_url = session.cdp_url()

    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(cdp_url)
        page = browser.contexts[0].pages[0]
        page.goto("https://www.google.com")
        page.screenshot(path="screenshot.png")
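Playwright's `connect_over_cdp` accepts a WebSocket (`ws://`, `wss://`) or HTTP endpoint, so a quick sanity check on the URL the session hands back can catch configuration mistakes early. A purely illustrative sketch, not part of either SDK:

```python
from urllib.parse import urlparse

def is_valid_cdp_endpoint(url: str) -> bool:
    """Playwright's connect_over_cdp accepts ws(s):// or http(s):// endpoints."""
    return urlparse(url).scheme in ("ws", "wss", "http", "https")
```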

Learn more about Sessions

Proxies, anti-detection, captcha solving, and more.

Web Agent

Run an AI agent that browses and completes tasks autonomously.
from notte_sdk import NotteClient

client = NotteClient()

with client.Session(open_viewer=True) as session:
    agent = client.Agent(session=session, max_steps=5)

    response = agent.run(
        task="Browse the Notte docs and book a demo for me",
        url="https://docs.notte.cc"
    )
    print(response)
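Agent runs depend on live pages and a model, so transient failures are possible. A generic retry wrapper, independent of the Notte SDK (the name, attempt count, and delay are our own choices):

```python
import time

def run_with_retries(run_fn, attempts: int = 3, delay: float = 2.0):
    """Call run_fn(); on an exception, wait `delay` seconds and retry, up to `attempts` tries."""
    last_error = None
    for attempt in range(attempts):
        try:
            return run_fn()
        except Exception as error:  # broad on purpose: this is a sketch
            last_error = error
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_error
```

To use it, wrap the `agent.run(...)` call in a lambda and pass it in; successful results pass through unchanged, and the last exception is re-raised once attempts are exhausted.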

Learn more about Agents

Models, reasoning, structured output, and more.

Scraping

Extract structured data from any page using AI.
from pydantic import BaseModel
from notte_sdk import NotteClient

class HackerNewsPost(BaseModel):
    title: str
    url: str
    points: int
    author: str

class HackerNewsFeed(BaseModel):
    posts: list[HackerNewsPost]

client = NotteClient()

result = client.scrape(
    url="https://news.ycombinator.com",
    response_format=HackerNewsFeed,
    instructions="Extract the top 5 posts from the front page"
)

for i, post in enumerate(result.data.posts, 1):
    print(f"{i}. {post.points} - {post.title}")
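The `response_format` models are ordinary Pydantic models, so you can exercise the same schema locally on hand-written data before running a scrape. A sketch reusing the models above (the sample payload is made up, not real scrape output):

```python
from pydantic import BaseModel

class HackerNewsPost(BaseModel):
    title: str
    url: str
    points: int
    author: str

class HackerNewsFeed(BaseModel):
    posts: list[HackerNewsPost]

# Made-up payload shaped like a scrape result, used only to exercise the schema.
sample = {
    "posts": [
        {
            "title": "Example post",
            "url": "https://example.com",
            "points": 120,
            "author": "alice",
        }
    ]
}

feed = HackerNewsFeed.model_validate(sample)
```

Validating a sample this way confirms field names and types line up before the schema is sent along with a scrape request.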

Learn more about Scraping

Structured extraction, selectors, and more.