Show HN: A Prompt-Injection Firewall for AI Agents and RAG Pipelines

We built SafeBrowse — an open-source prompt-injection firewall for AI systems.

Instead of relying on better prompts, SafeBrowse enforces a hard security boundary between untrusted web content and LLMs.

It blocks hidden instructions, policy violations, and poisoned data before the AI ever sees it.

Features: • Prompt injection detection (50+ patterns) • Policy engine (login/payment blocking) • Fail-closed by design • Audit logs & request IDs • Python SDK (sync + async) • RAG sanitization

PyPI: pip install safebrowse

Looking for feedback from AI infra, security, and agent builders.

2 points | by AadilSayed 2 hours ago

2 comments

  • AadilSayed 2 hours ago
    Introducing SafeBrowse

    A prompt-injection firewall for AI agents.

    The web is not safe for AI. We built a solution.

    The problem:

    AI agents and RAG pipelines ingest untrusted web content.

    Hidden instructions can hijack LLM behavior — without humans ever seeing it.

    Prompting alone cannot solve this.

    The solution:

    SafeBrowse enforces a hard security boundary.

    Before: Web → LLM → Hope nothing bad happens

    After: Web → SafeBrowse → LLM

    The AI never sees malicious content.

    See it in action:

    Scans content before your AI Blocks prompt injection (50+ patterns) Blocks login/payment forms Sanitizes RAG chunks

  • AadilSayed 2 hours ago
    [dead]