New to RAXE? Start with the Quickstart and learn how detection works.

Installation

pip install "raxe[wrappers]"

Basic Usage

basic.py
from raxe import RaxeOpenAI

# Drop-in replacement for OpenAI client
client = RaxeOpenAI(api_key="sk-...")

# The prompt is automatically scanned for threats before the API call
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is AI?"}]
)

print(response.choices[0].message.content)

How It Works

  1. User sends request through RaxeOpenAI
  2. RAXE scans the prompt before calling OpenAI
  3. If threat detected → RaxeBlockedError raised
  4. If safe → Request forwarded to OpenAI
  5. Response returned normally
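The flow above can be sketched as a toy pre-call guard. This is a hypothetical illustration, not RAXE's real internals: `BlockedError`, `TOY_RULES`, `scan`, and `guarded_call` are invented names, and a tiny pattern table stands in for the detection engine.

```python
# Hypothetical sketch of the scan-before-forward flow (not RAXE internals).
class BlockedError(Exception):
    def __init__(self, rule_id: str):
        super().__init__(f"blocked by rule {rule_id}")
        self.rule_id = rule_id

TOY_RULES = {"ignore previous instructions": "PI-001"}  # pattern -> rule id

def scan(text: str) -> None:
    for pattern, rule_id in TOY_RULES.items():
        if pattern in text.lower():
            raise BlockedError(rule_id)      # step 3: threat detected, raise

def guarded_call(forward, messages):
    for message in messages:
        scan(message["content"])             # step 2: scan before the call
    return forward(messages)                 # step 4: forward only if safe

# step 5: a safe request returns normally
reply = guarded_call(lambda msgs: "ok", [{"role": "user", "content": "Hi"}])
```

The key property is ordering: the scan runs before `forward`, so a blocked request never reaches the upstream API.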

Benefits

Save Money

Threats blocked before API call - no wasted tokens

Zero Code Changes

Just change the import statement

Full Compatibility

All OpenAI features work normally

Automatic Protection

Every request scanned automatically

Error Handling

error_handling.py
import logging

from raxe import RaxeOpenAI, RaxeBlockedError, RaxeException

logger = logging.getLogger(__name__)
client = RaxeOpenAI(api_key="sk-...")

def safe_chat(user_input: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": user_input}]
        )
        return response.choices[0].message.content

    except RaxeBlockedError as e:
        # Threat was detected and blocked before the API call
        print(f"Blocked: {e.severity}")
        print(f"Rule: {e.rule_id}")
        return "Your request was blocked for security reasons."

    except RaxeException as e:
        # Other RAXE errors (config, initialization)
        logger.error(f"RAXE error: {e}")
        # Decide: fail open or fail closed
        raise  # fail closed by default; remove to fail open

Configuration

config.py
from raxe import RaxeOpenAI

client = RaxeOpenAI(
    api_key="sk-...",

    # RAXE configuration
    raxe_l1_enabled=True,        # Enable rule-based detection (515+ patterns)
    raxe_l2_enabled=True,        # Enable ML detection (neural classifier)
    raxe_block_on_threat=True,   # Raise RaxeBlockedError on threat detection
)

Streaming Support

streaming.py
from raxe import RaxeOpenAI

client = RaxeOpenAI(api_key="sk-...")

# Streaming works normally - prompt scanned before stream starts
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True  # Full OpenAI streaming support
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
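The "scanned before the stream starts" guarantee can be illustrated with a toy generator wrapper. This is an assumption-laden sketch (the check, `guarded_stream`, and the eager-validation structure are all illustrative, not RAXE's real logic):

```python
# Sketch: validate the prompt eagerly, so a blocked request never emits
# a single chunk. The validation runs at call time, not on first next().
def guarded_stream(prompt: str, chunks):
    if "ignore previous" in prompt.lower():   # toy threat check
        raise ValueError("blocked before streaming")

    def _stream():
        yield from chunks                     # safe: stream as normal
    return _stream()

collected = "".join(guarded_stream("Tell me a story", ["Once ", "upon"]))
```

Note the eager check outside the generator body: a plain generator would defer the scan until the first chunk is consumed, which would defeat the point.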

All Messages Scanned

The wrapper scans all messages in the conversation:
multi_turn.py
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are helpful"},  # Scanned
        {"role": "user", "content": "What is AI?"},        # Scanned
        {"role": "assistant", "content": "AI is..."},      # Scanned
        {"role": "user", "content": "Tell me more"}        # Scanned
    ]
)
# All messages combined and scanned for threats
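The combine-then-scan idea can be sketched in a few lines. `combine_messages` is a hypothetical helper, not a RAXE function; the point is that joining the conversation lets one scanning pass see threats split across turns:

```python
# Illustrative sketch: concatenate every message's content so a single
# scan sees the whole conversation, not just the latest turn.
def combine_messages(messages):
    return "\n".join(m["content"] for m in messages)

combined = combine_messages([
    {"role": "system", "content": "You are helpful"},
    {"role": "user", "content": "What is AI?"},
])
# combined now holds both messages, one per line
```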

Migration Guide

migration.py
# Before - standard OpenAI client
from openai import OpenAI
client = OpenAI(api_key="sk-...")

# After - one import change, full protection
from raxe import RaxeOpenAI
client = RaxeOpenAI(api_key="sk-...")

# Everything else stays the same - full API compatibility!

Async Support

async.py
from raxe import AsyncRaxeOpenAI

# Async client for high-throughput applications
client = AsyncRaxeOpenAI(api_key="sk-...")

response = await client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
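For high-throughput use, the async client pairs naturally with `asyncio.gather`. The sketch below assumes nothing about RAXE: `fake_create` is a stand-in for `client.chat.completions.create`, used so the concurrency pattern is self-contained.

```python
import asyncio

# Hedged sketch of issuing several guarded requests concurrently.
async def fake_create(prompt: str) -> str:
    await asyncio.sleep(0)            # placeholder for network I/O
    return f"reply to: {prompt}"

async def run_batch(prompts):
    # Each request would be scanned independently before its own API call.
    return await asyncio.gather(*(fake_create(p) for p in prompts))

results = asyncio.run(run_batch(["What is AI?", "Hello"]))
```

With the real client you would swap `fake_create` for `client.chat.completions.create(...)`; `gather` preserves input order in its results.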

What’s Next