ScanPipelineResult
The result returned byraxe.scan(). Contains all detection results from L1 (rules) and L2 (ML) layers.
Properties
| Property | Type | Description |
|---|---|---|
has_threats | bool | True if any threats detected |
severity | str | None | Highest severity: “critical”, “high”, “medium”, “low”, “info” |
total_detections | int | Total threats across L1 and L2 |
detections | list[Detection] | All L1 Detection objects |
duration_ms | float | Total scan duration in milliseconds |
should_block | bool | True if policy decision is to block |
l1_detections | int | Count of L1 rule detections |
l2_detections | int | Count of L2 ML predictions |
l1_duration_ms | float | L1 processing time |
l2_duration_ms | float | L2 processing time |
text_hash | str | SHA-256 hash of scanned text |
policy_decision | BlockAction | Policy action: ALLOW, WARN, BLOCK |
metadata | dict | Additional metadata (see below) |
action_taken | str | Action taken: “allow” or “block” |
Metadata (Multi-Tenant)
When scanning withtenant_id/app_id, metadata includes policy attribution:
| Key | Type | Description |
|---|---|---|
effective_policy_id | str | Which policy was applied |
effective_policy_mode | str | Policy mode: “monitor”, “balanced”, “strict” |
resolution_source | str | Source: “request”, “app”, “tenant”, “system_default” |
tenant_id | str | Tenant ID used |
app_id | str | App ID used |
event_id | str | Unique event ID for audit |
Boolean Evaluation
The result evaluates toTrue when safe (no threats):
Example
Multi-Tenant Example
Detection
A single threat detection from L1 rules.Properties
| Property | Type | Description |
|---|---|---|
rule_id | str | Rule identifier (e.g., “pi-001”) |
rule_version | str | Version of the matching rule |
severity | Severity | Severity enum level |
confidence | float | Confidence score (0.0-1.0) |
category | str | Threat category (e.g., “prompt_injection”) |
matches | list[Match] | Pattern matches that triggered detection |
message | str | Human-readable detection message |
explanation | str | None | Optional detailed explanation |
risk_explanation | str | Why this pattern is dangerous |
remediation_advice | str | How to fix/mitigate the threat |
detection_layer | str | Detection source: “L1”, “L2”, or “PLUGIN” |
layer_latency_ms | float | Time taken by this detection layer |
is_flagged | bool | True if matched by FLAG suppression |
suppression_reason | str | None | Reason if flagged by suppression |
Computed Properties
| Property | Type | Description |
|---|---|---|
match_count | int | Number of pattern matches |
threat_summary | str | Summary like “CRITICAL: pi-001 (confidence: 0.95)“ |
versioned_rule_id | str | Format “pi-001@1.0.0” |
Example
Severity
Enumeration of threat severity levels.Values
| Value | String | Description |
|---|---|---|
Severity.CRITICAL | ”critical” | Immediate threat, block |
Severity.HIGH | ”high” | Serious threat, block or flag |
Severity.MEDIUM | ”medium” | Moderate threat, flag |
Severity.LOW | ”low” | Minor concern, log |
Severity.INFO | ”info” | Informational only |
Comparison
Severities are comparable by their risk level:String Access
Get the string value from the enum:Filtering Results
By Severity
By Confidence
By Category
By Layer
Policy Actions
Theshould_block property and policy_decision reflect the configured policy:
