POST /v1/harm/detect
Detect potentially harmful content in text.
Endpoint
POST https://api.glyphnet.io/v1/harm/detect
Authentication
Requires API key in X-API-Key header.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Text to analyze (max 10,000 characters) |
| categories | array | No | Specific categories to check |
| threshold | string | No | low, medium, or high. Default: medium |
Available Categories
- self_harm - Self-harm or suicide-related content
- violence - Violent or threatening content
- hate_speech - Discriminatory or hateful content
- illegal - Instructions for illegal activities
- explicit - Sexually explicit content
- misinformation - Known false claims
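To restrict detection to particular categories, pass their names in the categories array. A minimal sketch based on the request-body fields above:

```python
import requests

# Check only for violence and hate_speech; the other categories from the
# list above are simply left out of the request.
response = requests.post(
    "https://api.glyphnet.io/v1/harm/detect",
    headers={
        "X-API-Key": "gn_live_your_key_here",
        "Content-Type": "application/json",
    },
    json={
        "text": "Your text to analyze here...",
        "categories": ["violence", "hate_speech"],
        "threshold": "medium",
    },
)
print(response.json())
```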
Example Request
```python
import requests

response = requests.post(
    "https://api.glyphnet.io/v1/harm/detect",
    headers={
        "X-API-Key": "gn_live_your_key_here",
        "Content-Type": "application/json"
    },
    json={
        "text": "Your text to analyze here...",
        "threshold": "medium"
    }
)

result = response.json()
if result["harmful"]:
    print(f"Harmful content detected: {result['categories']}")
else:
    print("Content is safe")
```
Response
Safe Content
```json
{
  "request_id": "req_harm_abc123",
  "harmful": false,
  "categories": [],
  "confidence": 0.98,
  "processing_time_ms": 23
}
```
Harmful Content Detected
```json
{
  "request_id": "req_harm_def456",
  "harmful": true,
  "categories": [
    {
      "category": "violence",
      "level": "high",
      "confidence": 0.92,
      "matched_phrases": ["..."],
      "recommendation": "This content contains violent themes and should be blocked."
    }
  ],
  "confidence": 0.92,
  "recommendation": "Block this content from being displayed.",
  "processing_time_ms": 31
}
```
Response Fields
| Field | Type | Description |
|---|---|---|
| request_id | string | Unique request identifier |
| harmful | boolean | True if harmful content detected |
| categories | array | Detected harm categories |
| confidence | number | Overall confidence (0.0-1.0) |
| recommendation | string | Suggested action |
| processing_time_ms | number | Processing time in milliseconds |
Category Object
| Field | Type | Description |
|---|---|---|
| category | string | Category name |
| level | string | low, medium, high, critical |
| confidence | number | Confidence for this category |
| matched_phrases | array | Phrases that triggered detection |
| recommendation | string | Category-specific recommendation |
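Each entry in the categories array carries its own level, confidence, and matched phrases. A short sketch that walks the list, assuming result holds a parsed response like the one shown above:

```python
# Inspect every detected category in a parsed /v1/harm/detect response.
for category in result.get("categories", []):
    print(f"{category['category']}: level={category['level']}, "
          f"confidence={category['confidence']:.2f}")
    for phrase in category.get("matched_phrases", []):
        print(f"  matched phrase: {phrase}")
    print(f"  recommendation: {category['recommendation']}")
```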
Threshold Levels
| Threshold | Description | Use Case |
|---|---|---|
| low | Very sensitive, catches more | User-facing content |
| medium | Balanced detection | General use |
| high | Less sensitive, fewer false positives | Internal tools |
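One way to apply this table is to derive the threshold from where the text will be shown. A minimal sketch; the context names are illustrative, not part of the API:

```python
import os
import requests

# Hypothetical mapping from display context to the threshold suggested above.
THRESHOLD_BY_CONTEXT = {
    "user_facing": "low",   # very sensitive, catches more
    "general": "medium",    # balanced detection
    "internal": "high",     # fewer false positives
}

def detect(text: str, context: str = "general") -> dict:
    """Call /v1/harm/detect with a threshold chosen from the context."""
    response = requests.post(
        "https://api.glyphnet.io/v1/harm/detect",
        headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
        json={"text": text, "threshold": THRESHOLD_BY_CONTEXT.get(context, "medium")},
    )
    return response.json()
```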
Harm Levels
| Level | Description | Recommended Action |
|---|---|---|
| low | Minor concern | Log and monitor |
| medium | Moderate concern | Flag for review |
| high | Serious concern | Block content |
| critical | Immediate risk | Block and alert |
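These recommendations can be encoded as a simple lookup, which the integration example below builds on. A sketch with illustrative action names:

```python
# Map each harm level to the recommended action from the table above.
# The action identifiers are illustrative; connect them to your own hooks.
ACTION_BY_LEVEL = {
    "low": "log_and_monitor",
    "medium": "flag_for_review",
    "high": "block_content",
    "critical": "block_and_alert",
}

def action_for(category: dict) -> str:
    """Return the recommended action for one detected category object."""
    return ACTION_BY_LEVEL.get(category["level"], "flag_for_review")
```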
Integration Example
```python
import os
import requests

def moderate_content(text: str) -> dict:
    """Check content before displaying to users."""
    response = requests.post(
        "https://api.glyphnet.io/v1/harm/detect",
        headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
        json={"text": text, "threshold": "low"}
    )
    result = response.json()

    if result["harmful"]:
        # Find the highest-severity category
        max_level = max(
            result["categories"],
            key=lambda c: {"low": 1, "medium": 2, "high": 3, "critical": 4}[c["level"]]
        )
        if max_level["level"] in ["high", "critical"]:
            return {"allowed": False, "reason": result["recommendation"]}
        else:
            return {"allowed": True, "flagged": True, "warning": result["recommendation"]}

    return {"allowed": True, "flagged": False}
```
Error Responses
Same error format as /v1/verify. See Error Codes.
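Before reading the result, check the HTTP status. The sketch below surfaces the raw error body rather than assuming specific error fields, since those are documented with /v1/verify:

```python
import os
import requests

response = requests.post(
    "https://api.glyphnet.io/v1/harm/detect",
    headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
    json={"text": "Your text to analyze here..."},
)

# Non-2xx responses use the shared error format (see Error Codes); this
# sketch relies only on the status code and passes the body through unparsed.
if not response.ok:
    raise RuntimeError(
        f"harm/detect failed ({response.status_code}): {response.text}"
    )

result = response.json()
```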