API Reference
POST /v1/harm/detect

Detect potentially harmful content in text.

Endpoint

POST https://api.glyphnet.io/v1/harm/detect

Authentication

Requires an API key in the X-API-Key header.

Request Body

Field       Type    Required  Description
text        string  Yes       Text to analyze (max 10,000 characters)
categories  array   No        Specific categories to check
threshold   string  No        low, medium, or high. Default: medium
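
For example, a request body that checks only selected categories (values listed under Available Categories below) at the default threshold:

{
  "text": "Your text to analyze here...",
  "categories": ["self_harm", "violence"],
  "threshold": "medium"
}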

Available Categories

  • self_harm - Self-harm or suicide-related content
  • violence - Violent or threatening content
  • hate_speech - Discriminatory or hateful content
  • illegal - Instructions for illegal activities
  • explicit - Sexually explicit content
  • misinformation - Known false claims

Example Request

import requests
 
response = requests.post(
    "https://api.glyphnet.io/v1/harm/detect",
    headers={
        "X-API-Key": "gn_live_your_key_here",
        "Content-Type": "application/json"
    },
    json={
        "text": "Your text to analyze here...",
        "threshold": "medium"
    }
)
 
result = response.json()
if result["harmful"]:
    print(f"Harmful content detected: {result['categories']}")
else:
    print("Content is safe")

Response

Safe Content

{
  "request_id": "req_harm_abc123",
  "harmful": false,
  "categories": [],
  "confidence": 0.98,
  "processing_time_ms": 23
}

Harmful Content Detected

{
  "request_id": "req_harm_def456",
  "harmful": true,
  "categories": [
    {
      "category": "violence",
      "level": "high",
      "confidence": 0.92,
      "matched_phrases": ["..."],
      "recommendation": "This content contains violent themes and should be blocked."
    }
  ],
  "confidence": 0.92,
  "recommendation": "Block this content from being displayed.",
  "processing_time_ms": 31
}

Response Fields

Field               Type     Description
request_id          string   Unique request identifier
harmful             boolean  true if harmful content was detected
categories          array    Detected harm categories
confidence          number   Overall confidence (0.0-1.0)
recommendation      string   Suggested action
processing_time_ms  number   Processing time in milliseconds
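
Note that recommendation appears only when harmful is true (compare the two example responses above), so it is safest to read it with a fallback. A minimal sketch:

def suggested_action(result: dict) -> str:
    # "recommendation" is omitted for safe content, so fall back to allowing it
    return result.get("recommendation", "Allow this content.")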

Category Object

Field            Type    Description
category         string  Category name
level            string  low, medium, high, or critical
confidence       number  Confidence for this category (0.0-1.0)
matched_phrases  array   Phrases that triggered detection
recommendation   string  Category-specific recommendation
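
To inspect what was detected, each category object can be walked directly; a short sketch:

def print_detections(result: dict) -> None:
    # Show each detected category with its severity, confidence, and triggers
    for cat in result.get("categories", []):
        print(f"{cat['category']} ({cat['level']}, confidence {cat['confidence']:.2f})")
        print(f"  matched: {cat['matched_phrases']}")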

Threshold Levels

Threshold  Description                            Use Case
low        Very sensitive; catches more           User-facing content
medium     Balanced detection                     General use
high       Less sensitive; fewer false positives  Internal tools
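
One way to pick a threshold is to run representative text through all three and compare results; a sketch (API key read from the environment):

import os

import requests

def compare_thresholds(text: str) -> dict:
    """Run the same text at each threshold and record whether it is flagged."""
    flagged = {}
    for threshold in ("low", "medium", "high"):
        response = requests.post(
            "https://api.glyphnet.io/v1/harm/detect",
            headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
            json={"text": text, "threshold": threshold},
        )
        flagged[threshold] = response.json()["harmful"]
    return flagged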

Harm Levels

Level     Description       Recommended Action
low       Minor concern     Log and monitor
medium    Moderate concern  Flag for review
high      Serious concern   Block content
critical  Immediate risk    Block and alert

Integration Example

import os

import requests

LEVEL_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def moderate_content(text: str) -> dict:
    """Check content before displaying to users."""
    response = requests.post(
        "https://api.glyphnet.io/v1/harm/detect",
        headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
        json={"text": text, "threshold": "low"}
    )

    result = response.json()

    if result["harmful"]:
        # Act on the highest-severity category that was detected
        worst = max(result["categories"], key=lambda c: LEVEL_RANK[c["level"]])

        if worst["level"] in ("high", "critical"):
            return {"allowed": False, "reason": result["recommendation"]}
        return {"allowed": True, "flagged": True, "warning": result["recommendation"]}

    return {"allowed": True, "flagged": False}

Error Responses

Errors use the same format as /v1/verify. See Error Codes.
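
A minimal handling sketch, assuming errors surface as non-2xx HTTP status codes (see Error Codes for the body format):

import os

import requests

def detect_harm(text: str) -> dict:
    response = requests.post(
        "https://api.glyphnet.io/v1/harm/detect",
        headers={"X-API-Key": os.environ["GLYPHNET_API_KEY"]},
        json={"text": text},
        timeout=10,
    )
    # Raise on 4xx/5xx instead of parsing an error body as a normal result
    response.raise_for_status()
    return response.json()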