Rate Limiting
Understanding and handling GlyphNet's rate limits.
Rate Limits by Plan
| Plan | Requests/Minute | Monthly Limit |
|---|---|---|
| Free | 60 | 1,000 |
| Starter | 200 | 50,000 |
| Professional | 500 | 500,000 |
| Enterprise | 2,000 | 10,000,000 |
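If you throttle on the client side, one option is to keep these limits as plain configuration. The constant below is purely illustrative (its name and structure are not part of any GlyphNet SDK); the numbers come from the table above.

```python
# Illustrative client-side view of the plan limits above.
# PLAN_LIMITS is a local constant, not part of the GlyphNet API or SDK.
PLAN_LIMITS = {
    "free":         {"requests_per_minute": 60,    "monthly_limit": 1_000},
    "starter":      {"requests_per_minute": 200,   "monthly_limit": 50_000},
    "professional": {"requests_per_minute": 500,   "monthly_limit": 500_000},
    "enterprise":   {"requests_per_minute": 2_000, "monthly_limit": 10_000_000},
}
```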
How Rate Limiting Works
Per-Minute Limits
Requests are counted in 60-second sliding windows. If you exceed your per-minute limit, the API returns a `429 Too Many Requests` response:
```json
{
  "error": "rate_limit_exceeded",
  "message": "You have exceeded your rate limit of 200 requests per minute.",
  "details": {
    "limit": 200,
    "retry_after": 45
  }
}
```
Monthly Limits
Total requests are counted per billing cycle. When exceeded:
```json
{
  "error": "quota_exceeded",
  "message": "You have exceeded your monthly verification limit.",
  "details": {
    "current_usage": 50000,
    "monthly_limit": 50000,
    "resets_at": "2024-02-01T00:00:00Z"
  }
}
```
Response Headers
Every response includes rate limit headers:
```
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 195
X-RateLimit-Reset: 1705765200
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Your per-minute limit |
| `X-RateLimit-Remaining` | Requests remaining this minute |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
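You can also use these headers to slow down before you ever hit a 429. The sketch below is a minimal example built on the `requests` library; the `API_KEY` placeholder and the pause-when-low threshold are illustrative choices, not prescribed values.

```python
import time

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; use your real key

def verify_throttled(text: str) -> dict:
    """Call /v1/verify, pausing when the per-minute budget is nearly exhausted."""
    response = requests.post(
        "https://api.glyphnet.io/v1/verify",
        headers={"X-API-Key": API_KEY},
        json={"text": text},
    )
    response.raise_for_status()

    remaining = int(response.headers.get("X-RateLimit-Remaining", "1"))
    reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))
    if remaining <= 1:
        # Sleep until the current window resets (threshold of 1 is arbitrary).
        time.sleep(max(0, reset_at - time.time()))
    return response.json()
```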
Handling Rate Limits
Python with Retry Logic
```python
import time

import requests


class GlyphNetClient:
    def __init__(self, api_key: str, max_retries: int = 3):
        self.api_key = api_key
        self.max_retries = max_retries
        self.base_url = "https://api.glyphnet.io"

    def verify(self, text: str, mode: str = "flagging") -> dict:
        for attempt in range(self.max_retries):
            response = requests.post(
                f"{self.base_url}/v1/verify",
                headers={"X-API-Key": self.api_key},
                json={"text": text, "mode": mode},
            )
            if response.status_code == 200:
                return response.json()
            if response.status_code == 429:
                # Honor the server-provided Retry-After header (seconds).
                retry_after = int(response.headers.get("Retry-After", 60))
                print(f"Rate limited. Waiting {retry_after}s...")
                time.sleep(retry_after)
                continue
            # Any other error is not retried here.
            response.raise_for_status()
        raise Exception("Max retries exceeded")
```
JavaScript with Exponential Backoff
```javascript
async function verifyWithRetry(text, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://api.glyphnet.io/v1/verify', {
      method: 'POST',
      headers: {
        'X-API-Key': API_KEY,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ text })
    });
    if (response.ok) {
      return response.json();
    }
    if (response.status === 429) {
      // Retry-After is in seconds; wait at least that long, and back off
      // exponentially across attempts.
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      const backoff = Math.max(retryAfter * 1000, Math.pow(2, attempt) * 1000);
      console.log(`Rate limited. Waiting ${backoff}ms...`);
      await new Promise(resolve => setTimeout(resolve, backoff));
      continue;
    }
    throw new Error(`API error: ${response.status}`);
  }
  throw new Error('Max retries exceeded');
}
```
Best Practices
1. Monitor Your Usage
Check remaining requests before heavy operations:
```python
import requests

def check_rate_limit():
    response = requests.get(
        "https://api.glyphnet.io/v1/usage",
        headers={"X-API-Key": API_KEY},
    )
    usage = response.json()
    rate = usage["rate_limit"]
    remaining_minute = rate["requests_per_minute"] - rate["current_minute_usage"]
    remaining_month = usage["remaining"]
    return {
        "minute": remaining_minute,
        "month": remaining_month,
    }
```
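For example, you might gate a batch job on the remaining monthly budget. The document list below is a placeholder for your own data:

```python
documents = ["claim one", "claim two", "claim three"]  # placeholder batch
limits = check_rate_limit()
if limits["month"] < len(documents):
    raise RuntimeError("Not enough monthly quota remaining for this batch job")
```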
2. Implement Request Queuing
For high-volume applications:
```python
import asyncio
import time
from collections import deque


class RateLimitedQueue:
    def __init__(self, requests_per_minute: int):
        self.rate = requests_per_minute
        self.queue = deque()
        self.last_request = 0.0

    def add(self, text: str):
        self.queue.append(text)

    async def process(self):
        # Async generator: yields results while pacing requests so no more
        # than `requests_per_minute` are sent per minute.
        while self.queue:
            elapsed = time.time() - self.last_request
            if elapsed < 60 / self.rate:
                await asyncio.sleep(60 / self.rate - elapsed)
            text = self.queue.popleft()
            self.last_request = time.time()
            result = await verify_async(text)  # your async wrapper around /v1/verify
            yield result
```
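Since process() is an async generator, consume it with async for. The sketch below assumes a verify_async coroutine like the one referenced above; the sample texts are placeholders:

```python
async def main():
    queue = RateLimitedQueue(requests_per_minute=200)
    for text in ["claim one", "claim two", "claim three"]:
        queue.add(text)
    async for result in queue.process():
        print(result)

asyncio.run(main())
```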
3. Batch When Possible
Instead of many small requests, batch text together:
```python
# Bad: many small requests
for sentence in sentences:
    result = client.verify(sentence)  # uses 1 request per sentence

# Good: one batched request
full_text = " ".join(sentences)
result = client.verify(full_text)  # uses 1 request for all sentences
```
4. Cache Results
Don't re-verify the same content:
```python
import hashlib
from functools import lru_cache


@lru_cache(maxsize=1000)
def cached_verify(text_hash: str, text: str) -> dict:
    # lru_cache keys on the (text_hash, text) tuple, so identical text
    # is only verified once per cache lifetime.
    return client.verify(text)


def verify_with_cache(text: str) -> dict:
    text_hash = hashlib.md5(text.encode()).hexdigest()
    return cached_verify(text_hash, text)
```
Upgrading Your Plan
If you regularly hit rate limits, consider upgrading:
- Go to glyphnet.io/dashboard/billing
- Click "Upgrade Plan"
- Select a higher tier
Or contact sales@glyphnet.io for custom limits.
Monitoring Alerts
Set up usage alerts to avoid surprises:
```python
def setup_usage_monitoring():
    # Run this periodically (e.g. hourly) from your scheduler of choice.
    # get_usage() and send_alert() are placeholders for your own usage
    # fetch and alerting helpers.
    usage = get_usage()
    if usage["percentage_used"] >= 80:
        send_alert(f"GlyphNet usage at {usage['percentage_used']}%")
    rate = usage["rate_limit"]
    if rate["current_minute_usage"] >= rate["requests_per_minute"] * 0.9:
        send_alert("Approaching per-minute rate limit")
```
Or use webhooks for automatic notifications.