KINETIC Score™ API Documentation
Technical reference for the Adaptive KINETIC Engine scoring system
API Endpoint: /api/score
Request Format
Method: POST
Content-Type: application/json
Required Parameters:
creatorId (string) - Unique creator identifier
horizonDays (number) - Forecast horizon (7, 14, or 30)
v (number) - Views count
q (number) - Quality score (0-100)
e (number) - Engagement rate
m (number) - Monetization score
s (number) - Share rate
features (object) - Additional context (baseline, etc.)
Example Request:
{
"creatorId": "creator_123",
"horizonDays": 14,
"v": 50000,
"q": 85,
"e": 0.045,
"m": 120.50,
"s": 0.012,
"features": { "baseline": 1.0 }
}
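For reference, the same request expressed as a TypeScript fetch call (a minimal sketch; the base URL and any authentication headers are deployment-specific assumptions):

// Minimal sketch of a client call to /api/score.
// BASE_URL is a placeholder for your deployment's origin.
const BASE_URL = "https://your-app.example.com";

async function requestKineticScore() {
  const res = await fetch(`${BASE_URL}/api/score`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      creatorId: "creator_123",
      horizonDays: 14,
      v: 50000,
      q: 85,
      e: 0.045,
      m: 120.5,
      s: 0.012,
      features: { baseline: 1.0 },
    }),
  });
  if (!res.ok) throw new Error(`Scoring request failed: ${res.status}`);
  return res.json();
}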
Response Format
Response Fields:
kineticScore (number) - Computed KINETIC score (0-100)
forecast (object) - Predictions for d7, d14, d30 with confidence intervals
drivers (array) - Top contributing factors ranked by weight
modelVersion (string) - Model identifier
usedFallback (boolean) - Whether fallback heuristics were used
fromCache (boolean) - Whether the response came from cache
Example Response:
{
"kineticScore": 78.5,
"forecast": {
"d7": 82.4,
"d14": 86.3,
"d30": 90.2,
"ciLow": 77.7,
"ciHigh": 95.0
},
"drivers": [
{ "name": "Views", "weight": 0.35 },
{ "name": "Quality", "weight": 0.25 },
{ "name": "Engagement", "weight": 0.20 }
],
"modelVersion": "adaptive-v2.1",
"usedFallback": false,
"fromCache": false
}
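Client code can type the response against the fields documented above; the interface below is a sketch derived from this page, not generated from the service's source:

// Sketch of the documented response shape.
interface KineticScoreResponse {
  kineticScore: number;                          // 0-100
  forecast: {
    d7: number;
    d14: number;
    d30: number;
    ciLow: number;                               // lower confidence bound
    ciHigh: number;                              // upper confidence bound
  };
  drivers: { name: string; weight: number }[];   // top contributing factors
  modelVersion: string;                          // e.g. "adaptive-v2.1" or "heuristic-v1"
  usedFallback: boolean;
  fromCache: boolean;
}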
Timeout & Fallback Behavior
The KINETIC scoring system uses a hybrid architecture that ensures 100% uptime through graceful degradation.
External Model Service
- Timeout: 3000ms (configurable via MODEL_TIMEOUT_MS)
- Endpoint: {MODEL_URL}/predict
- Success: Returns usedFallback: false
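One way to enforce this timeout is an AbortController around the model call. The sketch below assumes the MODEL_URL and MODEL_TIMEOUT_MS environment variables from the deployment checklist; it illustrates the behavior, not the service's actual implementation:

// Sketch: time-bounded call to {MODEL_URL}/predict.
// Returns null on timeout or error so the caller can apply the deterministic fallback.
async function callModelService(payload: unknown): Promise<unknown | null> {
  const timeoutMs = Number(process.env.MODEL_TIMEOUT_MS ?? 3000);
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${process.env.MODEL_URL}/predict`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
      signal: controller.signal,
    });
    return res.ok ? await res.json() : null;
  } catch {
    return null; // unreachable or timed out
  } finally {
    clearTimeout(timer);
  }
}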
Deterministic Fallback
When the model service is unreachable or times out, the system uses heuristic calculations:
Fallback Formula:
score_raw = 0.35*v + 0.25*q + 0.20*e + 0.15*m + 0.05*s
KINETIC = 100 * (1 / (1 + exp(-score_raw)))
Forecast:
d7 = 1.05 * baseline
d14 = 1.10 * baseline
d30 = 1.15 * baseline
ciLow = 0.9 * d14
ciHigh = 1.1 * d14
Returns usedFallback: true and modelVersion: "heuristic-v1"
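Expressed in TypeScript, the fallback looks roughly like the sketch below. It mirrors the documented formulas; whether inputs are normalized before scoring is an assumption, and this is not the service's source:

// Sketch of the documented heuristic fallback.
function heuristicFallback(v: number, q: number, e: number, m: number, s: number, baseline: number) {
  // Weighted sum of inputs (assumed to be normalized upstream), squashed to 0-100.
  const scoreRaw = 0.35 * v + 0.25 * q + 0.2 * e + 0.15 * m + 0.05 * s;
  const kineticScore = 100 * (1 / (1 + Math.exp(-scoreRaw)));

  // Forecast multipliers applied to the supplied baseline.
  const d7 = 1.05 * baseline;
  const d14 = 1.1 * baseline;
  const d30 = 1.15 * baseline;

  return {
    kineticScore,
    forecast: { d7, d14, d30, ciLow: 0.9 * d14, ciHigh: 1.1 * d14 },
    usedFallback: true,
    modelVersion: "heuristic-v1",
  };
}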
Guarantee: The endpoint always responds within 3.1 seconds, even if the model service is completely offline.
Metrics Settlement & Accuracy Tracking
Every prediction is logged to the model_metrics table for performance evaluation.
Prediction Logging
- When: Every call to /api/score
- Stored: creator_id, model_version, horizon_days, predicted_prob, baseline
- Predicted Probability: kineticScore / 100
Outcome Settlement
After the forecast horizon passes (7, 14, or 30 days), outcomes are recorded:
- Brier Score: Measures calibration accuracy (lower is better)
- Log Loss: Penalizes confident wrong predictions
- Matured At: Timestamp when outcome was recorded
Accuracy Formulas:
Brier Score = (predicted_prob - outcome)²
Log Loss = -[outcome*log(p) + (1-outcome)*log(1-p)]
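Both metrics are simple to compute once an outcome is recorded; in the sketch below, the epsilon clamp is an added safeguard against log(0) and is not part of the documented definition:

// Sketch: accuracy metrics for a settled prediction.
// p = predicted probability (kineticScore / 100), outcome = 0 or 1.
function brierScore(p: number, outcome: 0 | 1): number {
  return (p - outcome) ** 2;
}

function logLoss(p: number, outcome: 0 | 1): number {
  const eps = 1e-15;
  const clamped = Math.min(Math.max(p, eps), 1 - eps);
  return -(outcome * Math.log(clamped) + (1 - outcome) * Math.log(1 - clamped));
}

// Example: for kineticScore 78.5 (p = 0.785) and a positive outcome,
// brierScore(0.785, 1) ≈ 0.046 and logLoss(0.785, 1) ≈ 0.242.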
Where to View Accuracy
Model performance metrics are available in:
- Database: Query the model_metrics table directly
- Future Dashboard: Admin panel for model performance (coming soon)
- Aggregations: Group by model_version to compare heuristic vs. ML performance
Cohort Priors & K-Anonymity
To protect creator privacy and improve cold-start predictions, the system uses cohort-based priors with k-anonymous thresholds.
How Cohort Priors Work
- Cohort Definition: Creators are grouped by platform, follower tier, and content category
- Prior Calculation: Average KINETIC scores and growth rates from similar creators
- Bayesian Update: New creator predictions start with cohort prior, then adapt with data
- Confidence Weighting: More data = less reliance on prior
K-Anonymous Threshold
To prevent re-identification, cohorts must contain at least k=10 creators:
- Minimum Size: Cohorts with <10 creators are merged with broader categories
- Privacy Guarantee: No individual creator can be identified from cohort statistics
- Fallback: If no k-anonymous cohort exists, use platform-wide prior
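A sketch of how the threshold and confidence weighting might fit together is shown below; the specific blend formula (n / (n + k)) is an illustrative assumption, since this page only describes the behavior qualitatively:

// Sketch: cohort prior selection under k = 10 and confidence-weighted blending.
interface Cohort {
  size: number;        // number of creators in the cohort
  priorScore: number;  // average KINETIC score for the cohort
}

const K_ANONYMITY = 10;

function selectPrior(cohort: Cohort | null, platformWidePrior: number): number {
  // Cohorts below the k-anonymous threshold are not used directly.
  if (!cohort || cohort.size < K_ANONYMITY) return platformWidePrior;
  return cohort.priorScore;
}

function blendWithPrior(prior: number, observedMean: number, observations: number): number {
  const weight = observations / (observations + K_ANONYMITY); // more data -> less reliance on prior
  return weight * observedMean + (1 - weight) * prior;
}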
Note: Cohort priors are computed nightly via the /api/kinetic/retrain cron job.
Health Check & Model Version
Use the /api/health endpoint to monitor model service availability.
Health Endpoint
Endpoint: GET /api/health
Timeout: 1000ms
Always Returns: 200 OK (even if model is down)
Example Response (Model Online):
{
"modelReachable": true,
"version": "adaptive-v2.1"
}
Example Response (Model Offline):
{
"modelReachable": false,
"version": null
}
Monitoring Best Practices
- Frequency: Poll every 60 seconds
- Alert Threshold: Trigger an alert if modelReachable: false for more than 5 minutes
- Version Tracking: Log version changes to detect model updates
- Fallback Monitoring: Track the usedFallback rate in scoring responses
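A minimal polling loop implementing these practices might look like the following sketch (the base URL and alert hook are placeholders):

// Sketch: poll /api/health every 60 seconds, alert after 5 minutes of modelReachable: false.
const POLL_INTERVAL_MS = 60_000;
const ALERT_AFTER_MS = 5 * 60_000;

let unreachableSince: number | null = null;

async function pollHealth(baseUrl: string, sendAlert: (msg: string) => void) {
  const res = await fetch(`${baseUrl}/api/health`);
  const { modelReachable, version } = await res.json();

  if (modelReachable) {
    unreachableSince = null;
    console.log(`Model reachable, version=${version}`);
  } else {
    unreachableSince ??= Date.now();
    if (Date.now() - unreachableSince > ALERT_AFTER_MS) {
      sendAlert("KINETIC model service unreachable for more than 5 minutes");
    }
  }
}

// setInterval(() => pollHealth("https://your-app.example.com", console.error), POLL_INTERVAL_MS);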
Deployment Checklist
Complete these steps to verify your KINETIC deployment is production-ready:
1. Database Migrations
Run 002_add_insight_data_column.sql
Adds the insight_data JSONB column to ai_insights_log
Run 003_model_metrics.sql
Creates the model_metrics table for accuracy tracking
2. Environment Variables
MODEL_URL
URL of your ML model service (e.g., https://model.example.com)
MODEL_VERSION
Model identifier (e.g., adaptive-v2.1) - defaults to heuristic-v1
MODEL_TIMEOUT_MS
Timeout in milliseconds (default: 3000) - adjust based on model latency
KV_REST_API_URL
Upstash Redis URL for caching (auto-configured)
KV_REST_API_TOKEN
Upstash Redis token for caching (auto-configured)
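Server-side configuration can be read with the documented defaults; the sketch below is illustrative and may not match how the codebase actually loads these values:

// Sketch: environment variables with the documented defaults.
const config = {
  modelUrl: process.env.MODEL_URL,                               // ML model service base URL
  modelVersion: process.env.MODEL_VERSION ?? "heuristic-v1",     // documented default
  modelTimeoutMs: Number(process.env.MODEL_TIMEOUT_MS ?? 3000),  // documented default, in ms
  kvRestApiUrl: process.env.KV_REST_API_URL,                     // Upstash Redis (caching)
  kvRestApiToken: process.env.KV_REST_API_TOKEN,
};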
3. Cron Jobs
Social Account Sync (Daily 2 AM UTC)
Endpoint: /api/cron/sync-social-accounts
KINETIC Model Retrain (Daily 4 AM UTC)
Endpoint: /api/kinetic/retrain
4. Verification Tests
Test /api/health endpoint
Should return 200 with modelReachable status
Test /api/score with sample data
Verify response includes kineticScore, forecast, and drivers
Verify cache behavior
Second request within 5 minutes should return fromCache: true
Check model_metrics table
Confirm predictions are being logged after /api/score calls
Test fallback behavior
Set invalid MODEL_URL and verify usedFallback: true response
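The first few checks can be scripted; the sketch below assumes a Node 18+ runtime with global fetch, and the base URL is a placeholder:

// Sketch: post-deploy smoke test for /api/health and /api/score.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

async function smokeTest() {
  const health = await fetch(`${BASE_URL}/api/health`).then((r) => r.json());
  console.log("health:", health); // expect { modelReachable: boolean, version: string | null }

  const body = {
    creatorId: "creator_123",
    horizonDays: 14,
    v: 50000, q: 85, e: 0.045, m: 120.5, s: 0.012,
    features: { baseline: 1.0 },
  };
  const score = await fetch(`${BASE_URL}/api/score`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  }).then((r) => r.json());

  console.log("kineticScore:", score.kineticScore);
  console.log("usedFallback:", score.usedFallback);
  console.log("fromCache:", score.fromCache); // repeat within 5 minutes to verify caching
}

smokeTest().catch(console.error);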
Pro Tip: Run the seed script scripts/seed-insight.ts to populate test data for development.
Ready to Deploy?
Complete the checklist above and start leveraging the Adaptive KINETIC Engine in production.