API Documentation
How to use your deployed models for inference.
Authentication
All API requests must be authenticated using a Bearer Token in the `Authorization` header.
Your API Key
For this demo, a static API key is used. In a production environment, you should generate unique keys for each user.
your-super-secret-api-key
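For example, an authenticated request includes the key in the header exactly as described above:

Authorization: Bearer your-super-secret-api-key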
Making an Inference Request
To get a prediction, send a `POST` request to your model's unique endpoint URL. You can find this URL on your dashboard.
Endpoint Structure:
POST /api/models/{MODEL_ID}

Request Body:
The request body must be a JSON object containing an `inputData` key. The value of `inputData` should be the data your model expects for prediction.
{
  "inputData": {
    "features": [5.1, 3.5, 1.4, 0.2]
  }
}

Examples
Here are some examples of how to make a request using `curl` and Python's `requests` library. Remember to replace `{MODEL_ID}` with the actual ID of your model.
cURL
curl -X POST "https://your-app-url/api/models/{MODEL_ID}" \
-H "Authorization: Bearer your-super-secret-api-key" \
-H "Content-Type: application/json" \
-d '{
  "inputData": {
    "features": [5.1, 3.5, 1.4, 0.2]
  }
}'

Python
import requests

API_URL = "https://your-app-url/api/models/{MODEL_ID}"
API_KEY = "your-super-secret-api-key"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

payload = {
    "inputData": {
        "features": [5.1, 3.5, 1.4, 0.2]
    }
}

# Passing the payload via json= lets requests handle JSON serialization.
response = requests.post(API_URL, headers=headers, json=payload)

if response.status_code == 200:
    prediction = response.json()
    print("Success:", prediction)
else:
    print("Error:", response.status_code, response.text)