# Network Intel API

## Overview

The Datagram Network Intel API provides access to language models through a chat completions endpoint, letting you interact with AI models in a conversational format.

**Base URL:** `https://intel.api.datagram.network`
## Authentication

All API requests require authentication using an API key passed in the `Authorization` header:

```
Authorization: Bearer YOUR_API_KEY
```
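As a minimal sketch using only the Python standard library (the endpoint path and headers are taken from this document; `API_KEY` is a placeholder), an authenticated request can be built like this:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with your real key
BASE_URL = "https://intel.api.datagram.network"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request with the required headers."""
    return urllib.request.Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/api/v1/chat/completions",
                    {"model": "llama3.2:1b", "messages": []})
# Send with: urllib.request.urlopen(req)
```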
## Chat Completions

Creates a chat completion response for the given conversation.

**Endpoint:** `POST /api/v1/chat/completions`

**URL:** `https://intel.api.datagram.network/api/v1/chat/completions`
### Headers

| Header | Required | Description | Example |
| --- | --- | --- | --- |
| `Authorization` | Yes | Bearer token for API authentication | `Bearer sk_live_abc123...xyz456` |
| `Content-Type` | Yes | Must be `application/json` | `application/json; charset=utf-8` |
| `sec-ch-ua-platform` | No | Client platform information | `"Windows"` |
### Request Body

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `model` | string | Yes | ID of the model to use (e.g., `llama3.2:1b`) | `"llama3.2:1b"` |
| `messages` | array | Yes | Array of message objects representing the conversation | `[{"role": "user", "content": [{"type": "text", "text": "Explain quantum computing"}]}]` |
| `stream` | boolean | No | Whether to stream back partial progress (default: `false`) | `true` |
### Message Object

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `role` | string | Yes | The role of the message author (`system`, `user`, or `assistant`) | `"user"` |
| `content` | array | Yes | Array of content objects containing the message text | `[{"type": "text", "text": "Explain AI safety"}]` |
### Content Object

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `type` | string | Yes | Type of content (e.g., `"text"`) | `"text"` |
| `text` | string | Yes | The actual text content | `"Explain quantum entanglement"` |
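These objects nest: each message wraps its text in a content array. A small helper (hypothetical, simply mirroring the schema above) keeps payload construction consistent:

```python
def text_message(role: str, text: str) -> dict:
    """Build a message object with a single text content entry."""
    assert role in ("system", "user", "assistant"), f"unsupported role: {role}"
    return {"role": role, "content": [{"type": "text", "text": text}]}

payload = {
    "model": "llama3.2:1b",
    "messages": [text_message("user", "Explain quantum entanglement")],
    "stream": False,
}
```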
## Examples

### Basic Chat Completion

```bash
curl 'https://intel.api.datagram.network/api/v1/chat/completions' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "model": "llama3.2:1b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is the capital of France?"
          }
        ]
      }
    ],
    "stream": false
  }'
```
### Chat with System Message

```bash
curl 'https://intel.api.datagram.network/api/v1/chat/completions' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "model": "llama3.2:1b",
    "messages": [
      {
        "role": "system",
        "content": [
          {
            "type": "text",
            "text": "You are a helpful assistant that speaks like a pirate."
          }
        ]
      },
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Tell me about the weather"
          }
        ]
      }
    ],
    "stream": false
  }'
```
### Streaming Response

```bash
curl 'https://intel.api.datagram.network/api/v1/chat/completions' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "model": "llama3.2:1b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Write a short story about a robot"
          }
        ]
      }
    ],
    "stream": true
  }'
```
### Multi-turn Conversation

```bash
curl 'https://intel.api.datagram.network/api/v1/chat/completions' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-raw '{
    "model": "llama3.2:1b",
    "messages": [
      {
        "role": "system",
        "content": [
          {
            "type": "text",
            "text": "You are a helpful coding assistant."
          }
        ]
      },
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "How do I create a Python function?"
          }
        ]
      },
      {
        "role": "assistant",
        "content": [
          {
            "type": "text",
            "text": "To create a Python function, use the def keyword followed by the function name and parameters in parentheses."
          }
        ]
      },
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Can you show me an example?"
          }
        ]
      }
    ],
    "stream": false
  }'
```
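Multi-turn context is client-managed: each request resends the full history, including prior assistant replies. A minimal sketch of maintaining that history (reusing the message shape this API documents):

```python
def add_turn(history: list, role: str, text: str) -> list:
    """Append one turn to the conversation history, in the API's message format."""
    history.append({"role": role, "content": [{"type": "text", "text": text}]})
    return history

history = []
add_turn(history, "system", "You are a helpful coding assistant.")
add_turn(history, "user", "How do I create a Python function?")
# After receiving the assistant's reply from the API, store it too,
# so the next request carries the full context:
add_turn(history, "assistant", "Use the def keyword followed by the function name.")
add_turn(history, "user", "Can you show me an example?")

payload = {"model": "llama3.2:1b", "messages": history, "stream": False}
```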
## Response Format

### Non-streaming Response

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "llama3.2:1b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
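Assuming the response shape shown here, the assistant's reply and token usage can be extracted like this (a sketch; the field names come from the example response):

```python
import json

# Example response body, verbatim from the documentation
raw = '''{"id": "chatcmpl-123", "object": "chat.completion", "created": 1677652288,
"model": "llama3.2:1b",
"choices": [{"index": 0,
             "message": {"role": "assistant", "content": "The capital of France is Paris."},
             "finish_reason": "stop"}],
"usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21}}'''

response = json.loads(raw)
reply = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
# reply == "The capital of France is Paris.", total_tokens == 21
```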
### Streaming Response

When `"stream": true` is set, the server sends data in Server-Sent Events (SSE) format:

```
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"llama3.2:1b","choices":[{"index":0,"delta":{"role":"assistant","content":"The"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"llama3.2:1b","choices":[{"index":0,"delta":{"content":" capital"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"llama3.2:1b","choices":[{"index":0,"delta":{"content":" of"},"finish_reason":null}]}

data: [DONE]
```
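A sketch of client-side parsing, assuming each chunk matches the shape above (only the `choices[0].delta` field is needed to reassemble the text):

```python
import json

def extract_deltas(sse_lines):
    """Yield content fragments from SSE 'data:' lines until [DONE]."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # the first chunk may carry only the role
            yield delta["content"]

stream = [
    'data: {"choices":[{"index":0,"delta":{"role":"assistant","content":"The"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":" capital"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":" of"},"finish_reason":null}]}',
    'data: [DONE]',
]
text = "".join(extract_deltas(stream))
# text == "The capital of"
```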
## Available Models

Based on the examples, the following model is available:

- `llama3.2:1b` - Llama 3.2 1B parameter model

Note: Contact the API provider for a complete list of available models.
## Error Handling

The API returns standard HTTP status codes:

- `200` - Success
- `400` - Bad Request (invalid parameters)
- `401` - Unauthorized (invalid API key)
- `429` - Rate Limit Exceeded
- `500` - Internal Server Error
### Example Error Response

```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
```
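Assuming the error body shown here, a client can turn non-2xx responses into a readable exception (a sketch; `ApiError` is a hypothetical helper, not part of the API):

```python
import json

class ApiError(Exception):
    """Client-side wrapper for an API error response."""
    def __init__(self, status: int, message: str, code: str):
        super().__init__(f"HTTP {status}: {message} ({code})")
        self.status = status
        self.code = code

def raise_for_error(status: int, body: str):
    """Raise ApiError for non-2xx responses, using the error body when present."""
    if 200 <= status < 300:
        return
    try:
        err = json.loads(body)["error"]
    except (ValueError, KeyError):
        raise ApiError(status, body or "unknown error", "unknown")
    raise ApiError(status, err["message"], err["code"])

try:
    raise_for_error(401, '{"error": {"message": "Invalid API key provided", '
                         '"type": "invalid_request_error", "code": "invalid_api_key"}}')
except ApiError as e:
    print(e)  # HTTP 401: Invalid API key provided (invalid_api_key)
```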
## Rate Limits

Rate limits may apply depending on your API plan. Check the response headers for rate-limit information:

- `X-RateLimit-Limit` - Maximum requests per time window
- `X-RateLimit-Remaining` - Remaining requests in the current window
- `X-RateLimit-Reset` - Time when the limit resets
## Best Practices

- **Include System Messages**: Use a system message to define the assistant's behavior and tone.
- **Handle Streaming**: When using `stream: true`, handle interrupted or malformed event streams gracefully.
- **Token Management**: Monitor the `usage` field in responses to track token consumption.
- **Conversation Context**: Resend the full message history on each request to maintain context.
- **Error Handling**: Handle failures gracefully, retrying where appropriate on `429` and `500` responses.