Network Intel API
Overview
The Datagram Network Intel API provides access to language models through a chat completions endpoint. This API allows you to interact with various AI models in a conversational format.
Base URL: https://intel.api.datagram.network
Authentication
All API requests require authentication using an API key passed in the Authorization header.
Authorization: Bearer YOUR_API_KEY
Chat Completions
Endpoint
POST /api/v1/chat/completions
URL: https://intel.api.datagram.network/api/v1/chat/completions
Creates a chat completion response for the given conversation.
Headers
Header | Required | Description | Example
Authorization | Yes | Bearer token for API authentication | Bearer sk_live_abc123...xyz456
Content-Type | Yes | Must be application/json | application/json; charset=utf-8
sec-ch-ua-platform | No | Client platform information | "Windows"
Request Body
Parameter | Type | Required | Description | Example
model | string | Yes | ID of the model to use (e.g., "llama3.2:1b") | "llama3.2:1b"
messages | array | Yes | Array of message objects representing the conversation | [{"role": "user", "content": [{"type": "text", "text": "Explain quantum computing"}]}]
stream | boolean | No | Whether to stream back partial progress (default: false) | true
Message Object
Field | Type | Required | Description | Example
role | string | Yes | The role of the message author (system, user, assistant) | "user"
content | array | Yes | Array of content objects containing the message text | [{"type": "text", "text": "Explain AI safety"}]
Content Object
Field | Type | Required | Description | Example
type | string | Yes | Type of content (e.g., "text") | "text"
text | string | Yes | The actual text content | "Explain quantum entanglement"
Examples
Basic Chat Completion
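The original code sample is not preserved on this page; the sketch below is a minimal Python example using the third-party requests library, with the endpoint, headers, and body fields taken from the schema documented above:

```python
import requests  # third-party: pip install requests

URL = "https://intel.api.datagram.network/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # replace with your API key
    "Content-Type": "application/json",
}

payload = {
    "model": "llama3.2:1b",
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Explain quantum computing"}],
        }
    ],
}

response = requests.post(URL, headers=headers, json=payload, timeout=30)
response.raise_for_status()  # raise on 4xx/5xx status codes
print(response.json())       # response body is described under Response Format
```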
Chat with System Message
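A sketch of the same request with a system message prepended to steer the assistant's behavior (the system role is listed in the Message Object table above):

```python
import requests

URL = "https://intel.api.datagram.network/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

payload = {
    "model": "llama3.2:1b",
    "messages": [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a concise technical assistant."}],
        },
        {
            "role": "user",
            "content": [{"type": "text", "text": "Explain AI safety"}],
        },
    ],
}

print(requests.post(URL, headers=headers, json=payload, timeout=30).json())
```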
Streaming Response
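A streaming sketch: it sets stream to true and reads the Server-Sent Events line by line as they arrive. Because the chunk schema is not reproduced on this page, the example only prints each raw event line:

```python
import requests

URL = "https://intel.api.datagram.network/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

payload = {
    "model": "llama3.2:1b",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Explain quantum computing"}]}
    ],
    "stream": True,
}

with requests.post(URL, headers=headers, json=payload, stream=True, timeout=60) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if line:  # SSE uses blank lines as event separators; skip them
            print(line)
```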
Multi-turn Conversation
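A multi-turn sketch: the full history, including the assistant's previous reply, is resent with each request so the model keeps the conversation context (the earlier assistant text shown here is only a placeholder):

```python
import requests

URL = "https://intel.api.datagram.network/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

messages = [
    {"role": "user", "content": [{"type": "text", "text": "What is quantum entanglement?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Quantum entanglement is a correlation between particles such that..."}]},
    {"role": "user", "content": [{"type": "text", "text": "How is it used in quantum computing?"}]},
]

payload = {"model": "llama3.2:1b", "messages": messages}
print(requests.post(URL, headers=headers, json=payload, timeout=30).json())
```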
Response Format
Non-streaming Response
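The original response example is not reproduced on this page. The body below is a hypothetical illustration assuming an OpenAI-compatible chat completions schema; confirm the actual fields with the API provider:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1730000000,
  "model": "llama3.2:1b",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Quantum computing uses qubits..." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 12, "completion_tokens": 34, "total_tokens": 46 }
}
```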
Streaming Response
When stream: true is set, the server sends data in Server-Sent Events (SSE) format:
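A hypothetical event stream, again assuming an OpenAI-style chunk format that this page does not confirm; each data: line carries a JSON delta and the stream ends with a terminator event:

```
data: {"id": "chatcmpl-abc123", "choices": [{"index": 0, "delta": {"content": "Quantum"}}]}

data: {"id": "chatcmpl-abc123", "choices": [{"index": 0, "delta": {"content": " computing"}}]}

data: [DONE]
```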
Available Models
Based on the examples above, the following model is known to be available:
llama3.2:1b - Llama 3.2 1B parameter model
Note: Contact the API provider for a complete list of available models.
Error Handling
The API returns standard HTTP status codes:
200 - Success
400 - Bad Request (invalid parameters)
401 - Unauthorized (invalid API key)
429 - Rate Limit Exceeded
500 - Internal Server Error
Example Error Response
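The original error body is not preserved here; a hypothetical response matching the status codes above might look like this (the field names are an assumption, not confirmed by the provider):

```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": 401
  }
}
```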
Rate Limits
Rate limits may apply depending on your API plan. Check the response headers for rate limit info:
X-RateLimit-Limit - Maximum requests per time window
X-RateLimit-Remaining - Remaining requests in the current window
X-RateLimit-Reset - Time when the limit resets
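A short sketch for reading these headers in Python, assuming the header names above are returned verbatim on each response:

```python
import requests

response = requests.post(
    "https://intel.api.datagram.network/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json={
        "model": "llama3.2:1b",
        "messages": [{"role": "user", "content": [{"type": "text", "text": "Hello"}]}],
    },
    timeout=30,
)

remaining = response.headers.get("X-RateLimit-Remaining")
reset = response.headers.get("X-RateLimit-Reset")
if remaining is not None:
    print(f"Requests left in this window: {remaining} (resets at {reset})")
```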
Best Practices
Include System Messages: Use a system message to define the assistant's behavior and tone before the first user turn.
Handle Streaming: When stream is true, process SSE events incrementally and handle dropped connections and partial chunks.
Token Management: Keep prompts concise and monitor usage to stay within model and plan limits.
Conversation Context: Resend the full message history with each request so the model retains context.
Error Handling: Check HTTP status codes, back off and retry on 429, and surface other failures gracefully.