# Chat API Tutorial

A developer-focused guide to the WesenAI Chat API, covering the OpenAI-compatible endpoint and advanced session management.
Our Chat API is designed as a drop-in replacement for OpenAI's Chat Completions API, making integration seamless for developers already familiar with that ecosystem. This tutorial covers the primary workflow for creating both stateless and stateful chat completions.
## Core Concept: OpenAI Compatibility
The simplest and most direct way to use our Chat API is the `/v1/chat/completions` endpoint. It accepts the same request schema as OpenAI's API and returns responses in the same format, which lets you leverage existing libraries such as `openai-python` and `openai-node`.
### API Reference

- **Base URL:** `https://chat.api.wesen.ai`
- **Authentication:** `Authorization: Bearer YOUR_API_KEY` or `X-API-Key: YOUR_API_KEY`
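As a sketch of the two authentication styles, the helper below builds either accepted header and wires it into a minimal completion request. The `auth_headers` and `send_hello` names are ours, not part of any SDK:

```python
import requests

BASE_URL = "https://chat.api.wesen.ai"  # from the API reference above

def auth_headers(api_key, bearer=True):
    # Either header form is accepted; pick one per request.
    if bearer:
        return {"Authorization": f"Bearer {api_key}"}
    return {"X-API-Key": api_key}

def send_hello(api_key):
    # A minimal completion request using whichever auth style you prefer.
    payload = {
        "model": "wesen-chat-v1",
        "messages": [{"role": "user", "content": "Hello!"}],
    }
    return requests.post(
        f"{BASE_URL}/v1/chat/completions",
        headers=auth_headers(api_key),
        json=payload,
        timeout=30,
    )
```

Both header forms hit the same endpoint; the Bearer form is the conventional choice for OpenAI-compatible clients.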
## Part 1: Stateless Chat Completions (Recommended)
This is the standard approach for most use cases. Your application manages the conversation history, sending the relevant list of messages with each new request.
`POST /v1/chat/completions`

Key Request Body Parameters (`OpenAIChatCompletionRequestDto`):

- `model` (string, required): The model ID to use (e.g., `wesen-chat-v1`).
- `messages` (array, required): A list of message objects, each with a `role` (`system`, `user`, or `assistant`) and `content`.
- `stream` (boolean, optional): Set to `true` to receive a stream of token-by-token updates.
- Other standard OpenAI parameters, such as `temperature` and `max_tokens`, are also supported.
### Python Example
```python
import openai

# Configure the OpenAI client to point to WesenAI's endpoint
client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://chat.api.wesen.ai/v1"
)

# Application manages the conversation history
messages = [
    {"role": "system", "content": "You are a helpful and concise Amharic assistant."},
    # "Hello! Can you briefly tell me how to make shiro wot?"
    {"role": "user", "content": "ሰላም! ስለ ሽሮ ወጥ አሰራር ባጭሩ ልትነግረኝ ትችላለህ?"}
]

try:
    completion = client.chat.completions.create(
        model="wesen-chat-v1",
        messages=messages
    )
    response_content = completion.choices[0].message.content
    print("Assistant:", response_content)

    # Add the assistant's response to the history for the next turn
    messages.append({"role": "assistant", "content": response_content})
except openai.APIError as e:
    print(f"An API error occurred: {e}")
```
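Because the endpoint is stateless, every turn must resend the accumulated history. One way to keep that bookkeeping in one place is a small helper that appends the user turn, fetches a reply, and records it. This is a sketch; `ask` and `make_completer` are our names, not part of `openai-python`:

```python
def ask(history, user_text, complete):
    # Append the user turn, obtain a reply via `complete(messages)`,
    # record it in the history, and return it.
    history.append({"role": "user", "content": user_text})
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def make_completer(client, model="wesen-chat-v1"):
    # Bind the helper to the WesenAI endpoint via an openai.OpenAI client.
    def complete(messages):
        out = client.chat.completions.create(model=model, messages=messages)
        return out.choices[0].message.content
    return complete
```

With `client` configured as above, `ask(messages, "...", make_completer(client))` handles one full turn; injecting `complete` also keeps the history logic easy to unit-test.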
### Streaming Responses

For real-time applications, setting `stream: true` is essential.
```python
# Continuing from the previous example...
print("\n--- Streaming Response ---")

try:
    stream = client.chat.completions.create(
        model="wesen-chat-v1",
        messages=messages,
        stream=True
    )
    full_response = ""
    for chunk in stream:
        # Some providers send a final chunk with an empty choices list
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta.content or ""
        full_response += delta
        print(delta, end="", flush=True)
    print("\n--- End of Stream ---")
except openai.APIError as e:
    print(f"An API error occurred: {e}")
```
## Part 2: Advanced State Management (Native Endpoints)
For applications that require the server to manage conversation state, our native endpoints offer session-based chat. This can be useful for complex chatbots where managing context on the client-side is cumbersome.
### Step 1: Create a Chat Session
First, create a session, which will store the conversation history on the server.
`POST /v1/chats`
```python
import requests

WESEN_API_KEY = "YOUR_API_KEY"
CHAT_API_URL = "https://chat.api.wesen.ai/v1"

headers = {
    "Authorization": f"Bearer {WESEN_API_KEY}",
    "Content-Type": "application/json"
}

session_payload = {
    "title": "History of Axum",
    "systemPrompt": "You are a historian specializing in the Axumite Empire.",
    "model": "wesen-chat-v1"
}

create_response = requests.post(f"{CHAT_API_URL}/chats", headers=headers, json=session_payload)

chat_id = None
if create_response.status_code == 201:
    chat_id = create_response.json().get("id")
    print(f"Chat session created with ID: {chat_id}")
else:
    print(f"Error creating session: {create_response.status_code} - {create_response.text}")
```
### Step 2: Send a Message to the Session

Now send messages using the `/v1/chat/{chatId}` endpoint. The server automatically appends both your message and the assistant's reply to the session's history.
`POST /v1/chat/{chatId}`
```python
if chat_id:
    message_payload = {
        "role": "user",
        "content": "What was the primary export of the Axumite Empire?"
    }
    message_response = requests.post(f"{CHAT_API_URL}/chat/{chat_id}", headers=headers, json=message_payload)

    if message_response.status_code == 201:
        assistant_reply = message_response.json()
        print("Assistant:", assistant_reply.get("content"))
    else:
        print(f"Error sending message: {message_response.status_code} - {message_response.text}")
```
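Because the server holds the context, a follow-up question needs only the new turn, with no history payload. The sketch below wraps that in a `send_message` helper (our name, assuming the same request and response shape as above); the `post` parameter is injected so the HTTP call can be swapped out in tests:

```python
import requests

CHAT_API_URL = "https://chat.api.wesen.ai/v1"

def send_message(chat_id, text, headers, post=requests.post):
    # Only the new user turn is sent; the server supplies the earlier context.
    response = post(
        f"{CHAT_API_URL}/chat/{chat_id}",
        headers=headers,
        json={"role": "user", "content": text},
    )
    if response.status_code == 201:
        return response.json().get("content")
    return None
```

For example, `send_message(chat_id, "And who were its main trading partners?", headers)` sends a second turn in the same session without resending any earlier messages.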
### Step 3: Retrieve Conversation History
You can fetch the entire conversation history at any time.
`GET /v1/chat/{chatId}/history`
```python
if chat_id:
    history_response = requests.get(f"{CHAT_API_URL}/chat/{chat_id}/history", headers=headers)

    if history_response.status_code == 200:
        history = history_response.json()
        print("\n--- Conversation History ---")
        for message in history.get("messages", []):
            print(f"- {message['role'].title()}: {message['content']}")
    else:
        print(f"Error fetching history: {history_response.status_code} - {history_response.text}")
```
This native session management offers a stateful alternative to the standard stateless workflow, catering to different architectural needs.
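The three steps above can be consolidated into a thin client class. This is a sketch under the same assumptions as the examples, not an official SDK; the `WesenChatSession` name and its methods are ours:

```python
import requests

class WesenChatSession:
    """Thin wrapper over the native session endpoints shown above."""

    def __init__(self, api_key, base_url="https://chat.api.wesen.ai/v1"):
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
        self.chat_id = None

    def create(self, title, system_prompt, model="wesen-chat-v1"):
        # Step 1: create the server-side session and remember its ID.
        resp = requests.post(
            f"{self.base_url}/chats",
            headers=self.headers,
            json={"title": title, "systemPrompt": system_prompt, "model": model},
        )
        resp.raise_for_status()
        self.chat_id = resp.json()["id"]
        return self.chat_id

    def send(self, text):
        # Step 2: send one user turn; the server appends it and the reply.
        resp = requests.post(
            f"{self.base_url}/chat/{self.chat_id}",
            headers=self.headers,
            json={"role": "user", "content": text},
        )
        resp.raise_for_status()
        return resp.json().get("content")

    def history(self):
        # Step 3: fetch the full server-side conversation history.
        resp = requests.get(
            f"{self.base_url}/chat/{self.chat_id}/history",
            headers=self.headers,
        )
        resp.raise_for_status()
        return resp.json().get("messages", [])
```

Using `raise_for_status()` instead of per-call status checks keeps the happy path short; callers catch `requests.HTTPError` in one place.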