# API Features

Our decentralized network delivers affordable, on-demand AI inference compute through simple **OpenAI-compatible HTTP APIs**.

> **Note:** The API keys shown in the examples are dummy placeholders (`wsk-examplekey`). Obtain production keys at [https://app.w.ai/developers/keys](https://app.w.ai/developers/keys).

***

### Base API URL

```
https://api.w.ai/v1
```

***

### Authorization

Include your API key in the `Authorization` header:

```
Authorization: Bearer wsk-examplekey
```

***

### List Available Models

Retrieve metadata for all available models on the network.

```bash
curl -X GET 'https://api.w.ai/v1/models'
```

**Response includes:**

* Model ID, name, and description
* Input/output modalities (`text`, `image`, `video`)
* Context length (for LLMs)
* Quantization level (`4bit`, `8bit`, `fp16`)
* Supported sampling parameters
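
As a minimal sketch of consuming this endpoint from Python with only the standard library (the `{"data": [...]}` list envelope is assumed from the OpenAI models-list format; the helper names are ours, not part of any SDK):

```python
import json
import urllib.request

BASE_URL = "https://api.w.ai/v1"  # from "Base API URL" above

def fetch_models():
    """GET /v1/models and return the parsed JSON body."""
    with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
        return json.load(resp)

def list_model_ids(payload):
    """Extract model IDs, assuming an OpenAI-style {"data": [...]} envelope."""
    return [m["id"] for m in payload.get("data", [])]

# Live usage: ids = list_model_ids(fetch_models())
```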

***

### Text Chat Completions

Create text-based chat completions with LLMs such as Llama, Mistral, Gemma, and DeepSeek.

#### Endpoint

```
POST /v1/chat/completions
```

#### Basic Example

```bash
curl -X POST 'https://api.w.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "llama-3.2-1b-4bit",
    "messages": [
      { "role": "user", "content": "Hello! Who is davinci?" }
    ],
    "max_tokens": 150,
    "stream": false
  }'
```

#### Request Parameters

| Parameter           | Type          | Required | Description                                                            |
| ------------------- | ------------- | -------- | ---------------------------------------------------------------------- |
| `model`             | string        | ✅        | Model ID (e.g., `llama-3.2-1b-4bit`)                                   |
| `messages`          | array         | ✅        | Array of message objects                                               |
| `max_tokens`        | integer       |          | Maximum tokens to generate (min: 1)                                    |
| `temperature`       | number        |          | Sampling temperature (0-2, default: 1.0)                               |
| `top_p`             | number        |          | Nucleus sampling (0-1)                                                 |
| `frequency_penalty` | number        |          | Frequency penalty (-2 to 2)                                            |
| `presence_penalty`  | number        |          | Presence penalty (-2 to 2)                                             |
| `stream`            | boolean       |          | Enable streaming responses                                             |
| `response_format`   | object        |          | Output format (e.g., `{"type": "json_object"}`)                        |
| `tools`             | array         |          | Function definitions for tool calling                                  |
| `tool_choice`       | string/object |          | Tool selection mode (`none`, `auto`, `required`, or specific function) |

#### Message Roles

* `system` — System instructions
* `user` — User messages
* `assistant` — Assistant responses
* `tool` — Tool/function call results
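
The roles above compose into the `messages` array of a request body. A small request builder (a sketch of our own, not an SDK function) makes the shape concrete:

```python
def build_chat_request(model, user_text, system=None, **params):
    """Assemble a /v1/chat/completions request body.

    A `system` message, if given, comes first; extra keyword
    arguments (max_tokens, temperature, ...) pass through as-is.
    """
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user_text})
    return {"model": model, "messages": messages, **params}
```

For multi-turn conversations, append prior `assistant` (and `tool`) messages to the array before the new `user` message.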

***

### Vision-Language Chat (VLM)

Send images along with text prompts for multimodal understanding.

#### Example with Image URL

```bash
curl -X POST 'https://api.w.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "gemma-3-27b-4bit",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "image_url", "image_url": { "url": "http://images.cocodataset.org/val2017/000000039769.jpg" } },
          { "type": "text", "text": "Describe the contents of this image." }
        ]
      }
    ],
    "max_tokens": 150,
    "stream": false
  }'
```

#### Image Content Object

| Field              | Type   | Description                                     |
| ------------------ | ------ | ----------------------------------------------- |
| `type`             | string | Must be `image_url`                             |
| `image_url.url`    | string | URL or base64-encoded image                     |
| `image_url.detail` | string | Resolution: `low`, `high`, or `auto` (optional) |
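
A helper for building the image content part, sketched under the assumption that local files can be inlined as base64 `data:` URLs (verify against your model's docs; the helper name is hypothetical):

```python
import base64

def image_part(source, detail="auto"):
    """Build an image_url content part for a multimodal message.

    HTTP(S) URLs pass through unchanged; local file paths are inlined
    as base64 data URLs (data-URL support is an assumption here).
    """
    if source.startswith(("http://", "https://")):
        url = source
    else:
        with open(source, "rb") as f:
            url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()
    return {"type": "image_url", "image_url": {"url": url, "detail": detail}}
```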

***

### Tool Calling (Function Calling)

Let the model request calls to external functions (tools); your application executes each call and returns the result in a follow-up message.

#### Example with Tools

```bash
curl -X POST 'https://api.w.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "llama-3.3-70b-4bit",
    "messages": [
      { "role": "user", "content": "What is the weather like in San Francisco?" }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get current weather for a location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": { "type": "string", "description": "City name" },
              "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
            },
            "required": ["location"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'
```

#### Tool Choice Options

| Value                                               | Description                      |
| --------------------------------------------------- | -------------------------------- |
| `none`                                              | Disable tool calling             |
| `auto`                                              | Model decides when to call tools |
| `required`                                          | Force the model to call a tool   |
| `{"type": "function", "function": {"name": "..."}}` | Call a specific function         |

#### Handling Tool Call Responses

When the model responds with `tool_calls`, execute the function yourself, then send a follow-up request that appends the assistant message and a `tool` message containing the result:

```json
{
  "model": "llama-3.3-70b-4bit",
  "messages": [
    { "role": "user", "content": "What is the weather like in San Francisco?" },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": { "name": "get_weather", "arguments": "{\"location\": \"San Francisco\"}" }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": "{\"temperature\": 68, \"condition\": \"sunny\"}"
    }
  ]
}
```
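
The round trip above can be sketched as a small dispatch loop: parse each tool call's JSON arguments, run the matching local function, and build the `tool` messages to append (the helper and registry are our own illustration, not part of any SDK):

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute each requested function locally and build the `tool`
    messages to append to the conversation.

    `registry` maps function names to local callables; arguments
    arrive as a JSON string and results are sent back as JSON.
    """
    results = []
    for call in tool_calls:
        fn = registry[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return results
```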

***

### Image Generation

Generate images from text prompts using models like FLUX and SDXL.

#### Endpoint

```
POST /v1/images/generations
```

#### Example

```bash
curl -X POST 'https://api.w.ai/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "flux-1-dev",
    "prompt": "A green photorealistic hand in the matrix holding a sign that says W.ai, everything should be binary 1s and 0s",
    "size": "1024x1024"
  }'
```

#### Request Parameters

| Parameter         | Type    | Required | Default       | Description                                      |
| ----------------- | ------- | -------- | ------------- | ------------------------------------------------ |
| `model`           | string  | ✅        |               | Model ID (e.g., `flux-1-dev`, `sdxl`)            |
| `prompt`          | string  | ✅        |               | Text description of the image                    |
| `size`            | string  |          | `1024x1024`   | Output dimensions (e.g., `512x512`, `1024x1024`) |
| `quality`         | string  |          |               | Quality level: `low`, `medium`, `high`, `hd`     |
| `seed`            | integer |          | Random        | Seed for reproducible generation                 |
| `steps`           | integer |          | Model default | Denoising steps (1-100)                          |
| `guidance_scale`  | number  |          | Model default | Prompt adherence (1-20)                          |
| `negative_prompt` | string  |          |               | What to exclude from the image                   |
| `stream`          | boolean |          | `false`       | Enable streaming for progress updates            |
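
As a stdlib sketch (the helper names are ours; `generate_image` performs the live HTTP call and is shown for shape only):

```python
import json
import urllib.request

def build_generation_payload(prompt, model="flux-1-dev", size="1024x1024", **extra):
    """Compose a /v1/images/generations body; optional parameters
    (seed, steps, guidance_scale, ...) pass through as keyword args."""
    return {"model": model, "prompt": prompt, "size": size, **extra}

def generate_image(payload, api_key="wsk-examplekey"):
    """POST the payload and return the parsed JSON response (live call)."""
    req = urllib.request.Request(
        "https://api.w.ai/v1/images/generations",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```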

***

### Image Editing

Edit existing images using text prompts with FLUX Kontext models.

#### Endpoint

```
POST /v1/images/edits
```

#### Example

```bash
curl -X POST 'https://api.w.ai/v1/images/edits' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --header 'Content-Type: multipart/form-data' \
  --form 'model=flux-1-kontext-dev' \
  --form 'prompt=Convert to pencil sketch with natural graphite lines, cross-hatching, and visible paper texture' \
  --form 'image=@/path/to/your/image.jpg' \
  --form 'guidance_scale=2.5'
```

#### Request Parameters (multipart/form-data)

| Parameter         | Type    | Required | Default       | Description                                      |
| ----------------- | ------- | -------- | ------------- | ------------------------------------------------ |
| `model`           | string  | ✅        |               | Model ID (e.g., `flux-1-kontext-dev`)            |
| `prompt`          | string  | ✅        |               | Edit instructions                                |
| `image`           | file    | ✅        |               | Source image file(s). Multiple images supported. |
| `size`            | string  |          | `1024x1024`   | Output dimensions                                |
| `negative_prompt` | string  |          |               | What to avoid in the edit                        |
| `seed`            | integer |          | Random        | Seed for reproducible results                    |
| `steps`           | integer |          | Model default | Denoising steps                                  |
| `guidance_scale`  | number  |          |               | Prompt adherence strength                        |
| `quality`         | string  |          |               | Quality level                                    |
| `stream`          | boolean |          | `false`       | Enable streaming                                 |

***

### Object Detection & Segmentation

Run object detection (YOLO11n) or image/video segmentation (SAM2) on images and videos.

#### Endpoint

```
POST /v1/predictions
```

#### YOLO11n Object Detection Example

Detect objects in an image with bounding boxes and class labels:

```bash
curl -X POST 'https://api.w.ai/v1/predictions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "yolo11n",
    "input": {
      "image": "https://images.cocodataset.org/val2017/000000039769.jpg",
      "conf": 0.25,
      "iou": 0.45,
      "imgsz": 640,
      "return_json": true
    }
  }'
```

#### SAM2 Segmentation Example

Segment objects using point prompts:

```bash
curl -X POST 'https://api.w.ai/v1/predictions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "sam2",
    "input": {
      "image": "https://images.cocodataset.org/val2017/000000039769.jpg",
      "points": [
        { "x": 500, "y": 375, "label": 1 }
      ],
      "mask_type": "highlighted",
      "return_json": false
    }
  }'
```

#### SAM2 Video Segmentation Example

Track and segment objects across video frames:

```bash
curl -X POST 'https://api.w.ai/v1/predictions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "sam2",
    "input": {
      "video": "https://example.com/video.mp4",
      "points": [
        { "x": 200, "y": 150, "label": 1 }
      ],
      "video_fps": 25,
      "output_frame_interval": 1,
      "annotation_type": "mask"
    }
  }'
```

#### Request Parameters

| Parameter     | Type   | Required | Default | Description                                            |
| ------------- | ------ | -------- | ------- | ------------------------------------------------------ |
| `model`       | string | ✅        |         | Model ID (`yolo11n` or `sam2`)                         |
| `input.image` | string | ✅\*      |         | Image URL or base64. \*Either image or video required. |
| `input.video` | string | ✅\*      |         | Video URL or base64. \*Either image or video required. |

**YOLO11n Parameters:**

| Parameter           | Type    | Default | Description                               |
| ------------------- | ------- | ------- | ----------------------------------------- |
| `input.conf`        | number  | `0.25`  | Confidence threshold (0-1)                |
| `input.iou`         | number  | `0.45`  | IOU threshold for NMS (0-1)               |
| `input.imgsz`       | integer | `640`   | Input image size (320-1280)               |
| `input.return_json` | boolean | `true`  | Return JSON detections or annotated image |

**SAM2 Parameters:**

| Parameter                      | Type    | Default       | Description                                                             |
| ------------------------------ | ------- | ------------- | ----------------------------------------------------------------------- |
| `input.points`                 | array   |               | Point prompts: `[{x, y, label}]` where label 1=foreground, 0=background |
| `input.boxes`                  | array   |               | Box prompts: `[{x1, y1, x2, y2}]`                                       |
| `input.mask_type`              | string  | `highlighted` | Mask visualization style                                                |
| `input.annotation_type`        | string  | `mask`        | Output type: `mask`, `contour`, etc.                                    |
| `input.points_per_side`        | integer | `32`          | Auto-mask grid density (8-128)                                          |
| `input.pred_iou_thresh`        | number  | `0.88`        | Predicted IOU threshold (0-1)                                           |
| `input.stability_score_thresh` | number  | `0.95`        | Mask stability threshold (0-1)                                          |
| `input.use_m2m`                | boolean | `true`        | Enable mask-to-mask refinement                                          |
| `input.multiview`              | boolean | `false`       | Multi-view consistency                                                  |

**Video-specific Parameters:**

| Parameter                     | Type    | Default | Description                    |
| ----------------------------- | ------- | ------- | ------------------------------ |
| `input.video_fps`             | integer | `25`    | Output video frame rate (1-60) |
| `input.output_frame_interval` | integer | `1`     | Process every Nth frame (1-10) |
| `input.output_format`         | string  | `webp`  | Output format for frames       |
| `input.output_quality`        | integer | `80`    | Output quality (1-100)         |
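
A request builder for the SAM2 point-prompt case, sketched from the parameters above (the helper name and tuple convention are ours):

```python
def sam2_point_request(image_url, points, mask_type="highlighted", return_json=False):
    """Build a /v1/predictions body for SAM2 point prompts.

    `points` is a list of (x, y, label) tuples, where label 1 marks
    foreground and 0 marks background, per the parameter table above.
    """
    return {
        "model": "sam2",
        "input": {
            "image": image_url,
            "points": [{"x": x, "y": y, "label": label} for x, y, label in points],
            "mask_type": mask_type,
            "return_json": return_json,
        },
    }
```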

***

### Video Generation (Audio)

Generate audio for video clips using video-to-audio models.

#### Endpoint

```
POST /v1/videos/generations
```

#### Example

```bash
curl -X POST 'https://api.w.ai/v1/videos/generations' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --header 'Content-Type: multipart/form-data' \
  --form 'model=mmaudio' \
  --form 'video=@/path/to/video.mp4' \
  --form 'prompt=upbeat background music with gentle piano'
```

#### Request Parameters (multipart/form-data)

| Parameter         | Type    | Required | Description              |
| ----------------- | ------- | -------- | ------------------------ |
| `model`           | string  | ✅        | Model ID                 |
| `video`           | file    | ✅        | Source video file        |
| `prompt`          | string  |          | Audio description        |
| `negative_prompt` | string  |          | What to avoid            |
| `seed`            | integer |          | Seed for reproducibility |
| `duration`        | number  |          | Target duration          |
| `num_steps`       | integer |          | Generation steps         |
| `cfg_strength`    | number  |          | Guidance strength        |
| `stream`          | boolean |          | Enable streaming         |

***

### Responses API (Items-Based)

An alternative API based on the [Open Responses](https://www.openresponses.org/) specification, which represents inputs and outputs as structured Items.

#### Endpoint

```
POST /v1/responses
```

#### Example

```bash
curl -X POST 'https://api.w.ai/v1/responses' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "llama-3.2-1b-4bit",
    "input": [
      {
        "type": "message",
        "role": "user",
        "content": "Explain quantum computing in simple terms."
      }
    ],
    "max_output_tokens": 500
  }'
```

#### Request Parameters

| Parameter             | Type          | Required | Description                                         |
| --------------------- | ------------- | -------- | --------------------------------------------------- |
| `model`               | string        | ✅        | Model ID                                            |
| `input`               | array         | ✅        | Array of Item objects                               |
| `instructions`        | string        |          | System-level instructions                           |
| `tools`               | array         |          | Function tool definitions                           |
| `tool_choice`         | string/object |          | Tool selection mode                                 |
| `stream`              | boolean       |          | Enable streaming                                    |
| `temperature`         | number        |          | Sampling temperature (0-2)                          |
| `max_output_tokens`   | integer       |          | Maximum output tokens                               |
| `top_p`               | number        |          | Nucleus sampling (0-1)                              |
| `frequency_penalty`   | number        |          | Frequency penalty (-2 to 2)                         |
| `presence_penalty`    | number        |          | Presence penalty (-2 to 2)                          |
| `parallel_tool_calls` | boolean       |          | Allow parallel tool execution                       |
| `text.format.type`    | string        |          | Output format: `text`, `json_object`, `json_schema` |

#### Item Types

* **User Message**: `{ "type": "message", "role": "user", "content": "..." }`
* **System Message**: `{ "type": "message", "role": "system", "content": "..." }`
* **Developer Message**: `{ "type": "message", "role": "developer", "content": "..." }`
* **Assistant Message**: `{ "type": "message", "role": "assistant", "content": "..." }`
* **Function Call**: `{ "type": "function_call", "call_id": "...", "name": "...", "arguments": "..." }`
* **Function Output**: `{ "type": "function_call_output", "call_id": "...", "output": "..." }`
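
If you already have chat-completions-style messages, plain message roles map directly onto Items; a minimal converter (our own sketch, covering only the message case) looks like this:

```python
def chat_to_items(messages):
    """Convert chat-completions messages into Responses API input Items.

    Only plain message roles are handled here; tool-calling turns need
    the function_call / function_call_output Item types listed above.
    """
    return [
        {"type": "message", "role": m["role"], "content": m["content"]}
        for m in messages
    ]
```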

#### Content Types

For multimodal inputs, use content arrays:

```json
{
  "type": "message",
  "role": "user",
  "content": [
    { "type": "input_text", "text": "What's in this image?" },
    { "type": "input_image", "image_url": "https://example.com/image.jpg" },
    { "type": "input_file", "file_data": "base64...", "filename": "doc.pdf" }
  ]
}
```

#### Structured Output (JSON Schema)

```json
{
  "model": "llama-3.3-70b-4bit",
  "input": [{ "type": "message", "role": "user", "content": "List 3 colors" }],
  "text": {
    "format": {
      "type": "json_schema",
      "json_schema": {
        "name": "color_list",
        "schema": {
          "type": "object",
          "properties": {
            "colors": { "type": "array", "items": { "type": "string" } }
          }
        },
        "strict": true
      }
    }
  }
}
```

***

### Streaming Responses

Enable real-time streaming by setting `stream: true`. Responses are sent as Server-Sent Events (SSE).

#### Example

```bash
curl -X POST 'https://api.w.ai/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer wsk-examplekey' \
  --data '{
    "model": "llama-3.2-1b-4bit",
    "messages": [{ "role": "user", "content": "Write a haiku about AI" }],
    "stream": true
  }'
```

#### Response Format

```
data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"token"},"index":0}]}

data: [DONE]
```
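
A minimal parser for these chunk lines (a sketch of our own, not part of any SDK; blank keep-alive lines and the `[DONE]` sentinel yield `None`):

```python
import json

def delta_from_sse_line(line):
    """Return the content delta carried by one SSE `data:` line,
    or None for blank lines, non-data lines, and the [DONE] sentinel."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0].get("delta", {}).get("content")
```

Accumulate the non-`None` return values to reconstruct the full completion text.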

***

### Error Handling

API errors follow the OpenAI error format:

```json
{
  "error": {
    "message": "Error description",
    "type": "error_type",
    "code": "error_code"
  }
}
```

#### Common Error Codes

| Code  | Description                     |
| ----- | ------------------------------- |
| `401` | Invalid or missing API key      |
| `400` | Invalid request parameters      |
| `429` | Rate limit exceeded             |
| `503` | Service temporarily unavailable |
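
A common client-side policy, sketched from the table above (the retry thresholds and backoff cap are our own assumptions, not API requirements): retry `429` and `503` with exponential backoff, and treat `400`/`401` as final.

```python
RETRYABLE = {429, 503}  # transient codes from the table above

def should_retry(status, attempt, max_attempts=3):
    """Only rate limits and temporary outages are worth retrying;
    client errors like 400 and 401 will not succeed on a resend."""
    return status in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt):
    """Exponential backoff (1, 2, 4, ... seconds), capped at 30 s."""
    return min(2 ** attempt, 30)
```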

***

### Rate Limits

Rate limits vary by endpoint and account tier. Contact <support@w.ai> for higher limits.

***

### SDK Compatibility

The W\.ai API is **OpenAI SDK compatible**. Use your preferred OpenAI client library:

#### Python

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.w.ai/v1",
    api_key="wsk-examplekey"
)

response = client.chat.completions.create(
    model="llama-3.2-1b-4bit",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

#### JavaScript/TypeScript

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.w.ai/v1',
  apiKey: 'wsk-examplekey'
});

const response = await client.chat.completions.create({
  model: 'llama-3.2-1b-4bit',
  messages: [{ role: 'user', content: 'Hello!' }]
});
```
