API Features

Our decentralized network delivers affordable, on-demand AI inference compute through simple OpenAI-compatible HTTP APIs.

[NOTE] The keys shown in the examples are dummy placeholders (wsk-examplekey). To obtain production keys, visit https://app.w.ai/developers/keys


Base API URL

https://api.w.ai/v1

Authorization

Include your API key in the Authorization header:

Authorization: Bearer wsk-examplekey

List Available Models

Retrieve metadata for all available models on the network.

curl 'https://api.w.ai/v1/models' \
  -H 'Authorization: Bearer wsk-examplekey'

Response includes:

  • Model ID, name, and description

  • Input/output modalities (text, image, video)

  • Context length (for LLMs)

  • Quantization level (4bit, 8bit, fp16)

  • Supported sampling parameters


Text Chat Completions

Create text-based chat completions using LLMs such as Llama, Mistral, Gemma, and DeepSeek.

Endpoint

POST /v1/chat/completions

Basic Example
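
Since the API is OpenAI-compatible, a basic request can be sketched as a JSON body POSTed to /v1/chat/completions with the Authorization header shown above. A minimal sketch of the request body in Python (the model ID comes from the parameter table below):

```python
import json

# Request body for POST https://api.w.ai/v1/chat/completions
# (send with any HTTP client, plus the Authorization: Bearer header).
payload = {
    "model": "llama-3.2-1b-4bit",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
    "max_tokens": 64,
    "temperature": 0.7,
}
body = json.dumps(payload)
```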

Request Parameters

  Parameter          Type           Description
  model              string         Model ID (e.g., llama-3.2-1b-4bit)
  messages           array          Array of message objects
  max_tokens         integer        Maximum tokens to generate (min: 1)
  temperature        number         Sampling temperature (0-2, default: 1.0)
  top_p              number         Nucleus sampling (0-1)
  frequency_penalty  number         Frequency penalty (-2 to 2)
  presence_penalty   number         Presence penalty (-2 to 2)
  stream             boolean        Enable streaming responses
  response_format    object         Output format (e.g., {"type": "json_object"})
  tools              array          Function definitions for tool calling
  tool_choice        string/object  Tool selection mode (none, auto, required, or a specific function)

Message Roles

  • system — System instructions

  • user — User messages

  • assistant — Assistant responses

  • tool — Tool/function call results


Vision-Language Chat (VLM)

Send images along with text prompts for multimodal understanding.

Example with Image URL
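
A sketch of a multimodal chat request using the image content object described below; the model ID here is a placeholder, so pick an actual VLM from the /v1/models listing:

```python
# Chat request mixing text and an image; content becomes an array of parts.
payload = {
    "model": "example-vlm-id",  # placeholder; choose a vision model from /v1/models
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/cat.jpg",
                        "detail": "auto",  # optional: low, high, or auto
                    },
                },
            ],
        }
    ],
}
```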

Image Content Object

  Field             Type    Description
  type              string  Must be image_url
  image_url.url     string  URL or base64-encoded image
  image_url.detail  string  Resolution: low, high, or auto (optional)


Tool Calling (Function Calling)

Enable models to call external functions/tools.

Example with Tools
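
A sketch of a chat request that declares a tool; the get_weather function and its schema are hypothetical examples, following the OpenAI function-calling format:

```python
# Chat request declaring one callable function; the model may respond with
# a tool call instead of plain text when tool_choice is "auto".
payload = {
    "model": "llama-3.2-1b-4bit",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}
```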

Tool Choice Options

  Value                                              Description
  none                                               Disable tool calling
  auto                                               Model decides when to call tools
  required                                           Force the model to call a tool
  {"type": "function", "function": {"name": "..."}}  Call a specific function

Handling Tool Call Responses

When the model calls a tool, respond with the result:
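
A sketch of the follow-up messages array, assuming the OpenAI tool-call shape: echo the assistant's tool call, then append a tool message whose tool_call_id matches the call id (the id and weather payload here are illustrative):

```python
# Conversation state after executing a tool call locally.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        # The assistant's tool call from the previous response.
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # illustrative call id
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
            }
        ],
    },
    {
        # Your tool result, keyed to the call id; send the whole array back.
        "role": "tool",
        "tool_call_id": "call_abc123",
        "content": '{"temp_c": 18, "conditions": "cloudy"}',
    },
]
```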


Image Generation

Generate images from text prompts using models like FLUX and SDXL.

Endpoint

POST /v1/images/generations

Example
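
A minimal request-body sketch, POSTed to /v1/images/generations (path assumed from the API's OpenAI compatibility; parameter values come from the table below):

```python
# Image generation request body.
payload = {
    "model": "flux-1-dev",
    "prompt": "A lighthouse at dusk, oil painting style",
    "size": "1024x1024",
    "steps": 28,        # denoising steps (1-100)
    "seed": 42,         # fixed seed for reproducible output
}
```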

Request Parameters

  Parameter        Type     Default        Description
  model            string   —              Model ID (e.g., flux-1-dev, sdxl)
  prompt           string   —              Text description of the image
  size             string   1024x1024      Output dimensions (e.g., 512x512, 1024x1024)
  quality          string   —              Quality level: low, medium, high, hd
  seed             integer  Random         Seed for reproducible generation
  steps            integer  Model default  Denoising steps (1-100)
  guidance_scale   number   Model default  Prompt adherence (1-20)
  negative_prompt  string   —              What to exclude from the image
  stream           boolean  false          Enable streaming for progress updates


Image Editing

Edit existing images using text prompts with FLUX Kontext models.

Endpoint

POST /v1/images/edits

Example
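
Because this endpoint takes multipart/form-data, the request is a set of form fields plus a file part rather than a JSON body. A sketch of the fields (send them with any multipart-capable HTTP client, e.g. curl -F):

```python
# Form fields for a multipart/form-data edit request.
fields = {
    "model": "flux-1-kontext-dev",
    "prompt": "Replace the sky with a starry night",
    "size": "1024x1024",
    "seed": "42",
}
# The source image is attached as a file part named "image";
# the table below notes that multiple image parts are supported.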

Request Parameters (multipart/form-data)

  Parameter        Type     Default        Description
  model            string   —              Model ID (e.g., flux-1-kontext-dev)
  prompt           string   —              Edit instructions
  image            file     —              Source image file(s); multiple images supported
  size             string   1024x1024      Output dimensions
  negative_prompt  string   —              What to avoid in the edit
  seed             integer  Random         Seed for reproducible results
  steps            integer  Model default  Denoising steps
  guidance_scale   number   —              Prompt adherence strength
  quality          string   —              Quality level
  stream           boolean  false          Enable streaming


Object Detection & Segmentation

Run object detection (YOLO11n) or image/video segmentation (SAM2) on images and videos.

Endpoint

YOLO11n Object Detection Example

Detect objects in an image with bounding boxes and class labels:
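
A request-body sketch built from the detection parameter tables below (the image URL is illustrative):

```python
# YOLO11n detection request; tuning knobs live under the "input" object.
payload = {
    "model": "yolo11n",
    "input": {
        "image": "https://example.com/street.jpg",
        "conf": 0.25,         # confidence threshold
        "iou": 0.45,          # IOU threshold for NMS
        "imgsz": 640,         # input image size
        "return_json": True,  # JSON detections instead of an annotated image
    },
}
```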

SAM2 Segmentation Example

Segment objects using point prompts:
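
A sketch of a SAM2 request with a single foreground point prompt, using the parameter names from the SAM2 table below (image URL and coordinates are illustrative):

```python
# SAM2 point-prompt segmentation request.
payload = {
    "model": "sam2",
    "input": {
        "image": "https://example.com/dog.jpg",
        "points": [{"x": 320, "y": 240, "label": 1}],  # label 1 = foreground
        "mask_type": "highlighted",
        "annotation_type": "mask",
    },
}
```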

SAM2 Video Segmentation Example

Track and segment objects across video frames:
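
A sketch combining a point prompt with the video-specific parameters listed below (video URL and coordinates are illustrative):

```python
# SAM2 video segmentation request; the object under the point prompt
# is tracked across frames.
payload = {
    "model": "sam2",
    "input": {
        "video": "https://example.com/clip.mp4",
        "points": [{"x": 320, "y": 240, "label": 1}],  # label 1 = foreground
        "video_fps": 25,               # output frame rate
        "output_frame_interval": 1,    # process every frame
        "output_format": "webp",
        "output_quality": 80,
    },
}
```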

Request Parameters

  Parameter    Type    Required  Description
  model        string  ✅        Model ID (yolo11n or sam2)
  input.image  string  ✅*       Image URL or base64. *Either image or video required.
  input.video  string  ✅*       Video URL or base64. *Either image or video required.

YOLO11n Parameters:

  Parameter          Type     Default  Description
  input.conf         number   0.25     Confidence threshold (0-1)
  input.iou          number   0.45     IOU threshold for NMS (0-1)
  input.imgsz        integer  640      Input image size (320-1280)
  input.return_json  boolean  true     Return JSON detections or annotated image

SAM2 Parameters:

  Parameter                     Type     Default      Description
  input.points                  array    —            Point prompts: [{x, y, label}] where label 1=foreground, 0=background
  input.boxes                   array    —            Box prompts: [{x1, y1, x2, y2}]
  input.mask_type               string   highlighted  Mask visualization style
  input.annotation_type         string   mask         Output type: mask, contour, etc.
  input.points_per_side         integer  32           Auto-mask grid density (8-128)
  input.pred_iou_thresh         number   0.88         Predicted IOU threshold (0-1)
  input.stability_score_thresh  number   0.95         Mask stability threshold (0-1)
  input.use_m2m                 boolean  true         Enable mask-to-mask refinement
  input.multiview               boolean  false        Multi-view consistency

Video-specific Parameters:

  Parameter                    Type     Default  Description
  input.video_fps              integer  25       Output video frame rate (1-60)
  input.output_frame_interval  integer  1        Process every Nth frame (1-10)
  input.output_format          string   webp     Output format for frames
  input.output_quality         integer  80       Output quality (1-100)


Video Generation (Audio)

Generate audio for video clips using video-to-audio models.

Endpoint

Example
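
This endpoint also takes multipart/form-data, so the request is form fields plus a file part. A sketch of the fields (the model ID is a placeholder, so pick a video-to-audio model from /v1/models):

```python
# Form fields for a multipart/form-data video-to-audio request.
fields = {
    "model": "example-video-to-audio-model",  # placeholder model ID
    "prompt": "Gentle rain with distant thunder",
    "duration": "8",   # target duration in seconds
    "seed": "42",
}
# The source clip is attached as a file part named "video".
```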

Request Parameters (multipart/form-data)

  Parameter        Type     Description
  model            string   Model ID
  video            file     Source video file
  prompt           string   Audio description
  negative_prompt  string   What to avoid
  seed             integer  Seed for reproducibility
  duration         number   Target duration
  num_steps        integer  Generation steps
  cfg_strength     number   Guidance strength
  stream           boolean  Enable streaming


Responses API (Items-Based)

Alternative API format based on the Open Responses specification with structured Items.

Endpoint

POST /v1/responses

Example
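
A minimal request-body sketch, POSTed to /v1/responses (path assumed from the Open Responses specification); the input array holds Item objects as described under Item Types below:

```python
# Responses API request body using a single user-message Item.
payload = {
    "model": "llama-3.2-1b-4bit",
    "instructions": "You are a helpful assistant.",
    "input": [
        {"type": "message", "role": "user", "content": "Hello!"},
    ],
    "max_output_tokens": 128,
}
```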

Request Parameters

  Parameter            Type           Description
  model                string         Model ID
  input                array          Array of Item objects
  instructions         string         System-level instructions
  tools                array          Function tool definitions
  tool_choice          string/object  Tool selection mode
  stream               boolean        Enable streaming
  temperature          number         Sampling temperature (0-2)
  max_output_tokens    integer        Maximum output tokens
  top_p                number         Nucleus sampling (0-1)
  frequency_penalty    number         Frequency penalty (-2 to 2)
  presence_penalty     number         Presence penalty (-2 to 2)
  parallel_tool_calls  boolean        Allow parallel tool execution
  text.format.type     string         Output format: text, json_object, json_schema

Item Types

  • User Message: { "type": "message", "role": "user", "content": "..." }

  • System Message: { "type": "message", "role": "system", "content": "..." }

  • Developer Message: { "type": "message", "role": "developer", "content": "..." }

  • Assistant Message: { "type": "message", "role": "assistant", "content": "..." }

  • Function Call: { "type": "function_call", "call_id": "...", "name": "...", "arguments": "..." }

  • Function Output: { "type": "function_call_output", "call_id": "...", "output": "..." }

Content Types

For multimodal inputs, use content arrays:
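
A sketch of a message Item whose content is an array of parts; the part type names input_text and input_image follow the Open Responses convention and are an assumption here:

```python
# Multimodal message Item: content as an array of typed parts.
item = {
    "type": "message",
    "role": "user",
    "content": [
        {"type": "input_text", "text": "Describe this image."},     # assumed part type
        {"type": "input_image", "image_url": "https://example.com/photo.jpg"},  # assumed part type
    ],
}
```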

Structured Output (JSON Schema)
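
The parameter table above confirms text.format.type; the name and schema fields in this sketch follow the Open Responses convention and are assumptions:

```python
# Responses API request constrained to a JSON schema.
payload = {
    "model": "llama-3.2-1b-4bit",
    "input": [{"type": "message", "role": "user", "content": "Give me a user profile."}],
    "text": {
        "format": {
            "type": "json_schema",
            "name": "user_profile",  # assumed field name
            "schema": {              # assumed field name
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
            },
        }
    },
}
```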


Streaming Responses

Enable real-time streaming by setting stream: true. Responses are sent as Server-Sent Events (SSE).

Example
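
With stream: true set on the request, each SSE line arrives as `data: {json}` and the stream ends with `data: [DONE]`, following the OpenAI convention. A minimal sketch of a client-side line parser, run here against sample lines:

```python
import json

# Request body with streaming enabled.
payload = {
    "model": "llama-3.2-1b-4bit",
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": True,
}

def parse_sse(lines):
    """Collect JSON chunks from SSE lines, stopping at the [DONE] sentinel."""
    chunks = []
    for line in lines:
        if line.startswith("data: "):
            data = line[len("data: "):]
            if data == "[DONE]":
                break
            chunks.append(json.loads(data))
    return chunks

# Illustrative sample of what a streamed chat completion looks like.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(c["choices"][0]["delta"]["content"] for c in parse_sse(sample))
```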

Response Format


Error Handling

API errors follow the OpenAI error format:
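
A representative error body in the OpenAI format (the field values here are illustrative):

```python
# Shape of an OpenAI-style error response.
error_body = {
    "error": {
        "message": "Invalid API key provided",  # human-readable description
        "type": "invalid_request_error",
        "param": None,                          # offending parameter, if any
        "code": "invalid_api_key",
    }
}
```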

Common Error Codes

  Code  Description
  401   Invalid or missing API key
  400   Invalid request parameters
  429   Rate limit exceeded
  503   Service temporarily unavailable


Rate Limits

Rate limits vary by endpoint and account tier. Contact [email protected] for higher limits.


SDK Compatibility

The W.ai API is OpenAI SDK compatible. Use your preferred OpenAI client library:

Python

JavaScript/TypeScript
