w.ai CLI Guide
To get started with the CLI, install it as described in Quick Start.
On a desktop environment
If you are on a device with a desktop environment and a browser, start by logging in:
wai login
And then run using
wai run
On a headless environment
Visit the w.ai dashboard.
Log in or create a w.ai account.
Select API Keys and create a new key.
You now have a key that you can set on the headless environment using
export W_AI_API_KEY=your_key_here
And then run normally (in the headless environment) as follows:
wai run
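If you want the key to survive new shell sessions, you can persist the export in your shell profile instead of typing it each time. A minimal sketch, assuming a bash shell (adjust the file for zsh or other shells):
# Persist the API key, load it into the current shell, then run
echo 'export W_AI_API_KEY=your_key_here' >> ~/.bashrc
source ~/.bashrc
wai run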
Headless environment key management
You can list generated API keys using
wai key list
And revoke any using
wai revoke <token>
For more info, run
wai key help
Specifying GPUs (NVIDIA only)
If you want to run with a subset of your GPUs, add a -g flag to the run command followed by a list of the GPU IDs. For example, to run on GPUs 0 and 1, the command would be:
wai run -g 0 1
If no GPU IDs are provided, the CLI runs on GPU 0.
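If you are not sure which IDs correspond to which cards, nvidia-smi (installed with the NVIDIA driver) can list them:
# Prints one line per GPU with its index, model name, and UUID
nvidia-smi -L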
Docker
For Docker/Podman containers, we provide optimized CLI images for different GPU types:
NVIDIA GPUs with CUDA (Recommended)
For optimal performance on NVIDIA GPUs:
docker run --gpus all \
-v ~/.wombo:/root/.wombo \
-e W_AI_API_KEY=your_key_here \
wdotai/wai:latest-cuda run
The container includes the CUDA 12.4 toolkit. However, NVIDIA drivers must still be installed on the host machine. See nvidia-container-toolkit for more info.
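Before pulling the wai image, it can help to confirm that the host driver and nvidia-container-toolkit are wired up correctly. A quick sanity check using any CUDA base image (the tag below is just an example):
# If this prints the GPU table, --gpus all will also work for the wai image
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi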
NVIDIA GPUs with Vulkan (Fallback)
If CUDA isn't available or you prefer Vulkan on NVIDIA:
docker run --gpus all \
-v ~/.wombo:/root/.wombo \
-e W_AI_API_KEY=your_key_here \
wdotai/wai:latest run
AMD GPUs with Vulkan
For AMD GPUs:
docker run --device=/dev/dri:/dev/dri \
-v ~/.wombo:/root/.wombo \
-e W_AI_API_KEY=your_key_here \
wdotai/wai:latest run
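It is also worth checking that the DRI render nodes exist on the host and that your user can access them (usually via the video or render group); this is an assumption about a typical Linux setup, not a wai-specific requirement:
# Vulkan talks to the render nodes (renderD128, renderD129, ...)
ls -l /dev/dri
# Your user normally needs to be in the "video" and/or "render" group
groups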
Using environment files
Instead of passing the API key inline, you can use an environment file for better security:
Create a .env file:
echo "W_AI_API_KEY=your_key_here" > .env
Then reference the .env file in the docker command instead of passing the key inline:
- -e W_AI_API_KEY=your_key_here
+ --env-file .env
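With that change, the CUDA example above becomes:
docker run --gpus all \
-v ~/.wombo:/root/.wombo \
--env-file .env \
wdotai/wai:latest-cuda run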
Volume mounts
Mounting ~/.wombo:/root/.wombo as a volume is highly recommended: it avoids re-downloading dependencies and models every time the container restarts.
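If you prefer not to expose a host directory, a Docker named volume works as well. A sketch; the volume name wai-cache is just an example:
# Create a named volume once, then reuse it as the cache directory
docker volume create wai-cache
docker run --gpus all \
-v wai-cache:/root/.wombo \
--env-file .env \
wdotai/wai:latest-cuda run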