w.ai CLI Guide

To get started with the CLI, install it as described in the Quick Start guide.

On a desktop environment

If you are on a device with a desktop environment and a browser, start by logging in:

wai login

And then run using

wai run

On a headless environment

  1. Visit the w.ai dashboard.

  2. Login or create a w.ai account.

  3. Select API Keys and create a new key.

You now have a key that you can set on the headless environment using

export W_AI_API_KEY="your-key-here"

And then run normally (in the headless environment) as follows:

wai run

Headless environment key management

You can list generated API keys, revoke any of them, and view further usage information through the CLI's key-management commands.
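The exact key-management commands are not shown in this excerpt. A plausible sketch, assuming a conventional subcommand layout (the `keys` subcommand names below are assumptions; verify them with the CLI's built-in help):

```shell
# List generated API keys (subcommand name is an assumption)
wai keys list

# Revoke a key by its ID (subcommand name is an assumption)
wai keys revoke <key-id>

# Show built-in help for all available commands
wai --help
```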

Specifying GPUs (NVIDIA only)

If you want to run with a subset of your GPUs, add a -g flag to the run command followed by a list of the GPU IDs. For example, to run on GPUs 0 and 1, the command would be:
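Based on the description above, the invocation would look like the following. The `-g` flag is documented; the comma-separated format of the ID list is an assumption:

```shell
# Run on GPUs 0 and 1 only (list separator format is an assumption)
wai run -g 0,1
```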

If no GPU is specified, w.ai will run on GPU 0 by default.

Docker

For Docker/Podman Linux containers, we provide optimized CLI images:

NVIDIA GPUs with CUDA (Recommended)

For optimal performance on NVIDIA GPUs:

NVIDIA drivers must be installed on the host machine to use CUDA; otherwise, w.ai falls back to Vulkan. See nvidia-container-toolkit for more info.
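The actual image name is not given in this excerpt, so the placeholder below is hypothetical; the flags themselves are standard Docker/NVIDIA runtime options. A sketch of a CUDA-enabled run:

```shell
# <wai-cuda-image> is a placeholder -- substitute the real image from the w.ai docs.
# --gpus all requires nvidia-container-toolkit on the host.
docker run --rm --gpus all \
  -e W_AI_API_KEY="your-key-here" \
  -v ~/.wombo:/root/.wombo \
  <wai-cuda-image>
```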

AMD GPUs with Vulkan

For AMD GPUs:
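The AMD image name is likewise not given here, so the placeholder is hypothetical; passing through `/dev/kfd` and `/dev/dri` is the usual way to expose an AMD GPU to Vulkan inside a container. A sketch:

```shell
# <wai-vulkan-image> is a placeholder -- substitute the real image from the w.ai docs.
docker run --rm \
  --device /dev/kfd --device /dev/dri \
  -e W_AI_API_KEY="your-key-here" \
  -v ~/.wombo:/root/.wombo \
  <wai-vulkan-image>
```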

Using environment files

Instead of passing the API key inline, you can use an environment file for better security:

  1. Create a .env file containing your API key.

  2. Pass the .env file to the docker command with the `--env-file` flag.

macOS

Apple Silicon (M-series) Macs do not support GPU passthrough in containers. As such, w.ai on macOS is unsupported through containers and must run directly on the host machine.

Volume mounts

Volume mounting `~/.wombo:/root/.wombo` is highly recommended to avoid redownloading dependencies/models each time the container is restarted.
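The mount above is applied with Docker's standard `-v` flag (the image name below is a placeholder):

```shell
# Persist downloaded dependencies/models across container restarts
docker run --rm -v ~/.wombo:/root/.wombo <wai-image>
```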
