w.ai CLI Guide
To get started with the CLI, install it as described in the Quick Start guide.
On a desktop environment
If you are on a device with a desktop environment and a browser, start by logging in:
```
wai login
```

And then run using:

```
wai run
```

On a headless environment
1. Visit the w.ai dashboard.
2. Log in or create a w.ai account.
3. Select API Keys and create a new key.
You now have a key that you can set on the headless environment using
```
export W_AI_API_KEY=your_key_here
```

And then run normally (in the headless environment) as follows:

```
wai run
```

Headless environment key management
You can list generated API keys using
And revoke any using
For more info, run
Specifying GPUs (NVIDIA only)
If you want to run with a subset of your GPUs, add a `-g` flag to the run command followed by a list of the GPU IDs. For example, to run on GPUs 0 and 1, the command would be:
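The exact command is not reproduced here; assuming the IDs are passed after `-g` as described above, it might look like:

```
wai run -g 0 1
```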
If no GPU IDs are provided, it will run on GPU 0.
Docker
For Docker/Podman Linux containers, we provide optimized CLI images:
NVIDIA GPUs with CUDA (Recommended)
For optimal performance on NVIDIA GPUs:
NVIDIA drivers must be installed on the host machine to use CUDA; otherwise, w.ai will fall back to Vulkan. See nvidia-container-toolkit for more info.
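The exact image name and invocation are not shown here; a sketch of the usual pattern with the NVIDIA runtime, using a hypothetical image name `wai-cli`, would be:

```
# --gpus all requires nvidia-container-toolkit on the host;
# the image name "wai-cli" is a placeholder, not the published one
docker run --gpus all -e W_AI_API_KEY=your_key_here wai-cli
```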
AMD GPUs with Vulkan
For AMD GPUs:
Using environment files
Instead of passing the API key inline, you can use an environment file for better security:
Create a `.env` file:
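The file holds the same variable set earlier with `export`, for example:

```
W_AI_API_KEY=your_key_here
```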
Then specify the `.env` file in the docker command via `--env-file`:
macOS
M-series Macs do not support GPU passthrough in containers. As such, w.ai cannot run in a container on macOS and must run directly on the host machine.
Volume mounts
Volume mounting `~/.wombo:/root/.wombo` is highly recommended to avoid redownloading dependencies/models each time the container is restarted.
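Putting the pieces together, a sketch of a full invocation (again with the hypothetical image name `wai-cli`) that mounts the cache directory and loads the key from `.env`:

```
# -v persists downloaded dependencies/models across container restarts;
# "wai-cli" is a placeholder image name
docker run --gpus all --env-file .env -v ~/.wombo:/root/.wombo wai-cli
```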