You now have a key that you can set on the headless environment using:

```bash
export W_AI_API_KEY=yourkeyhere
```
And then run normally (in the headless environment) as follows:

```bash
wai run
```
## Headless environment key management
You can list generated API keys using:

And revoke any of them using:

For more info, run:
## Specifying GPUs (NVIDIA only)
If you want to run on a subset of your GPUs, add a `-g` flag to the run command followed by a list of the GPU IDs. For example, to run on GPUs 0 and 1, the command would be:
NVIDIA drivers must be installed on the host machine to use CUDA; otherwise w.ai will fall back to Vulkan. See `nvidia-container-toolkit` for more info.
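As a concrete illustration of the `-g` flag, a docker-based run restricted to GPUs 0 and 1 might look like the sketch below. The image name is a placeholder, and the exact ID list format (comma- vs space-separated) is an assumption; check the actual run command for your setup.

```
# Hypothetical sketch -- image name and -g argument format are assumptions.
# --gpus all exposes the NVIDIA GPUs to the container
# (requires nvidia-container-toolkit on the host).
docker run --gpus all \
  -e W_AI_API_KEY=yourkeyhere \
  <your-wai-image> -g 0,1
```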
## AMD GPUs with Vulkan
For AMD GPUs:
## Using environment files
Instead of passing the API key inline, you can use an environment file for better security:
Create a `.env` file:

And specify the `.env` file in the docker command:
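For instance, the two steps above could look like the following sketch (the docker image name in the comment is a placeholder, not the real image):

```shell
# Write the API key into a .env file (keep it out of version control):
cat > .env <<'EOF'
W_AI_API_KEY=yourkeyhere
EOF

# Docker's standard --env-file flag then loads it into the container:
#   docker run --env-file .env <your-wai-image>
```

With `--env-file`, the key never appears on your shell command line or in your shell history.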
## macOS

M-series (Apple silicon) Macs do not support GPU passthrough in containers. As such, running w.ai in a container on macOS is unsupported; it must run directly on the host machine.
## Volume mounts
Volume mounting `~/.wombo:/root/.wombo` is highly recommended to avoid redownloading dependencies/models each time the container is restarted.
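Put together, a run command with the recommended mount might look like this sketch (the image name is a placeholder; docker's `-v` flag is the standard bind-mount syntax):

```
# Hypothetical sketch -- substitute your actual image and run arguments.
# -v persists downloaded dependencies/models across container restarts.
docker run \
  -v ~/.wombo:/root/.wombo \
  -e W_AI_API_KEY=yourkeyhere \
  <your-wai-image>
```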