# w\.ai CLI Guide

To get started with the CLI, install it as described in [quick-start](https://docs.w.ai/get-started/quick-start "mention").

## On a desktop environment

If you are on a device with a desktop environment and a browser, start by logging in:

```bash
wai login
```

Then run it with:

```bash
wai run
```

## On a headless environment

1. Visit the [w.ai dashboard](https://app.w.ai/dashboard).
2. Log in or create a w\.ai account.
3. Select `Auth API Keys` and create a new key.

You now have a key that you can set in the headless environment using

```bash
export W_AI_API_KEY="your_key_here"
```
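
If you launch `wai run` from a script or service on the headless machine, it can help to fail fast with a clear message when the key was never exported. This is a generic shell sketch, not part of the wai CLI:

```shell
# Warn early if the API key is missing from the environment.
if [ -z "${W_AI_API_KEY:-}" ]; then
  echo "W_AI_API_KEY is not set; export it before running wai" >&2
else
  echo "W_AI_API_KEY is set; safe to start 'wai run'"
fi
```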

And then run normally (in the headless environment) as follows:

```bash
wai run
```

### Headless environment key management

You can list generated API keys using

```bash
wai key list
```

And revoke a key using

```bash
wai revoke <token>
```

For more info, run

```bash
wai key help
```

## Specifying GPUs (NVIDIA only)

If you want to run with a subset of your GPUs, add a `-g` flag to the run command followed by a list of the GPU IDs. For example, to run on GPUs 0 and 1, the command would be:

```bash
wai run -g 0 1
```

If no GPU IDs are provided, it will run on GPU 0.
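
To find out which index belongs to which card before choosing a subset, `nvidia-smi -L` on the host prints one line per GPU with its index. This assumes the NVIDIA driver utilities are installed; the guard below simply avoids an error when they are not:

```shell
# Print one line per NVIDIA GPU, e.g. "GPU 0: NVIDIA GeForce RTX 4090 (UUID: ...)".
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L
else
  echo "nvidia-smi not found; install the NVIDIA driver first" >&2
fi
```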

### Docker

For Docker/Podman Linux containers, we provide [optimized CLI images](https://hub.docker.com/r/wdotai/wai/tags):

**NVIDIA GPUs with CUDA (Recommended)**

For optimal performance on NVIDIA GPUs:

```bash
docker run --gpus all \
           -v ~/.wombo:/root/.wombo \
           -e W_AI_API_KEY=your_key_here \
           wdotai/wai:latest run
```

NVIDIA drivers must be installed on the host machine to use CUDA; otherwise, w\.ai will fall back to Vulkan. See [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit?tab=readme-ov-file#getting-started) for more info.

**AMD GPUs with Vulkan**

For AMD GPUs:

```bash
docker run --device=/dev/dri:/dev/dri \
           -v ~/.wombo:/root/.wombo \
           -e W_AI_API_KEY=your_key_here \
           wdotai/wai:latest run
```

#### Using environment files

Instead of passing the API key inline, you can use an environment file for better security:

Create a `.env` file:

```bash
echo "W_AI_API_KEY=your_key_here" > .env
```

Then specify the `.env` file in the docker command:

```bash
docker run <GPU CONFIGURATION> \
           -v ~/.wombo:/root/.wombo \
           --env-file .env \
           wdotai/wai:latest run
```
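
Because the `.env` file holds a secret, it is worth restricting it to your own user. This is general shell hygiene rather than something the wai CLI requires:

```shell
# Restrict the env file created above to the owner only (mode 600).
chmod 600 .env
ls -l .env    # the permissions column should now read -rw-------
```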

#### macOS

M-series Macs do not support GPU passthrough in containers. As such, w\.ai on macOS is unsupported through containers and must run directly on the host machine.

#### Volume mounts

Volume mounting `~/.wombo:/root/.wombo` is highly recommended to avoid redownloading dependencies and models each time the container is restarted.
