Jupyter Notebooks
What is a Jupyter Notebook?
A Jupyter Notebook is an open-source web application for creating and sharing documents containing:
Live code
Equations
Visualizations
Narrative text
It is a standard tool for data science, machine learning, and AI research.
Getting Started with Your Notebook
When you open your rented Jupyter environment, you'll see the Jupyter file browser. From here, you can:
Create a new notebook: Click New → Python 3 Kernel to start a fresh notebook.
Upload files: Use the Upload button to bring in datasets or existing notebooks.
Using the GPU in Your Notebook
NVIDIA CUDA Environment
Your CUDA environment comes with PyTorch pre-installed.
Verify GPU access:
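A quick check with the pre-installed PyTorch might look like this:

```python
import torch

# Confirm that PyTorch can see the CUDA device
print(torch.cuda.is_available())      # should print True
print(torch.cuda.device_count())      # number of visible GPUs
print(torch.cuda.get_device_name(0))  # name of the rented GPU
```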
Load and run a model on GPU:
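A minimal sketch, using a toy nn.Linear model as a stand-in for your own: moving the model and its inputs to the cuda device is all that is required.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model standing in for your own; .to(device) moves its weights to the GPU
model = nn.Linear(128, 10).to(device)

# Inputs must live on the same device as the model
x = torch.randn(32, 128, device=device)

with torch.no_grad():
    out = model(x)

print(out.shape, out.device)  # torch.Size([32, 10]) cuda:0
```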
Apple MLX Environment
Your MLX environment comes with Apple's ML framework pre-installed.
Check the MLX backend:
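For example, you can confirm that MLX defaults to the Metal GPU device:

```python
import mlx.core as mx

# On Apple silicon, MLX computes on the GPU by default
print(mx.default_device())  # e.g. Device(gpu, 0)
```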
Create arrays on the GPU:
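A small sketch: MLX arrays live in unified memory and operations are evaluated lazily.

```python
import mlx.core as mx

# Arrays are allocated in unified memory, shared by the CPU and GPU
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

c = a @ b    # builds the computation lazily
mx.eval(c)   # forces evaluation on the GPU
print(c.shape, c.dtype)
```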
Run language models with mlx-lm:
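A sketch using the mlx-lm Python API. The model identifier below is just an example of an MLX-converted checkpoint from the Hugging Face Hub, and you may need to !pip install mlx-lm first if it isn't already present.

```python
from mlx_lm import load, generate

# Example model; any MLX-converted checkpoint works here
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    verbose=True,
)
```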
Installing Additional Packages
Use pip directly in a notebook cell, for example:
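```python
# Example packages; replace with whatever your project needs
!pip install transformers datasets accelerate
```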
Remember: Installed packages are ephemeral and do not persist after your session ends. Include your pip install commands at the top of your notebook for future sessions.
Learn more about Jupyter Notebooks at the official docs: https://docs.jupyter.org/en/latest
Tips for Productive Sessions
Save your work frequently. Download notebooks and outputs before your session expires.
Monitor VRAM usage. Use
!nvidia-smi(CUDA) or check memory in your code to avoid out-of-memory errors.Use efficient data types. Load models in
float16orint4to maximize VRAM usage.Plan for ephemeral storage. Upload datasets at the start of each session and download results before ending.
Extend if needed. If your training job requires more time, extend your rental before it expires to avoid interruptions.
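A minimal sketch for monitoring VRAM from inside a CUDA notebook, assuming PyTorch is the framework holding the memory:

```python
import torch

# Memory currently held by PyTorch tensors vs. reserved by its caching allocator
print(f"allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1e9:.2f} GB")

# Or ask the driver directly for overall GPU memory and utilization
!nvidia-smi
```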
Common Workflows
| Workflow | Recommended GPU | Typical Duration |
| --- | --- | --- |
| Prototyping & Testing | Any available GPU | 30 min – 1 hour |
| Fine-tuning small models | RTX 4070+ (12+ GB VRAM) | 2 – 4 hours |
| Fine-tuning large models | RTX 4090 / Multi-GPU (24+ GB VRAM) | 4 – 8 hours |
| Inference & Evaluation | Any GPU matching model requirements | 30 min – 1 hour |
| Data Processing | Any GPU with sufficient VRAM | 1 – 2 hours |
| MLX Fine-tuning or Inference | Apple M-series (16+ GB unified memory) | 1 – 2 hours |