Deploy and run ML experiments on local or remote GPU servers. Use when the user says "run experiment", "deploy to server", "跑实验" ("run the experiment"), or needs to launch training jobs.
Deploy and run ML experiment: $ARGUMENTS
Read the project's AGENTS.md to determine the experiment environment:
- `gpu: vast`: vast_instance, SSH host/port, remote path, and optional auto_destroy
- `gpu: modal`: app/function name, image/dependency setup, and secrets

If no server info is found in AGENTS.md, ask the user.
Check GPU availability on the target machine:
Remote:
ssh <server> nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader

Local:
nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader
# or for Mac MPS:
python -c "import torch; print('MPS available:', torch.backends.mps.is_available())"Free GPU = memory.used < 500 MiB.
Check the project's AGENTS.md for a code_sync setting. If not specified, default to rsync.
Only sync necessary files — NOT data, checkpoints, or large files:
# --include='*/' lets rsync recurse into subdirectories; -m prunes empty ones
rsync -avzm --include='*/' --include='*.py' --exclude='*' <local_src>/ <server>:<remote_dst>/

If code_sync: git is set in AGENTS.md, push local changes to the remote repo, then pull on the server:
# 1. Push from local
git add -A && git commit -m "sync: experiment deployment" && git push
# 2. Pull on server
ssh <server> "cd <remote_dst> && git pull"Benefits: version-tracked, multi-server sync with one push, no rsync include/exclude rules needed.
If gpu: vast is configured, treat the Vast.ai machine as a remote server with an explicit lifecycle:
Do not silently ignore a requested Vast.ai route. If Vast.ai CLI credentials or instance metadata are missing, stop and ask the user to configure them.
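A preflight sketch, assuming the official `vastai` CLI is installed and authenticated, with `vast_instance` read from AGENTS.md (command names follow the vastai CLI; verify against `vastai --help` for your installed version):

```bash
# Confirm the CLI is configured and the instance exists before doing anything else.
vastai show instance <vast_instance> \
  || { echo "Vast.ai CLI not configured or instance not found; ask the user" >&2; exit 1; }
```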
(Only if wandb: true in AGENTS.md.) Skip this step entirely if wandb is not set or is false in AGENTS.md.
Before deploying, ensure the experiment scripts have W&B logging:
Check if wandb is already in the script — look for import wandb or wandb.init. If present, skip to Step 4.
If not present, add W&B logging to the training script:
import wandb
wandb.init(project=WANDB_PROJECT, name=EXP_NAME, config={...hyperparams...})
# Inside training loop:
wandb.log({"train/loss": loss, "train/lr": lr, "step": step})
# After eval:
wandb.log({"eval/loss": eval_loss, "eval/ppl": ppl, "eval/accuracy": acc})
# At end:
wandb.finish()

Metrics to log (add whichever apply to the experiment):
- `train/loss`: training loss per step
- `train/lr`: learning rate
- `eval/loss`, `eval/ppl`, `eval/accuracy`: eval metrics per epoch
- `gpu/memory_used`: GPU memory (via torch.cuda.max_memory_allocated())
- `speed/samples_per_sec`: throughput

(A sketch of logging the GPU and throughput metrics follows at the end of this step.) Verify wandb login on the target machine:
ssh <server> "wandb status" # should show logged in
# If not logged in:
ssh <server> "wandb login <WANDB_API_KEY>"The W&B project name and API key come from
AGENTS.md(see example below). The experiment name is auto-generated from the script name + timestamp.
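For the gpu/memory_used and speed/samples_per_sec metrics above, a minimal sketch, assuming PyTorch on CUDA; `train_loader` and `batch_size` stand in for your own training setup:

```python
import time

import torch
import wandb

start = time.time()
samples_seen = 0

for step, batch in enumerate(train_loader):  # train_loader/batch_size come from your setup
    # ... forward / backward / optimizer step ...
    samples_seen += batch_size
    if step % 50 == 0:
        wandb.log({
            "gpu/memory_used": torch.cuda.max_memory_allocated() / 2**20,  # MiB
            "speed/samples_per_sec": samples_seen / (time.time() - start),
            "step": step,
        })
```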
For each experiment, create a dedicated screen session with GPU binding:
ssh <server> "screen -dmS <exp_name> bash -c '\
eval \"\$(<conda_path>/conda shell.bash hook)\" && \
conda activate <env> && \
CUDA_VISIBLE_DEVICES=<gpu_id> python <script> <args> 2>&1 | tee <log_file>'"

On Vast.ai, use the same SSH + screen pattern, but include the Vast.ai instance id, public SSH endpoint, and remote working directory in the report. If auto_destroy: true, write a cleanup command to the run notes before launch.
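To keep an eye on a launched run without disturbing it, two standard patterns (plain screen/tail usage; placeholders as in the launch command):

```bash
# Tail the live log without attaching to the screen session
ssh <server> "tail -n 50 -f <log_file>"

# Reattach interactively; press Ctrl-A then D to detach again
ssh -t <server> "screen -r <exp_name>"
```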
Record the estimated hourly cost, expected run duration, and cleanup owner. If the command fails to start or the instance becomes unreachable, do not relaunch blindly; capture the logs and ask the user for a second opinion before spending more GPU time.
If gpu: modal is configured, deploy through Modal instead of SSH:
modal run <module_or_app>.py -- <args>

Before launch, verify required secrets, volumes, image dependencies, and output persistence. If Modal is requested but the project lacks Modal configuration, stop and ask the user to configure it rather than falling back to local execution.
Record the Modal app/function name, GPU type, timeout, mounted volumes, and where results will be stored. If Modal reports an image, secret, or volume error, preserve the exact error and fix the configuration before retrying.
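For reference, a minimal sketch of the Modal app shape that matches the AGENTS.md example below, assuming the `wandb-secret` secret and `experiment-results` volume already exist in your Modal workspace (check Modal's current docs, as the API evolves):

```python
import modal

app = modal.App("train")
image = modal.Image.debian_slim().pip_install("torch", "wandb")
results = modal.Volume.from_name("experiment-results")

@app.function(
    image=image,
    gpu="A100",                                        # GPU type for the run
    timeout=4 * 60 * 60,                               # seconds; size to the expected duration
    secrets=[modal.Secret.from_name("wandb-secret")],  # W&B API key
    volumes={"/results": results},                     # persist outputs here
)
def train(args: str = "") -> None:
    # ... run training here, writing checkpoints/metrics under /results ...
    results.commit()  # flush volume writes so they persist after the run

@app.local_entrypoint()
def main(args: str = "") -> None:
    train.remote(args)
```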
If no remote GPU is configured, run locally:

# Linux with CUDA
CUDA_VISIBLE_DEVICES=<gpu_id> python <script> <args> 2>&1 | tee <log_file>
# Mac with MPS (PyTorch uses MPS automatically)
python <script> <args> 2>&1 | tee <log_file>

For local long-running jobs, use run_in_background: true to keep the conversation responsive.
Remote:
ssh <server> "screen -ls"Local: Check process is running and GPU is allocated.
After deployment is verified, check ~/.codex/feishu.json:
- experiment_done notification: which experiments launched, which GPUs, estimated time
- "off": skip entirely (no-op)

If gpu: vast and auto_destroy: true, destroy the instance only after the experiment has completed and results/logs/checkpoints have been copied or otherwise persisted. If any artifact copy fails, do not destroy the instance.
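A sketch of the copy-then-destroy order; the endpoint, port, and instance id mirror the AGENTS.md example below, and the results/ path is illustrative. The `vastai destroy instance` command name is an assumption about the vastai CLI; confirm with `vastai --help` before relying on it:

```bash
# 1) Copy artifacts off the instance first; abort cleanup if the copy fails.
rsync -avz -e "ssh -p 12345" root@ssh.vast.ai:/workspace/experiments/results/ ./results/ \
  || { echo "artifact copy failed; NOT destroying instance" >&2; exit 1; }

# 2) Only then destroy the instance.
vastai destroy instance 123456
```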
- Use tee to save logs for later inspection
- Use run_in_background: true to keep the conversation responsive

Users should add their server info to their project's AGENTS.md:
## Remote Server
- SSH: `ssh my-gpu-server`
- GPU: 4x A100 (80GB each)
- Conda: `eval "$(/opt/conda/bin/conda shell.bash hook)" && conda activate research`
- Code dir: `/home/user/experiments/`
- code_sync: rsync # default. Or set to "git" for git push/pull workflow
- wandb: false # set to "true" to auto-add W&B logging to experiment scripts
- wandb_project: my-project # W&B project name (required if wandb: true)
- wandb_entity: my-team # W&B team/user (optional, uses default if omitted)
## Vast.ai
- gpu: vast
- vast_instance: 123456
- SSH: `ssh -p 12345 root@ssh.vast.ai`
- Code dir: `/workspace/experiments/`
- auto_destroy: false
## Modal
- gpu: modal
- modal_app: `train.py`
- modal_secrets: `wandb-secret`
- modal_volume: `experiment-results`
## Local Environment
- Mac MPS / Linux CUDA
- Conda env: `ml` (Python 3.10 + PyTorch)

W&B setup: Run `wandb login` on your server once (or set the WANDB_API_KEY env var). The skill reads project/entity from AGENTS.md and adds wandb.init() + wandb.log() to your training scripts automatically. Dashboard: https://wandb.ai/<entity>/<project>.