ML engineers juggle 10–20 API keys across OpenAI, Anthropic, Mistral, HuggingFace, W&B, and more.
Keys rotate, leak in committed .env files, and there's no standard way to manage them across experiments.
llmvlt fixes that: one encrypted vault, provider-aware validation, and safe subprocess injection.
# macOS / Linux
$ curl -sSL https://llmvlt.dev/install.sh | sh

# Windows (PowerShell)
> irm https://llmvlt.dev/install.ps1 | iex

# From source (requires a Go toolchain)
$ go install github.com/moronim/llmvlt@latest
Secrets are encrypted at rest with a master password. Single-file vault, 0600 permissions. Plaintext secrets are never written to disk.
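The vault format isn't specified here, but the at-rest pattern can be sketched with the Python standard library. Everything below is illustrative, not llmvlt's actual implementation: scrypt is an assumed key-derivation function, and the ciphertext is a stand-in produced elsewhere.

```python
import hashlib
import os
import stat
import tempfile

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Memory-hard KDF, so brute-forcing the master password stays expensive.
    return hashlib.scrypt(master_password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def create_vault(path: str, ciphertext: bytes) -> None:
    # O_EXCL refuses to clobber an existing vault; mode 0o600 keeps it owner-only.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(ciphertext)

vault_path = os.path.join(tempfile.mkdtemp(), "demo.vault")
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
create_vault(vault_path, b"...ciphertext produced elsewhere...")
print(len(key), oct(stat.S_IMODE(os.stat(vault_path).st_mode)))
```

Creating the file with an explicit mode at `os.open` time (rather than `chmod` afterwards) avoids a window where the vault briefly exists with looser permissions.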
Knows that OpenAI keys start with sk- and Hugging Face tokens with hf_. Blocks malformed keys before they waste your GPU hours.
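Provider-aware validation boils down to a table of per-provider patterns. A minimal sketch, with hypothetical regexes (llmvlt's real rules may be stricter or track newer key formats):

```python
import re

# Hypothetical prefix/shape patterns; real provider formats change over time.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}

def validate(provider: str, key: str) -> bool:
    # Unknown providers fail closed rather than silently passing.
    pattern = KEY_PATTERNS.get(provider)
    return bool(pattern and pattern.fullmatch(key))

print(validate("openai", "sk-" + "a" * 40))   # True
print(validate("openai", "hf_" + "a" * 40))   # False: wrong prefix for OpenAI
```

Failing fast on a bad prefix is cheap; the expensive alternative is discovering a typo'd key only after a training job has already queued and started.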
llmvlt run -- python train.py injects secrets into the child process environment only. They never touch your shell session or history.
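The injection pattern itself is standard: copy the parent environment, add secrets to the child's copy only. A sketch of the idea (not llmvlt's code) using `subprocess`:

```python
import os
import subprocess
import sys

def run_with_secrets(cmd, secrets):
    # Secrets go into the child's environment copy only; the parent
    # process and the user's shell never see them.
    env = dict(os.environ, **secrets)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

result = run_with_secrets(
    [sys.executable, "-c", "import os; print(os.environ['OPENAI_API_KEY'])"],
    {"OPENAI_API_KEY": "sk-demo"},
)
print(result.stdout.strip())  # sk-demo
```

Because nothing is `export`ed and no command line carries the key, the secret never lands in shell history, `ps` output, or dotfiles.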
12 built-in presets for OpenAI, Anthropic, Hugging Face, Cohere, Mistral, W&B, and more. One command to scaffold a provider's keys.
Tag runs and review which keys were active for each. llmvlt history shows your experiment audit trail.
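An audit trail like this is typically an append-only log of key *names*, never values. A minimal sketch under that assumption (the file name and record shape here are illustrative, not llmvlt's on-disk format):

```python
import json
import os
import tempfile
import time

def log_run(logfile: str, tag: str, key_names) -> None:
    # Record which named keys (never their values) were active for a run.
    entry = {"ts": time.time(), "tag": tag, "keys": sorted(key_names)}
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

def history(logfile: str):
    # One JSON object per line: append-only and pipe-friendly.
    with open(logfile) as f:
        return [json.loads(line) for line in f]

logfile = os.path.join(tempfile.mkdtemp(), "history.jsonl")
log_run(logfile, "exp-042", ["OPENAI_API_KEY", "WANDB_API_KEY"])
for entry in history(logfile):
    print(entry["tag"], entry["keys"])
```

Logging names rather than values means the audit trail can be shared or committed without itself becoming a secret.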
Inject secrets as shell exports, .env files, or Jupyter os.environ cells. Output is pipe-friendly.
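The three output formats are different renderings of the same key/value map. A sketch of what each might look like (format names and exact quoting are assumptions, not llmvlt's documented output):

```python
def render(secrets: dict, fmt: str) -> str:
    # Same secrets, three sink formats.
    if fmt == "export":
        return "\n".join(f"export {k}={v!r}" for k, v in secrets.items())
    if fmt == "dotenv":
        return "\n".join(f"{k}={v}" for k, v in secrets.items())
    if fmt == "jupyter":
        lines = ["import os"]
        lines += [f"os.environ[{k!r}] = {v!r}" for k, v in secrets.items()]
        return "\n".join(lines)
    raise ValueError(f"unknown format: {fmt}")

secrets = {"OPENAI_API_KEY": "sk-demo"}
print(render(secrets, "dotenv"))   # OPENAI_API_KEY=sk-demo
```

One value per line with no decoration is what makes the output composable with `eval`, `source`, or a redirect into a notebook cell.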
Provider-aware scaffolding: each preset knows the expected key names, key formats, and rotation policies.
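A preset is essentially a small record per provider. A hypothetical shape, to make the idea concrete (field names, env var names, and rotation intervals here are all illustrative):

```python
from dataclasses import dataclass

@dataclass
class Preset:
    # Illustrative preset shape: env var names, a format hint, rotation advice.
    env_vars: list
    prefix: str
    rotation_days: int

PRESETS = {
    "openai": Preset(["OPENAI_API_KEY"], "sk-", 90),
    "huggingface": Preset(["HF_TOKEN"], "hf_", 90),
    "wandb": Preset(["WANDB_API_KEY"], "", 180),
}

def scaffold(provider: str) -> dict:
    # Emit placeholder entries for every env var the provider expects.
    preset = PRESETS[provider]
    return {name: f"<paste {provider} key here>" for name in preset.env_vars}

print(scaffold("openai"))
```

Centralizing this per-provider knowledge is what lets one scaffold command, one validator, and one rotation reminder all stay in sync.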