The Universal AI Engine
Loom lets you train neural networks once and run them anywhere — Python, JavaScript, Go, iOS, Android, WebAssembly — with bit-identical results on every platform.
Independent AI Analysis of Loom
AI researchers performed an exhaustive technical analysis of the Loom architecture — listen to the podcast, or read the full findings.
Most AI frameworks process data in a straight line, like an assembly line. Loom uses a three-dimensional grid — more like how your brain's neurons actually connect, jumping across regions rather than always going layer by layer.
Loom can compress AI models by up to 98.4%. A model that normally takes gigabytes of storage can shrink to a fraction of that size — small enough to run on a phone or an old laptop with no internet required.
Traditional AI training has to push one global error signal backward through every layer at once. Loom's Target Propagation lets each part of the network learn independently — more like how neurons fire and strengthen in a real brain.
What is Loom, exactly?
"Think of Loom like SQLite — but for AI."
SQLite is a tiny database that runs inside your app with no server needed. Loom is the same idea for neural networks: a self-contained engine you can drop into any project, on any device, with no cloud account, no GPU server, no complicated setup.
A neural network learns by seeing examples — like showing a child thousands of pictures of cats until they know what a cat is. Loom provides all the tools to build and teach these networks.
Once trained, your model is a tiny file. Drop it into your Python script, your phone app, your website, or a game engine. Loom runs it everywhere with the exact same output.
Unlike ChatGPT or other AI services, Loom runs 100% locally on your device. Your data never leaves your machine. Perfect for privacy-sensitive apps or offline use.
On supported devices, Loom uses your GPU through WebGPU, training 17× to 65× faster than on the CPU. It works in browsers too.
Python developer? pip install welvet. JavaScript? npm install @openfluke/welvet. Go, C, C#, Rust? There are bindings for all of them. One model, every language.
Loom includes a full NEAT evolution engine — models can mutate and breed like living organisms. This powers SoulGlitch's creature evolution system.
Install in 30 seconds
Pick your language and paste the command. No account required.
Ships with precompiled native libraries for Windows, Linux, macOS, iOS, and Android. Zero Python dependencies. PyPI page →
Works in Node.js and browsers via WebAssembly. npm page →
Pure Go module. No CGO. Works with standard go build.
Docs on GitHub →
6.9 MB WASM bundle. Drop into any web page and run Loom in the browser. All releases →
Runs everywhere
Prebuilt native libraries for every major platform — just download and go.
What's under the hood
Loom isn't just a wrapper around PyTorch. It's a ground-up engine built for portability and precision.
Dense, MHA, SwiGLU, RMSNorm, LayerNorm, CNN 1D/2D/3D, Transposed Conv, RNN, LSTM, Embedding, KMeans, Softmax, Parallel, Sequential, Residual.
float64 all the way down to binary (1-bit), including fp8, fp4, int4, and ternary. Choose precision vs. model size at runtime.
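As a concrete illustration of the extreme end of that range, here is a minimal Go sketch of ternary quantization, mapping each float64 weight to {-1, 0, +1} against a fixed threshold. The `ternarize` function and its threshold are illustrative assumptions, not Loom's actual API:

```go
package main

import "fmt"

// ternarize maps each float64 weight to -1, 0, or +1 using a fixed
// threshold. A generic illustration of extreme quantization, not
// Loom's actual quantizer.
func ternarize(w []float64, threshold float64) []int8 {
	q := make([]int8, len(w))
	for i, v := range w {
		switch {
		case v > threshold:
			q[i] = 1
		case v < -threshold:
			q[i] = -1
		default:
			q[i] = 0
		}
	}
	return q
}

func main() {
	weights := []float64{0.83, -0.02, -0.61, 0.07, 0.4}
	fmt.Println(ternarize(weights, 0.3)) // [1 0 -1 0 1]
}
```

The trade-off is exactly the one named above: each weight now needs about 2 bits instead of 64, at the cost of precision.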
A full neuroevolution engine with mutation, crossover, and fitness selection. Models have a "DNA" signature for reproducible evolution.
Native bit-packed serialization shrinks model files by 98.4% compared to raw float storage. Plus SafeTensors support for HuggingFace compatibility.
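The 98.4% figure is what the arithmetic gives for the 1-bit case: one bit per weight instead of a 64-bit float is a 1 − 1/64 ≈ 98.4% reduction. A minimal Go sketch of the packing idea (the `packBits` helper is illustrative, not Loom's actual on-disk format):

```go
package main

import "fmt"

// packBits packs binary weights (+1 becomes bit 1, -1 becomes bit 0)
// eight per byte. Illustrative only.
func packBits(w []int8) []byte {
	out := make([]byte, (len(w)+7)/8)
	for i, v := range w {
		if v > 0 {
			out[i/8] |= 1 << (uint(i) % 8)
		}
	}
	return out
}

func main() {
	w := make([]int8, 1024)
	packed := packBits(w)   // 1024 bits packed into 128 bytes
	raw := len(w) * 8       // same weights stored as float64: 8 bytes each
	fmt.Printf("packed: %d bytes, raw float64: %d bytes\n", len(packed), raw)
	fmt.Printf("reduction: %.1f%%\n", 100*(1-float64(len(packed))/float64(raw)))
}
```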
An alternative to backpropagation where each layer is given a direct target. More biologically plausible and works for non-differentiable layers.
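A toy Go sketch of that layer-local idea: each scalar "layer" gets its own target and minimizes its own local squared error, with no gradient flowing back from a single global loss. The `trainLocal` function is an invented illustration, far simpler than Loom's implementation:

```go
package main

import "fmt"

// trainLocal trains a toy two-layer scalar network, output = w2*(w1*x).
// Each layer minimizes its own local squared error against a
// layer-specific target instead of receiving gradients from one
// global loss. Purely illustrative.
func trainLocal(x, yTrue, lr float64, steps int) (w1, w2 float64) {
	w1, w2 = 0.5, 0.5
	for i := 0; i < steps; i++ {
		h := w1 * x
		t2 := yTrue   // layer 2's target is the label itself
		t1 := t2 / w2 // layer 1's target: invert layer 2
		// Each layer updates from its own local error only.
		w2 -= lr * 2 * (w2*h - t2) * h
		w1 -= lr * 2 * (w1*x - t1) * x
	}
	return w1, w2
}

func main() {
	w1, w2 := trainLocal(1.0, 2.0, 0.1, 200)
	fmt.Printf("network output: %.3f (target 2.0)\n", w1*1.0*w2)
}
```

Because each update only needs a local target, the scheme still works when a layer is not differentiable, which is the property the blurb above is pointing at.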
An online learning architecture modelled after systolic arrays. Weight updates happen continuously as data streams through, no epochs needed.
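The "no epochs" idea can be sketched in a few lines of Go: update the model immediately from each sample as it arrives, then discard the sample. The `learnStream` helper is illustrative only and says nothing about Loom's actual systolic-array design:

```go
package main

import "fmt"

// learnStream fits a scalar linear model y ~ w*x with one immediate
// update per arriving sample: no stored dataset, no epochs, just a
// stream. A sketch of the epoch-free idea only.
func learnStream(samples [][2]float64, lr float64) float64 {
	w := 0.0
	for _, s := range samples {
		x, y := s[0], s[1]
		w -= lr * (w*x - y) * x // update now, then discard the sample
	}
	return w
}

func main() {
	// Simulate a stream whose true relationship is y = 3x.
	var stream [][2]float64
	for i := 0; i < 100; i++ {
		x := 0.5 + float64(i%3)*0.5 // x cycles through 0.5, 1.0, 1.5
		stream = append(stream, [2]float64{x, 3 * x})
	}
	fmt.Printf("learned w: %.3f (true slope 3.0)\n", learnStream(stream, 0.1))
}
```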
See Loom In Action
Real demos — benchmarks, Android offline inference, 3D network visualization, and tooling built on Loom.
Star Loom on GitHub
Loom is free, open-source, and built in the open. Stars help others find it and fuel continued development.