LOOM Ecosystem

GPU-accelerated neural networks in Go. Train models anywhere, deploy everywhere—browser, desktop, mobile. WebGPU compute, WASM export, multi-language bindings.

LOOM: Layered Omni-architecture Openfluke Machine

Written in Go 1.24+ · Apache-2.0 license · WebGPU + CPU

LOOM is a modern neural network framework combining Go's simplicity with GPU acceleration. Features 5 layer types (Dense, Conv2D, Multi-Head Attention, RNN, LSTM), native Mixture of Experts, and seamless deployment across browsers (WASM), desktop, and mobile platforms with multi-language bindings.

WebGPU Compute · WASM · 5 Layer Types · MoE (Grid Softmax) · Multi-Language · Cross-Platform

Core Features

LOOM provides a complete neural network framework with GPU acceleration, flexible architectures, and deployment options for any platform.

Multi-Language Packages

Use LOOM from your favorite programming language with official bindings for Python, TypeScript/JavaScript, C#/.NET, and C/C++/Rust via C-ABI.

Deployment Targets

Deploy LOOM neural networks anywhere: browsers via WASM, desktop applications, mobile apps, or cloud servers. Same model, same code, runs everywhere.

Browser: WebGPU / CPU · WASM (5.4 MB) · zero dependencies
Desktop: Windows · Linux · macOS
Mobile: Android · iOS (cross-compiled)
Server: Linux ARM64 · x86_64 · cloud ready

Performance: GPU acceleration via WebGPU compute shaders on supported platforms, with a CPU fallback that ensures compatibility everywhere.

Open Source & Community

LOOM Resources

Related Projects