- 🎯 MNIST Dataset Loading - Automatic download and preprocessing
- 🧠 Network Architecture - Type-generic float32 networks
- ⚡ WebGPU Acceleration - GPU-optimized training
- 📊 ADHD Performance Evaluation - Advanced scoring metrics
Type-generic, modular, and distributed neural networks with WebGPU acceleration, micro-network surgery, and revolutionary ADHD performance evaluation.
Live Neural Network Training
Distributed Micro-Network Evolution
Type-Generic Networks
Support for float32, float64, and even integer neural networks. One codebase, multiple numeric types.
Network[float32] // GPU optimized
Network[int64] // Memory efficient
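The same layer specification can back networks of different numeric types. A minimal sketch, assuming the NewNetwork constructor used in the quick-start examples below:

// Minimal sketch: one layer spec, two numeric types.
// NewNetwork's arguments follow the quick-start examples below.
layers := []struct{ Width, Height int }{{28, 28}, {16, 16}, {10, 1}}
acts := []string{"linear", "relu", "softmax"}
full := []bool{true, true, true}

gpuNet := paragon.NewNetwork[float32](layers, acts, full) // GPU-friendly precision
intNet := paragon.NewNetwork[int64](layers, acts, full)   // compact integer arithmetic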
Micro-Network Surgery
Extract, modify, and reintegrate neural network segments without losing model integrity or performance.
micro := net.ExtractMicroNetwork(layer)
improved.ReattachToOriginal(net)
ADHD Performance
Accuracy Deviation Heatmap Distribution scoring for precise model evaluation and optimization.
net.EvaluateModel(expected, actual)
score := net.ComputeFinalScore()
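In practice you pass the expected and predicted outputs for a batch and read back a single score. A minimal sketch, where the []float64 label slices are an assumption for illustration:

// Minimal sketch: ADHD scoring for a batch of predictions.
// The []float64 label slices are assumed for illustration; the
// EvaluateModel and ComputeFinalScore calls match the snippets above.
expected := []float64{7, 2, 1, 0, 4} // ground-truth labels
actual := []float64{7, 2, 1, 0, 9}   // model outputs (one mistake)

net.EvaluateModel(expected, actual)
score := net.ComputeFinalScore()
fmt.Printf("ADHD score: %.2f\n", score)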
WebGPU Acceleration
Native GPU acceleration using WebGPU for lightning-fast training and inference on modern hardware.
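Enabling GPU execution is a single flag on the network. A minimal sketch, assuming the WebGPUNative field and Train call shown in the quick-start example below; the input and target slices are placeholders:

// Minimal sketch: enable native WebGPU before training.
nn := paragon.NewNetwork[float32](
    []struct{ Width, Height int }{{28, 28}, {16, 16}, {10, 1}},
    []string{"linear", "relu", "softmax"},
    []bool{true, true, true},
)
nn.WebGPUNative = true // run training and inference on the GPU

nn.Train(inputs, targets, 100, 0.01) // same call as on the CPU path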
Dynamic Growth
Networks that grow and evolve automatically based on performance metrics and data complexity.
net.Grow(checkpointLayer, data)
net.AddLayer(width, height, activation)
Distributed Training
Leverage multiple devices for parallel training with automatic load balancing and fault tolerance.
workers := []Device{phone, laptop}
net.TrainDistributed(workers)
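Putting the two calls together: a minimal sketch, where Device, phone, and laptop mirror the snippet above and stand in for whatever worker descriptors your deployment provides.

// Minimal sketch: distribute training for an existing network.
nn := paragon.NewNetwork[float32](layerSizes, activations, fullyConnected)

workers := []Device{phone, laptop}
nn.TrainDistributed(workers) // batches are balanced across workers automatically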
Paragon's architecture is built around modular components that can be composed, extended, and distributed across different computing environments.
package main

import (
    "fmt"

    "github.com/openfluke/paragon/v3"
)

func main() {
    // Create a type-generic neural network
    nn := paragon.NewNetwork[float32](
        []struct{ Width, Height int }{
            {28, 28}, // Input layer
            {16, 16}, // Hidden layer
            {10, 1},  // Output layer
        },
        []string{"linear", "relu", "softmax"},
        []bool{true, true, true},
    )

    // Enable WebGPU acceleration
    nn.WebGPUNative = true

    // Train the network
    nn.Train(inputs, targets, 100, 0.01)

    // Evaluate performance
    nn.EvaluateModel(expected, actual)
    fmt.Printf("Final Score: %.2f\n", nn.ComputeFinalScore())
}
// Advanced: Micro-network surgery and dynamic growth
func advancedTraining(nn *paragon.Network[float32]) {
    // Extract micro-network for optimization
    micro := nn.ExtractMicroNetwork(1) // From layer 1

    // Try improvements with different architectures
    improved, success := micro.TryImprovement(
        checkpointData,
        8, 32, // Width range
        8, 32, // Height range
        []string{"relu", "tanh", "leaky_relu"},
    )

    if success {
        // Reattach improved micro-network
        improved.ReattachToOriginal(nn)
        fmt.Println("🚀 Network improved!")
    }

    // Enable dynamic growth
    nn.Grow(
        1,               // Checkpoint layer
        testInputs,      // Test data
        expectedOutputs, // Expected results
        25,              // Number of candidates
        10,              // Training epochs
        0.01,            // Learning rate
        1e-6,            // Tolerance
        1.0, -1.0,       // Clip bounds
        8, 32, 8, 32,    // Layer dimensions
        []string{"relu", "tanh"}, // Activations
        8,               // Max threads
    )
}
// Distributed training across multiple devices
func distributedTraining() {
    // Setup network with distributed capabilities
    nn := paragon.NewNetwork[float32](
        layerSizes,
        activations,
        fullyConnected,
    )

    // Configure worker devices
    maxThreads := runtime.NumCPU() * 2 // Include remote devices

    // Batch training with distributed micro-network improvements
    batchSize := len(trainInputs) / numBatches
    for batch := 0; batch < numBatches; batch++ {
        start := batch * batchSize
        end := start + batchSize
        batchInputs := trainInputs[start:end]
        batchTargets := trainTargets[start:end]
        _ = batchTargets // targets would drive a supervised Train pass per batch

        // Distributed growth across multiple devices
        if nn.Grow(
            checkpointLayer,
            batchInputs,
            expectedLabels,
            50,   // More candidates with distributed processing
            5,    // Epochs per worker
            0.01, // Learning rate
            1e-6, // Tolerance
            1.0, -1.0,      // Clip bounds
            16, 64, 16, 64, // Layer size range
            []string{"relu", "tanh", "leaky_relu"},
            maxThreads, // Distributed workers
        ) {
            fmt.Printf("✅ Batch %d improved network!\n", batch)
        }
    }

    // Print growth history
    nn.PrintGrowthHistory()
}
- Training Speed vs. traditional frameworks
- Memory Efficiency with integer networks
- GPU Utilization via WebGPU acceleration
- Scalability through distributed architecture
Deploy intelligent models on mobile devices, IoT sensors, and edge computing platforms. Paragon's integer support and memory efficiency make it perfect for resource-constrained environments.
Rapid experimentation with novel architectures, ablation studies, and distributed training research. Type-generic design allows testing with different numeric precisions.
Cost-effective AI model training for enterprises looking to reduce cloud computing costs. Distribute training across existing hardware infrastructure.
Teach neural network concepts with clear, modular code. Students can experiment with different numeric types and see immediate results of architectural changes.
# Create a new Go project
mkdir my-paragon-project
cd my-paragon-project

# Initialize Go module
go mod init my-paragon-project

# Install Paragon v3 (current stable version)
go get github.com/openfluke/paragon/v3@v3.0.1
go get github.com/openfluke/webgpu@ad2e76f
Using Paragon v3.0.1 for Stability
We recommend using the stable v3.0.1 release for production projects. The latest development version is available on the main branch.
The best way to get started is with our comprehensive MNIST example that demonstrates network growth and distributed training:
Complete MNIST Training with Network Growth
# Clone the complete example
git clone https://github.com/openfluke/growonmnisttest.git
cd growonmnisttest

# Install dependencies
go mod tidy

# Run the full example
go run .

# Expected output:
# 🚀 Preparing MNIST Dataset...
# 📦 Loading MNIST data into memory...
# 🧠 Training float32 model...
# ⚡ Benchmarking inference...
# ✅ GPU: 12.5ms vs CPU: 156.3ms
# 🚀 Speedup: 12.5x
For a minimal example, create a main.go file:
package main

import (
    "fmt"

    "github.com/openfluke/paragon/v3"
)

func main() {
    // Create a simple neural network
    nn := paragon.NewNetwork[float32](
        []struct{ Width, Height int }{
            {28, 28}, // Input layer
            {16, 16}, // Hidden layer
            {10, 1},  // Output layer
        },
        []string{"linear", "relu", "softmax"},
        []bool{true, true, true},
    )

    // Enable WebGPU acceleration
    nn.WebGPUNative = true

    fmt.Println("🧠 Paragon Neural Network Ready!")
    fmt.Printf("📊 Network: %d layers\n", len(nn.Layers))
}
Next Steps
The growonmnisttest example is the perfect starting point - it demonstrates all of Paragon's key features including distributed training, network growth, and performance optimization in a real-world MNIST classification scenario.