Paragon

Next-Generation Neural Network Framework

Type-generic, modular, and distributed neural networks with WebGPU acceleration, micro-network surgery, and revolutionary ADHD performance evaluation.

WebGPU Accelerated Type Generic Distributed

Live Neural Network Training

Distributed Micro-Network Evolution

Revolutionary Features

Type-Generic Networks

Support for float32, float64, and even integer neural networks. One codebase, multiple numeric types.

Network[float32] // GPU optimized
Network[int64] // Memory efficient

Micro-Network Surgery

Extract, modify, and reintegrate neural network segments without losing model integrity or performance.

micro := net.ExtractMicroNetwork(layer)
improved.ReattachToOriginal(net)

ADHD Performance

Accuracy Deviation Heatmap Distribution scoring for precise model evaluation and optimization.

net.EvaluateModel(expected, actual)
score := net.ComputeFinalScore()

WebGPU Acceleration

Native GPU acceleration using WebGPU for lightning-fast training and inference on modern hardware.

GPU Optimized

Dynamic Growth

Networks that grow and evolve automatically based on performance metrics and data complexity.

net.Grow(checkpointLayer, data)
net.AddLayer(width, height, activation)

Distributed Training

Leverage multiple devices for parallel training with automatic load balancing and fault tolerance.

workers := []Device{phone, laptop}
net.TrainDistributed(workers)

Architecture Overview

Modular Design

Paragon's architecture is built around modular components that can be composed, extended, and distributed across different computing environments.

Core Components

  • Grid[T] - Type-generic neuron layers
  • Neuron[T] - Individual processing units
  • Connection[T] - Weighted links
  • Network[T] - Complete model container

Advanced Features

  • MicroNetwork - Extractable sub-models
  • ADHDPerformance - Evaluation system
  • GrowthLog - Evolution tracking
  • GPUCompute - Hardware acceleration

Network Layers

Input 28×28
Hidden 16×16
Hidden 32×32
Output 10×1

Quick Start Guide

package main

import "github.com/openfluke/paragon"

func main() {
    // Create a type-generic neural network
    nn := paragon.NewNetwork[float32](
        []struct{ Width, Height int }{
            {28, 28}, // Input layer
            {16, 16}, // Hidden layer
            {10, 1},  // Output layer
        },
        []string{"linear", "relu", "softmax"},
        []bool{true, true, true},
    )
    
    // Enable WebGPU acceleration
    nn.WebGPUNative = true
    
    // Train the network
    nn.Train(inputs, targets, 100, 0.01)
    
    // Evaluate performance
    nn.EvaluateModel(expected, actual)
    fmt.Printf("Final Score: %.2f\n", nn.ComputeFinalScore())
}
// Advanced: Micro-network surgery and dynamic growth
func advancedTraining(nn *paragon.Network[float32]) {
    // Extract micro-network for optimization
    micro := nn.ExtractMicroNetwork(1) // From layer 1
    
    // Try improvements with different architectures
    improved, success := micro.TryImprovement(
        checkpointData,
        8, 32,    // Width range
        8, 32,    // Height range
        []string{"relu", "tanh", "leaky_relu"},
    )
    
    if success {
        // Reattach improved micro-network
        improved.ReattachToOriginal(nn)
        fmt.Println("🚀 Network improved!")
    }
    
    // Enable dynamic growth
    nn.Grow(
        1,                    // Checkpoint layer
        testInputs,          // Test data
        expectedOutputs,     // Expected results
        25,                   // Number of candidates
        10,                   // Training epochs
        0.01,                 // Learning rate
        1e-6,                 // Tolerance
        1.0, -1.0,            // Clip bounds
        8, 32, 8, 32,      // Layer dimensions
        []string{"relu", "tanh"}, // Activations
        8,                    // Max threads
    )
}
// Distributed training across multiple devices
func distributedTraining() {
    // Setup network with distributed capabilities
    nn := paragon.NewNetwork[float32](
        layerSizes, activations, fullyConnected,
    )
    
    // Configure worker devices
    maxThreads := runtime.NumCPU() * 2 // Include remote devices
    
    // Batch training with distributed micro-network improvements
    for batch := 0; batch < numBatches; batch++ {
        batchInputs := trainInputs[start:end]
        batchTargets := trainTargets[start:end]
        
        // Distributed growth across multiple devices
        if nn.Grow(
            checkpointLayer,
            batchInputs,
            expectedLabels,
            50,           // More candidates with distributed processing
            5,            // Epochs per worker
            0.01,         // Learning rate
            1e-6,         // Tolerance
            1.0, -1.0,    // Clip bounds
            16, 64, 16, 64, // Layer size range
            []string{"relu", "tanh", "leaky_relu"},
            maxThreads,   // Distributed workers
        ) {
            fmt.Printf("✅ Batch %d improved network!\n", batch)
        }
    }
    
    // Print growth history
    nn.PrintGrowthHistory()
}

Performance Benchmarks

15x

Training Speed

vs Traditional Frameworks

60%

Memory Efficiency

Integer Networks

95%

GPU Utilization

WebGPU Acceleration

Scalability

Distributed Architecture

Use Cases & Applications

Edge AI Development

Deploy intelligent models on mobile devices, IoT sensors, and edge computing platforms. Paragon's integer support and memory efficiency make it perfect for resource-constrained environments.

Mobile AI IoT Real-time

Research & Prototyping

Rapid experimentation with novel architectures, ablation studies, and distributed training research. Type-generic design allows testing with different numeric precisions.

Research Prototyping Experimentation

Enterprise AI

Cost-effective AI model training for enterprises looking to reduce cloud computing costs. Distribute training across existing hardware infrastructure.

Cost Reduction Enterprise Scalable

Educational Platforms

Teach neural network concepts with clear, modular code. Students can experiment with different numeric types and see immediate results of architectural changes.

Education Learning Interactive

Getting Started

Quick Installation

# Create a new Go project
  mkdir my-paragon-project
  cd my-paragon-project
  
  # Initialize Go module
  go mod init my-paragon-project
  
  # Install Paragon v3 (current stable version)
  go get github.com/openfluke/paragon/[email protected]
  go get github.com/openfluke/webgpu@ad2e76f

Using Paragon v3.0.1 for Stability

We recommend using the stable v3.0.1 release for production projects. The latest development version is available on the main branch.

Complete Working Example

The best way to get started is with our comprehensive MNIST example that demonstrates network growth and distributed training:

growonmnisttest

Complete MNIST Training with Network Growth

What the Example Includes:

  • 🎯 MNIST Dataset Loading - Automatic download and preprocessing
  • 🧠 Network Architecture - Type-generic float32 networks
  • ⚡ WebGPU Acceleration - GPU-optimized training
  • 📊 ADHD Performance Evaluation - Advanced scoring metrics
  • 🌱 Dynamic Network Growth - Automatic layer addition
  • 🔬 Micro-Network Surgery - Distributed optimization
  • 📈 Real-time Monitoring - Growth history tracking
  • 🏆 Benchmarking - CPU vs GPU performance

Run the Example

# Clone the complete example
  git clone https://github.com/openfluke/growonmnisttest.git
  cd growonmnisttest
  
  # Install dependencies
  go mod tidy
  
  # Run the full example
  go run .
  
  # Expected output:
  # 🚀 Preparing MNIST Dataset...
  # 📦 Loading MNIST data into memory...
  # 🧠 Training float32 model...
  # ⚡ Benchmarking inference...
  # ✅ GPU: 12.5ms vs CPU: 156.3ms
  # 🚀 Speedup: 12.5x

Simple Quick Start

For a minimal example, create a main.go file:

package main
  
  import (
      "fmt"
      "github.com/openfluke/paragon/v3"
  )
  
  func main() {
      // Create a simple neural network
      nn := paragon.NewNetwork[float32](
          []struct{ Width, Height int }{
              {28, 28}, // Input layer
              {16, 16}, // Hidden layer
              {10, 1},  // Output layer
          },
          []string{"linear", "relu", "softmax"},
          []bool{true, true, true},
      )
      
      // Enable WebGPU acceleration
      nn.WebGPUNative = true
      
      fmt.Println("🧠 Paragon Neural Network Ready!")
      fmt.Printf("📊 Network: %d layers\n", len(nn.Layers))
  }

System Requirements

💻 Minimum Requirements
  • Go 1.19+ - Latest Go runtime
  • 4GB RAM - For basic models
  • 500MB Storage - Framework + dependencies
  • CPU - Any modern x64/ARM processor
🚀 Recommended for GPU
  • WebGPU Support - Modern GPU drivers
  • 8GB+ RAM - For larger models
  • NVIDIA/AMD/Intel GPU - DirectX 12 / Vulkan
  • Windows 10+, Linux, macOS

Next Steps

The growonmnisttest example is the perfect starting point - it demonstrates all of Paragon's key features including distributed training, network growth, and performance optimization in a real-world MNIST classification scenario.