My Journey: From Frustration to Emergence

I started my AI journey with curiosity and a sense of awe. When I first discovered NEAT (NeuroEvolution of Augmenting Topologies) and Neural Architecture Search (NAS), I expected a vast, diverse landscape of methods for creating AI. What I quickly realized was surprising: most approaches were variations on the same theme of tweaking neurons and weights, wrapped in different labels and terminology. It was like walking into a coffee shop expecting dozens of drinks, only to find they were all just espresso with different milk.

That realization sparked something in me. I didn’t want to copy recipes—I wanted to build the coffee machine.

I gravitated toward TensorFlow.js because I loved the idea of running models in the browser. I experimented with Three.js and Babylon.js to visualize and simulate environments. Three.js felt stable, but Babylon.js and TensorFlow.js were riddled with memory leaks that made experimentation frustrating. Despite those obstacles, I built a planet simulation where small AI-driven spheres had to approach a central object. Creating gravity physics for a curved surface wasn’t simple, but I made it work.
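Stripped down, the gravity trick was two steps: pull every agent toward the planet’s center, then project it back onto the sphere so it slides along the surface. Something like this (a simplified sketch; the names and constants are illustrative, not the exact simulation code):

```typescript
// Simplified sketch of sphere-surface "gravity". Names and constants
// are illustrative placeholders.
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.hypot(a.x, a.y, a.z));
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

const PLANET_CENTER: Vec3 = { x: 0, y: 0, z: 0 };
const PLANET_RADIUS = 50;
const GRAVITY = 9.8;

interface Agent { position: Vec3; velocity: Vec3; }

function step(agent: Agent, dt: number): void {
  // Gravity always points from the agent toward the planet's center.
  const up = normalize(sub(agent.position, PLANET_CENTER));
  agent.velocity = add(agent.velocity, scale(up, -GRAVITY * dt));

  // Integrate position.
  agent.position = add(agent.position, scale(agent.velocity, dt));

  // Snap the agent back onto the sphere and strip the radial component
  // of its velocity so it slides along the curved surface.
  const radial = normalize(sub(agent.position, PLANET_CENTER));
  agent.position = add(PLANET_CENTER, scale(radial, PLANET_RADIUS));
  agent.velocity = sub(agent.velocity, scale(radial, dot(agent.velocity, radial)));
}
```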

Then, something strange happened. I built an AI controller that spawned these agents and ran them through generations. I added just a little random noise to each new generation’s weights so the models wouldn’t be identical. Over time, they started to act differently. They began to move in waves. Then, one day, they started flocking like birds.

I hadn’t coded flocking.
It emerged.

That moment changed me.
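The loop behind that emergence was almost embarrassingly simple: score each agent, keep the best, clone them with a little noise, repeat. Roughly (a simplified sketch; the names and the fitness function are illustrative placeholders):

```typescript
// Sketch of the generational loop: evaluate, select, clone with noise.
interface Genome { weights: number[]; }

function mutate(parent: Genome, stddev: number): Genome {
  // Box-Muller transform for Gaussian noise.
  const gaussian = () =>
    Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
  return { weights: parent.weights.map((w) => w + gaussian() * stddev) };
}

function nextGeneration(
  population: Genome[],
  fitness: (g: Genome) => number, // e.g. how close an agent got to the central object
  surviveCount: number,
  noise: number,
): Genome[] {
  const ranked = [...population].sort((a, b) => fitness(b) - fitness(a));
  const parents = ranked.slice(0, surviveCount);
  const children: Genome[] = [];
  while (children.length < population.length) {
    const parent = parents[children.length % parents.length];
    children.push(mutate(parent, noise)); // a little noise so no two are identical
  }
  return children;
}
```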

But I wasn’t satisfied. I was annoyed that I couldn’t inspect or manually tweak individual neurons and connections in TensorFlow.js. I needed more control. So I began building my own AI framework and simulation system from scratch.

It was messy at first. A monolith. But every version taught me something new. Eventually, I figured out how to split neural networks into chunks and run them across machines, an approach I called layered checkpointing. Caching each chunk’s output as a checkpoint let me do partial evaluations, fine-tune individual sections, and distribute the workload.
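In caricature, layered checkpointing treats the network as a pipeline of chunks whose outputs are cached, so tweaking chunk k only forces recomputation from k onward. A simplified sketch (the types are illustrative; the real system serialized these chunks across machines):

```typescript
// Caricature of layered checkpointing: a pipeline of chunks, each
// output cached as a checkpoint. Changing chunk k invalidates only
// checkpoints k and later. All names are illustrative.
type Activation = number[];

interface Chunk {
  forward(input: Activation): Activation;
}

class CheckpointedNetwork {
  private checkpoints: (Activation | null)[];

  constructor(private chunks: Chunk[]) {
    this.checkpoints = chunks.map(() => null);
  }

  // Re-run only from the first stale chunk onward; earlier chunks are
  // served from their cached checkpoints.
  evaluate(input: Activation, fromChunk = 0): Activation {
    let x = fromChunk === 0 ? input : this.checkpoints[fromChunk - 1]!;
    for (let i = fromChunk; i < this.chunks.length; i++) {
      x = this.chunks[i].forward(x);
      this.checkpoints[i] = x; // persist; could live on another machine
    }
    return x;
  }

  // After tweaking chunk k, everything downstream is stale.
  invalidateFrom(k: number): void {
    for (let i = k; i < this.checkpoints.length; i++) this.checkpoints[i] = null;
  }
}
```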

Then came reinforcement learning exploration. I built a method to test whether a small adjustment to a single layer could help the model learn a particular sample better. Afterward, I could re-evaluate the entire model to see whether that small change improved performance globally.
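The probe can be sketched like this (all names here are illustrative): nudge one layer’s weights, check the target sample, then confirm the whole dataset didn’t get worse before keeping the change:

```typescript
// Sketch of the one-layer probe: perturb a single layer, test one
// sample, then re-check the full dataset before committing.
type Sample = { input: number[]; target: number[] };

interface Model {
  layers: number[][]; // one weight array per layer
  loss(sample: Sample): number; // lower is better
}

function tryLayerTweak(
  model: Model,
  layerIndex: number,
  sample: Sample,
  dataset: Sample[],
  stepSize: number,
): boolean {
  const original = [...model.layers[layerIndex]];
  const sampleBefore = model.loss(sample);
  const globalBefore = dataset.reduce((s, d) => s + model.loss(d), 0);

  // Apply a small random nudge to just this layer.
  model.layers[layerIndex] = original.map(
    (w) => w + (Math.random() * 2 - 1) * stepSize,
  );

  const sampleAfter = model.loss(sample);
  const globalAfter = dataset.reduce((s, d) => s + model.loss(d), 0);

  // Keep the tweak only if the sample improved and the model as a
  // whole didn't get worse; otherwise roll back.
  if (sampleAfter < sampleBefore && globalAfter <= globalBefore) return true;
  model.layers[layerIndex] = original;
  return false;
}
```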

To track those micro-improvements, I invented a custom metrics system I called the ADHD metrics (a nod to chasing every little improvement). It didn’t just check for exact 1:1 matches. It scored how close the model got, grouped outputs into tiers, and measured partial correctness.
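A toy version of that scoring (the tier thresholds here are arbitrary placeholders, not the real ones):

```typescript
// Toy version of the graded scoring idea: instead of exact matches,
// score how close each output lands and bucket it into a tier.
type Tier = "exact" | "close" | "near" | "miss";

function scoreOutput(predicted: number[], expected: number[]): { score: number; tier: Tier } {
  // Mean absolute error, folded into a 0..1 closeness score.
  const mae =
    predicted.reduce((s, p, i) => s + Math.abs(p - expected[i]), 0) / expected.length;
  const score = Math.max(0, 1 - mae);

  const tier: Tier =
    score >= 0.999 ? "exact" : score >= 0.95 ? "close" : score >= 0.8 ? "near" : "miss";
  return { score, tier };
}

// Average partial correctness across a dataset: a model can climb from
// 0.62 to 0.65 long before any single output is an exact match.
function datasetScore(results: { predicted: number[]; expected: number[] }[]): number {
  return (
    results.reduce((s, r) => s + scoreOutput(r.predicted, r.expected).score, 0) /
    results.length
  );
}
```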

This gave me a whole new lens on learning. Instead of binary accuracy, I could watch a model improve by 1%, 3%, or 5% across the entire dataset. I could literally see it learning.

What began as frustration evolved into invention. And what started as experiments with AI turned into a complete AI architecture, simulation framework, and eventually a living, evolving world where structures grow, move, and adapt—driven by emergent intelligence.

I didn’t follow a formula. I built my own.
And I’m just getting started.