Founding Document
v1.0.0 // Active Research
Est. 2025
Technical Manifesto

Computational
Biology
on Silicon

Solving the 20-Watt AGI paradox via Liquid Networks, Active Inference, and neuromorphic efficiency.

Research Protocol
Embodied AGI
Active Development

Large Language Models have reached their asymptote. They rest on a false hypothesis: that intelligence is a static compression of past data. This paradigm is energetically unsustainable and structurally incapable of acting in the dynamic physical world. 20n proposes a fundamental paradigm shift: moving from discrete, statistical intelligence to continuous, biological intelligence, with energy efficiency 1000x better than the state of the art.

01

They Built Monuments.
We Build Organisms.

The current consensus—OpenAI, Google, Meta—is based on the Scaling Law: More data + More GPUs = More intelligence. This is a dead end.

The Energy Wall

Scaling Transformers on the current trajectory will demand more energy by 2030 than global electricity production can supply. It is a physical impossibility. The giants are melting the world's electrical grid to run chatbots and advertising algorithms. Global AI power consumption: ~450 MW and rising.

The Physics Wall

LLMs have no notion of causality or Newtonian physics. They hallucinate reality—predicting text patterns, not world dynamics. 20n integrates ordinary differential equations (ODEs) as computational primitives. Our models don't predict physics—they are physics.

The Latency Wall

A robot cannot wait 500ms for the cloud to process an image before deciding not to fall. Intelligence must be at the edge, not in a server farm thousands of miles away. Real-time embodied intelligence requires local computation with sub-5ms response.

The Data Wall

They've read the entire internet. There's nothing left. Quality training data is exhausted. The well is dry. 20n learns from physics, not from scraping the web.

20W
Human Brain
~450MW
Global AI
1000x
Efficiency Gap
02

Formal Foundations

We don't have H100 GPUs. We have superior mathematics. The following equations define 20n's computational substrate.

dx/dt = -x/τ(x, t) + f(x) · σ(Wx + b)
Neural ODE with time-varying dynamics. Unlike static feed-forward networks, the time constant τ is itself a function of state. The network continuously adapts its temporal behavior during inference—no retraining required. Proven to achieve state-of-the-art with 19 neurons vs 100,000+ for traditional approaches.
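For concreteness, here is a minimal numerical sketch of these liquid dynamics: a forward-Euler integration of dx/dt = -x/τ(x, t) + f(x) · σ(Wx + b) in NumPy. The 19-neuron size mirrors the claim above, but the weights, the parameterization of τ, and all constants are illustrative placeholders, not a trained 20n model.

```python
# Minimal sketch of a "liquid" layer with a state-dependent time constant:
#   dx/dt = -x / tau(x, t) + f(x) * sigma(W @ x + b)
# integrated with forward Euler. All parameters are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 19                                      # liquid neurons
W = rng.normal(scale=0.5, size=(n, n))      # recurrent weights
b = rng.normal(scale=0.1, size=n)
W_tau = rng.normal(scale=0.2, size=(n, n))  # parameterizes the liquid time constant

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def tau(x, t):
    # The time constant is itself a function of state (and time), kept positive.
    return 0.1 + 2.0 * sigma(W_tau @ x + 0.1 * np.sin(t))

def f(x):
    # Bounded nonlinearity gating the synaptic drive.
    return np.tanh(x)

def step(x, t, dt=1e-3):
    dxdt = -x / tau(x, t) + f(x) * sigma(W @ x + b)
    return x + dt * dxdt

x = rng.normal(scale=0.5, size=n)           # non-zero initial state
for k in range(5000):                       # 5 s of simulated dynamics
    x = step(x, t=k * 1e-3)
print(x[:5])
```

The point of the sketch is structural: because τ depends on x, the layer's effective temporal behavior shifts during inference without any weight update.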
F = D_KL[q(θ) || p(θ|y)] - log p(y)
Active Inference via Friston's Free Energy Principle. The agent minimizes free energy, an upper bound on surprise (-log p(y)), rather than maximizing reward. q(θ) is the approximate posterior, p(θ|y) the true posterior given observations y. Action selection emerges from gradient descent on F—the same algorithm for perception and action.
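A toy illustration of perception as gradient descent on F, in the spirit of standard predictive-coding tutorials: a one-dimensional Gaussian generative model with a nonlinear prediction g(v) = v². The prior, variances, observation, and learning rate are all invented for the sketch and do not come from 20n's system.

```python
# Perception as gradient descent on (Laplace-approximated) free energy for a
# 1-D Gaussian generative model:
#   prior:      v ~ N(v_p, sigma_p^2)
#   likelihood: y ~ N(g(v), sigma_y^2),  g(v) = v^2
# The belief mu descends F(mu) = (y - g(mu))^2 / (2 sigma_y^2)
#                              + (mu - v_p)^2 / (2 sigma_p^2).
import numpy as np

v_p, sigma_p = 3.0, 1.0        # prior mean and std of the hidden cause
sigma_y = 0.5                  # sensory noise std
g = lambda v: v ** 2           # generative (prediction) mapping
dg = lambda v: 2 * v           # its derivative

y = 10.0                       # observed sensory sample
mu = v_p                       # belief starts at the prior mean

for _ in range(200):
    eps_y = (y - g(mu)) / sigma_y ** 2     # precision-weighted sensory error
    eps_p = (mu - v_p) / sigma_p ** 2      # precision-weighted prior error
    dF_dmu = -eps_y * dg(mu) + eps_p       # gradient of free energy w.r.t. mu
    mu -= 0.01 * dF_dmu                    # descend F: perception

print(mu)   # settles between the prior (3.0) and the value explaining y (~3.16)
```

Applying the same gradient to motor variables instead of beliefs yields action; that symmetry is what "the same algorithm for perception and action" refers to.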
V(t) = V_rest + Σ w_i · ε(t - t_i) · H(t - t_i)
Event-driven computation. V(t) membrane potential, ε(t) post-synaptic kernel, H(t) Heaviside step function. Information transmits only via discrete spikes—energy consumed only when neurons fire. Result: 20W vs 20,000W for equivalent GPU computation.
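The membrane equation above can be implemented almost verbatim. In this sketch the post-synaptic kernel ε is an exponential decay, the spike times and weights are made-up placeholders, and there is no reset or refractory mechanism; it only illustrates that the potential is a sum over past events rather than a continuously clocked computation.

```python
# Event-driven membrane potential, implementing
#   V(t) = V_rest + sum_i w_i * eps(t - t_i) * H(t - t_i)
# with an exponential post-synaptic kernel. No reset/refractory period here.
import numpy as np

V_rest = -70.0                                    # mV
tau_syn = 5.0                                     # ms, kernel decay constant
eps = lambda s: np.exp(-s / tau_syn)              # post-synaptic kernel, s >= 0

# (spike time in ms, synaptic weight in mV) for each incoming event
events = [(2.0, 6.0), (3.5, 8.0), (4.0, -2.5), (9.0, 5.0)]

def V(t):
    # Heaviside H(t - t_i): only spikes that have already arrived contribute.
    return V_rest + sum(w * eps(t - t_i) for t_i, w in events if t >= t_i)

threshold = -60.0                                 # mV firing threshold
for t in np.arange(0.0, 15.0, 0.5):
    mark = "SPIKE" if V(t) >= threshold else ""
    print(f"t = {t:4.1f} ms   V = {V(t):6.2f} mV   {mark}")
```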
Continuous Normalizing Flow
∂p(z,t)/∂t = -∇·(p(z,t)f(z,t))
Adjoint Sensitivity (numerical check sketched below)
dL/dθ = -∫ a(t)ᵀ(∂f/∂θ) dt
Expected Free Energy
G(π) = E_q[log q(s) - log p(s,o|π)]
Prediction Error
ε = y - g(μ) = sensory - predicted
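Of the identities above, the adjoint sensitivity result is the easiest to verify numerically. Below is a self-contained check on the scalar ODE dx/dt = θx with loss L = x(T)²; the specific θ, x0, and horizon are arbitrary test values, and the analytic reference gradient is 2T·x0²·e^(2θT).

```python
# Numerical check of  dL/dtheta = -∫ a(t)^T (∂f/∂theta) dt
# for dx/dt = f(x, theta) = theta * x and L = x(T)^2.
# The adjoint a(t) = dL/dx(t) obeys da/dt = -a(t) * ∂f/∂x = -a(t) * theta.
import numpy as np

theta, x0, T, n = -0.7, 1.5, 2.0, 20000
dt = T / n

# Forward pass: integrate the state and keep the trajectory.
xs = np.empty(n + 1)
xs[0] = x0
for k in range(n):
    xs[k + 1] = xs[k] + dt * theta * xs[k]

# Backward pass: integrate the adjoint from t = T to t = 0 and accumulate
# the gradient; with ∂f/∂theta = x, the integral becomes ∫_0^T a(t) x(t) dt.
a = 2.0 * xs[-1]                      # a(T) = dL/dx(T)
dL_dtheta = 0.0
for k in range(n, 0, -1):
    dL_dtheta += dt * a * xs[k]
    a += dt * a * theta               # reverse-time Euler step of da/dt = -a*theta

analytic = 2 * T * x0**2 * np.exp(2 * theta * T)
print(dL_dtheta, analytic)            # the two values closely agree
```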
03

The 20n Trinity

We base our architecture on three pillars that fundamentally diverge from the Transformer paradigm. Each addresses a core limitation of current approaches.

01

Liquid Neural Networks

Instead of static neuron layers, we use differential equations. The network is "liquid": its parameters change during inference, not just during training. It adapts to new situations (rain, motor failure, unexpected obstacles) without retraining. The architecture flows like water, reshaping itself in real-time to meet environmental demands.

02

Active Inference (Free Energy Principle)

The AI doesn't seek to maximize a "reward" (as others do with Reinforcement Learning, which is slow and unstable). 20n's AI seeks to minimize Free Energy (Surprise). It has an internal model of the world, and it acts to make the world match its model. This is how the biological brain functions—Karl Friston's Free Energy Principle.

03

Spiking Neural Networks (Neuromorphic)

Information is only transmitted when necessary (via "spikes", like electrical impulses in the brain), not continuously. Result: 20 Watts consumption (human brain) vs 20,000 Watts (GPU cluster). This is efficiency through biological mimicry—computing only when and where it matters.

We don't build an AI that reads.
We build an AI that lives.

04

Active Inference Control Loop

The core architecture operates as a closed feedback loop. Unlike feedforward networks that process information in one direction, 20n's system continuously cycles between prediction and sensation, constantly refining its internal world model to minimize surprise and maximize adaptability.

[Diagram: the generative model p(y, θ) and the environment (physical world) form a loop, coupled by action commands a(t) flowing out and sensory signals y(t) flowing back; predictions are matched against sensations under F.]
Fig. 1 — Closed-loop perception-action cycle. F = Free Energy minimization.

The system maintains a generative model of the world and continuously generates predictions about incoming sensory data. When predictions don't match reality (surprise), the system either updates its model (perception) or acts on the world to make reality match predictions (action). This bidirectional flow creates an agent that actively seeks to reduce uncertainty.
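A schematic version of Fig. 1 for a one-dimensional agent (all quantities invented for the sketch): the same precision-weighted prediction error drives both the belief update (perception) and the command sent back to the world (action).

```python
# Schematic perception-action loop: free energy is reduced either by updating
# the belief mu (perception) or by acting on the hidden state x (action).
import numpy as np

rng = np.random.default_rng(1)

x = 15.0                   # true hidden state of the environment
mu = 20.0                  # agent's belief about that state
prior = 22.0               # the state the generative model "expects"
sigma_y, sigma_p = 1.0, 4.0

for _ in range(200):
    y = x + rng.normal(scale=0.1)         # SENSORY y(t): noisy observation
    eps_y = (y - mu) / sigma_y**2         # sensory prediction error
    eps_p = (mu - prior) / sigma_p**2     # prior prediction error

    mu += 0.1 * (eps_y - eps_p)           # PERCEPTION: belief descends F
    a = -0.1 * (y - prior)                # ACTION a(t): push the world toward
    x += a                                #   the model's prediction

print(round(x, 2), round(mu, 2))          # both end up near the prior (22.0)
```

Perception changes the model to fit the world; action changes the world to fit the model; both are descents on the same quantity.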

05

Research Roadmap

From mathematical foundations to embodied synthetic life. Each phase builds on the previous, compounding the proof of concept.

Phase 01
The Silica Worm

Simulation only. Create a simple digital organism capable of learning to move in a complex environment (wind, moving obstacles) with zero pre-training, only through curiosity and free energy minimization. Proof of architectural efficiency.

Platform: MuJoCo | Parameters: < 10k | Training: Zero-shot
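As a sense of scale for Phase 01, here is a minimal MuJoCo simulation scaffold using the official mujoco Python bindings: a single-degree-of-freedom "worm" that can slide along one axis. The MJCF model and the hand-written reflex controller are placeholders for illustration only; the actual Phase 01 agent would replace the reflex with a free-energy-minimizing policy.

```python
# Minimal MuJoCo scaffold for a Phase-01-style experiment: a 1-DOF sliding
# body, a sensory sample, a prediction error, and a command. The controller
# here is a hand-written reflex, standing in for the free-energy agent.
import mujoco

XML = """
<mujoco>
  <option timestep="0.002"/>
  <worldbody>
    <geom type="plane" size="5 5 0.1"/>
    <body name="worm" pos="0 0 0.1">
      <joint name="slide_x" type="slide" axis="1 0 0"/>
      <geom type="capsule" size="0.05" fromto="0 0 0 0.3 0 0"/>
    </body>
  </worldbody>
  <actuator>
    <motor name="thrust" joint="slide_x" gear="5"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

target = 1.0                                   # "preferred" position, i.e. the prior
for _ in range(5000):                          # 10 s of simulated time
    y = data.qpos[0]                           # sensory sample: body position
    eps = target - y                           # prediction error w.r.t. the prior
    data.ctrl[0] = 2.0 * eps - 1.0 * data.qvel[0]   # reflex: act to cancel the error
    mujoco.mj_step(model, data)

print(data.qpos[0])                            # ends near the preferred position
```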
Phase 02
The Ghost in the Shell

Sim2Real transfer (simulation to reality): integration of the 20n brain into physical robotic hardware. The robot learns to grasp an unknown object in 10 seconds from a single demonstration. Traditional approaches require hours of training.

Hardware: Unitree Go2 | Latency: < 5ms | Transfer: Sim2Real
Phase 03
Apex Humanoid

Construction of proprietary embodied intelligence. The first synthetic life form capable of continuous learning in unstructured environments, operating entirely at the edge. No cloud. No latency. Pure autonomy.

Power: 20W | Learning: Continuous | Environment: Unstructured
100x
Less Data
1000x
Less Energy
<10k
Parameters
06

Join the Resistance

Most AI researchers today have become data janitors. They clean datasets to feed voracious monsters. If you want to continue tuning hyperparameters on LLMs, stay at Google.

But if you want to solve intelligence, understand consciousness, and create a new form of synthetic life capable of looking us in the eyes and understanding us...

Join 20n.

We don't have H100 GPUs. We have superior mathematics.

We Are Looking For:

PhD dropouts who refuse to optimize ad algorithms. The ones who left because the questions being asked weren't interesting enough.

Hackers who understand that computation is physics. Who see the elegance in efficiency, not just raw power.

Mathematicians who see beauty in differential equations. Who want their work to mean something beyond publications.

Engineers who want to build the impossible. Who are tired of building the same thing with different colors.

This is not a job. This is a mission. The compensation is equity and the chance to rewrite the laws of artificial intelligence from first principles.

Silicon Valley wastes the world's energy creating advertising chatbots.
We create synthetic life.

Computational Biology on Silicon.
Solving the 20-Watt AGI paradox.

core@20n.ai

Ad Astra Per Aspera