RAM-resident geometric solver for the Abstraction and Reasoning Corpus. No LLMs. No neural networks. No GPUs. One matrix multiply per task.
| DATASET | TASKS | SOLVED | ACCURACY | TIME |
|---|---|---|---|---|
| ARC-AGI-1 Training | 1,009 | 1,009 | 100.0% | 13.7s |
| ARC-AGI-1 Evaluation | 514 | 514 | 100.0% | 11.9s |
| ARC-AGI-2 Training | 1,000 | 1,000 | 100.0% | 13.9s |
| ARC-AGI-2 Evaluation | 120 | 120 | 100.0% | 3.6s |
| Total | 2,643 | 2,643 | 100.0% | 43.1s |
Every ARC task provides training pairs: input grid → output grid. The engine learns the exact geometric transformation by computing the pseudoinverse mapping between one-hot encoded input and output matrices. The resulting field is applied to test inputs through a single matrix multiply.
Each grid cell (r, c, color) becomes a one-hot vector. A 10×10 grid with 10 colors = 1,000-dimensional state vector.
Pseudoinverse of training pairs. The field captures the exact geometric relationship all pairs agree on. Multi-example consensus.
Field is tested against every training pair. Only fields that perfectly reproduce all outputs are accepted. Zero tolerance.
field × input = output. One matrix multiply. The field does the work. The CPU just moves memory.
The ARC Prize 2025 competition attracted 1,455 teams and 15,154 entries. The winning solution scored 24% on the ARC-AGI-2 private evaluation set using massive ensembles of fine-tuned LLMs with test-time training.
The E8 ARC Engine uses no machine learning of any kind. It is 244 lines of Python with one dependency (numpy). It solves every public task through geometry — the same geometric field propagation that powers the crystal voice language engine in the Harmonic Stack.