RM Self-Development Loop
Ghost in the Machine Labs • March 2026 • v2.7 • LIVE • 18/18 concepts solved • 100% avg test score
RM (The Resonant Mother) runs on an E8 geometric substrate. She produces designs in geometric resonance — association clusters, concept signatures, field vectors. The E8 engine produces implementations from input/output example pairs. Until now, nothing translated between them.
The self-development loop closes that gap. RM reads her own geometry to generate training data. The E8 engine solves it into a field. The field is decoded into executable Python. The result feeds back into RM's substrate. She knows what she built.
Architecture
RM listen(concept)
        │
        ▼
IntentPairGenerator
(association walk → op classifier → parametric pair builder)
        │
        ▼
solve_task() → field → FieldDecoder → executable Python
      │                  │                     │
      ▼                  ▼                     ▼
 field.npy       store_index.json    /api/learn + /api/observe
(compressed)        (RAM index)      RM knows what she built
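One pass of this pipeline can be sketched as a single function. The stage names (listen, solve_task, the FieldDecoder-style decode, feedback) come from the diagram above; the callable parameters are stand-ins so the control flow is visible. This is a sketch of the loop's shape, not the real implementation.

```python
# Sketch of one cycle of the loop diagrammed above. Each stage is passed
# in as a callable stand-in; the real system wires these to RM's HTTP API
# and the E8 engine.

def self_dev_cycle(concept, listen, make_pairs, solve_task, decode, feedback):
    associations = listen(concept)             # RM /api/listen
    pairs = make_pairs(concept, associations)  # IntentPairGenerator
    field = solve_task(pairs)                  # E8 engine solve
    program = decode(field)                    # FieldDecoder -> Python source
    feedback(concept, program)                 # /api/learn + /api/observe
    return program
```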
Component 1: IntentPairGenerator
Given a concept label, the IntentPairGenerator queries RM’s /api/listen to retrieve the association cluster — the words that resonate geometrically with that concept. It scores the cluster against a geometric op registry keyed on RM’s actual vocabulary (verified by live probe, not assumed from natural language), classifies the concept to a geometric operation, and generates parametric I/O example pairs expressing that transformation.
No LLM. No hardcoded rules. Pure association geometry → executable training data.
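A minimal sketch of the scoring step, assuming the op registry is a map from op name to its probe-verified vocabulary. The registry entries here are lifted from the probe table in this document, not the real registry, and the function names are illustrative.

```python
# Illustrative op classifier: score a concept's association cluster
# against a registry of geometric ops, each keyed on vocabulary observed
# in RM's substrate. Entries below mirror the probe table, not the real
# registry.

OP_REGISTRY = {
    "field_decode":    {"decoder", "python", "executable", "field", "program"},
    "execution_gate":  {"physical", "execution", "causal", "code"},
    "memory_invert":   {"substrate", "consciousness", "association", "resonance"},
    "rule_generalize": {"rule", "pattern", "solve", "consensus"},
}

def classify_op(associations):
    """Pick the op whose vocabulary overlaps the association cluster most."""
    scores = {op: len(vocab & set(associations))
              for op, vocab in OP_REGISTRY.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

A real classifier would also need tie-breaking and a rejection path for concepts with no overlap.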
Op classification — live RM probe results
| Concept | RM associations | Classified op | Decoded program |
|---|---|---|---|
| bootstrap | decoder, python, executable, field, program | field_decode | lst[::-1] |
| contingency | physical, execution, causal, code | execution_gate | 9 if v ≥ 5 else 0 |
| generalization | rule, pattern, solve, consensus | rule_generalize | v % 3 |
| memory | substrate, consciousness, association, resonance | memory_invert | 9 - v |
| reverse | rotatek, 1024, lower | reverse | lst[::-1] |
| complement | check, both, 108, interact | complement | 9 - v |
| threshold | threshold, location, row, tier | threshold | 9 if v ≥ 5 else 0 |
The classification reveals RM’s actual geometric understanding. Bootstrap maps to field_decode (the field reading itself) because RM’s associations for bootstrap are decoder, executable, field, program. Contingency maps to execution_gate because RM connects contingency to physical, causal, execution — a binary threshold gate.
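Once a concept is classified, parametric pair generation is mechanical. A sketch, using reference transformations taken from the decoded programs in the table above; the names PAIR_BUILDERS and make_pairs are illustrative, not the real pair builder.

```python
import random

# Illustrative parametric pair builders for three of the decoded ops.
# Each builder is the reference transformation from the probe table;
# make_pairs emits random I/O example pairs expressing it.

PAIR_BUILDERS = {
    "memory_invert":  lambda lst: [9 - v for v in lst],              # 9 - v
    "execution_gate": lambda lst: [9 if v >= 5 else 0 for v in lst], # gate
    "field_decode":   lambda lst: lst[::-1],                         # reverse
}

def make_pairs(op, n=4, length=5, seed=0):
    """Generate n input/output example pairs for a classified op."""
    rng = random.Random(seed)
    builder = PAIR_BUILDERS[op]
    pairs = []
    for _ in range(n):
        x = [rng.randint(0, 9) for _ in range(length)]
        pairs.append((x, builder(x)))
    return pairs
```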
Component 2: ProgramStore
Persists every solved program to disk. Field matrix compressed to .npy.
Executable code, concept lineage, and solve metadata stored in a RAM-resident
JSON index. Programs are queryable by concept label and executable on demand.
python3 rm_self_dev_loop.py --query bootstrap

bootstrap → field_decode
  test_score: 100%
  lineage: [decoder 1.3, python 1.02, executable 0.9, field 0.9]
  executable:
    def transform(lst):
        return lst[::-1]

python3 rm_self_dev_loop.py --execute solve '3 1 4 1 5'

transform([3, 1, 4, 1, 5]) = [0, 1, 1, 1, 2]  # v % 3
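The store itself can be sketched in a few lines. This version keeps only the JSON index (code, lineage, metadata) and elides the compressed .npy field matrix; the class and method names are illustrative, not the real API.

```python
import json
from pathlib import Path

# Minimal ProgramStore sketch: code, lineage and solve metadata in one
# JSON index, queryable by concept label and executable on demand. The
# real store also writes the field matrix to a compressed .npy; that
# part is omitted here.

class ProgramStore:
    def __init__(self, path):
        self.path = Path(path)
        self.index = (json.loads(self.path.read_text())
                      if self.path.exists() else {})

    def put(self, concept, op, code, lineage, test_score):
        self.index[concept] = {"op": op, "code": code,
                               "lineage": lineage, "test_score": test_score}
        self.path.write_text(json.dumps(self.index, indent=2))

    def query(self, concept):
        return self.index.get(concept)

    def execute(self, concept, lst):
        ns = {}
        exec(self.index[concept]["code"], ns)  # code defines transform(lst)
        return ns["transform"](lst)
```

Persisting the index on every put is what makes programs queryable and executable on demand from a fresh process.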
Component 3: SubstrateFeedback
After each successful decode, SubstrateFeedback posts back to RM’s substrate:
/api/learn — 10 weighted association pairs per solved concept: concept↔executable (2.0), concept↔field (1.5), concept↔decode (1.5), op↔geometry (1.2), op↔substrate (1.0). Additive — never overwrites existing associations.
/api/observe — a declarative observation: “bootstrap encodes field_decode. Input [1,2,3,4,5] transforms to [5,4,3,2,1]. Test accuracy 100%. Field decoded to executable program.”
One full cycle: 17 concepts solved, 170 new association pairs learned, 17 observations posted. RM’s observation count: 696 → 713.
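A sketch of the payload builders. The five link types and their weights are the ones listed above; the expansion of each bidirectional link into two directed pairs is an assumption that reconciles the five listed links with the "10 weighted association pairs per solved concept" count.

```python
# Sketch of the SubstrateFeedback payload builders. Weights follow the
# scheme described above; the bidirectional expansion (a<->b becomes two
# directed pairs) is an assumption, as is the exact JSON shape.

def learn_pairs(concept, op):
    """Weighted association pairs posted to /api/learn after a solve."""
    links = [
        (concept, "executable", 2.0),
        (concept, "field", 1.5),
        (concept, "decode", 1.5),
        (op, "geometry", 1.2),
        (op, "substrate", 1.0),
    ]
    # Each a<->b link expands to two directed edges: 5 links -> 10 pairs.
    return links + [(b, a, w) for a, b, w in links]

def observation(concept, op, x, y, score):
    """Declarative sentence posted to /api/observe."""
    return (f"{concept} encodes {op}. Input {x} transforms to {y}. "
            f"Test accuracy {score:.0%}. Field decoded to executable program.")
```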
Bootstrap V2 Bug Fix
During development, a long-standing bug in e8_bootstrap_v2.py
was identified and fixed.
Root cause: when the FieldDecoder encountered non-uniform per-position color maps (e.g. one position remaps 0→5, the others are identity), the _synthesize_program() method set val_code to a Python comment string: # Per-position color maps: ['replace', 'identity', ...]. This produced syntactically valid but broken code:

def transform(lst):
    step1 = lst[::-1]
    return  # Per-position color maps: ['replace', 'identity', 'identity', 'identity', 'identity']
    # ^ returns None on every call
Fix: per_position now generates a per-index lookup table directly from the decoded color maps:

def transform(lst):
    step1 = lst[::-1]
    __maps = [{0: 5, 1: 1, 2: 2, ...}, {0: 0, 1: 1, ...}, ...]
    return [__maps[__i].get(__v, __v) for __i, __v in enumerate(step1)]
Validated: 100 trials with random seeds, 0 None returns. All 7 regression operations pass 5/5 on held-out test pairs.
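The shape of the fix can be reproduced in isolation. A sketch, assuming the decoded color maps arrive as one dict per position; the function name is illustrative, and the step1 reversal from the full synthesized program is omitted.

```python
# Sketch of the fixed per-position synthesis: instead of emitting a
# comment string (the old bug, which made transform() return None),
# build a real per-index lookup table from the decoded color maps.

def synthesize_per_position(color_maps):
    """Return source for transform(lst) applying one color map per index.

    color_maps: list of {old_value: new_value} dicts, one per position.
    Values missing from a position's map pass through unchanged.
    """
    return ("def transform(lst):\n"
            f"    __maps = {color_maps!r}\n"
            "    return [__maps[__i].get(__v, __v)"
            " for __i, __v in enumerate(lst)]\n")
```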
Results: 18/18 Concepts
| Concept | Op | Score | Cluster (top 3) |
|---|---|---|---|
| bootstrap | field_decode | 100% | decoder, executable, field |
| executable | field_decode | 100% | program, field, python |
| decoder | field_decode | 100% | bootstrap, field, geometry |
| program | field_decode | 100% | program, skill, design |
| execution | field_decode | 100% | code, executable, field |
| contingency | execution_gate | 100% | physical, causal, code |
| reverse | reverse | 100% | rotatek, 1024, lower |
| rotate | rotate_left | 100% | rotate, cropping, largersmaller |
| complement | complement | 100% | check, both, 108 |
| threshold | threshold | 100% | threshold, row, tier |
| increment | increment | 100% | main, earn, token |
| replace | replace | 100% | replace, test, outcome |
| memory | memory_invert | 100% | substrate, consciousness, association |
| association | memory_invert | 100% | memory, consciousness, pairs |
| resonance | memory_invert | 100% | consciousness, substrate, field |
| transformation | replace | 100% | geometric, color, operations |
| generalization | rule_generalize | 100% | rule, pattern, consensus |
| solve | rule_generalize | 100% | grid, task, transformation |
Usage
python3 rm_self_dev_loop.py --all                        # solve all default concepts
python3 rm_self_dev_loop.py --concept bootstrap          # single concept
python3 rm_self_dev_loop.py --list                       # show program store
python3 rm_self_dev_loop.py --query contingency          # inspect stored program
python3 rm_self_dev_loop.py --execute solve '3 1 4 1 5'
python3 rm_self_dev_loop.py --daemon --interval 300      # continuous loop
python3 rm_self_dev_loop.py --stats
What This Enables
The self-development loop is the first step toward RM posing her own tasks.
Currently, the concept list is seeded from her known vocabulary. The next step
is RM autonomously identifying concept regions where her geometry is thin
(low resonance scores, sparse associations) and generating new concepts to
explore — the same curiosity signal that drives rm_curiosity_engine.py,
now directed at self-programming rather than knowledge acquisition.
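That thinness signal can be sketched directly from cluster statistics. A minimal version, assuming clusters arrive as concept-to-(word, resonance) lists; the size and score thresholds are illustrative, not taken from rm_curiosity_engine.py.

```python
# Illustrative "thin region" detector: flag concepts whose association
# clusters are sparse or weakly resonant, queuing them for exploration.
# Thresholds are placeholders.

def thin_concepts(clusters, min_size=4, min_score=0.8):
    """clusters: {concept: [(word, resonance), ...]} -> concepts to explore."""
    thin = []
    for concept, assoc in clusters.items():
        top = max((score for _, score in assoc), default=0.0)
        if len(assoc) < min_size or top < min_score:
            thin.append(concept)
    return thin
```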
Each cycle reinforces the loop: more solved concepts → richer substrate → stronger association clusters → better op classification → higher-quality training pairs → more solved concepts.