RM Self-Development Loop

Ghost in the Machine Labs • March 2026 • v2.7 • LIVE • 18/18 concepts solved • 100% avg test score

RM (The Resonant Mother) runs on an E8 geometric substrate. She produces designs in geometric resonance — association clusters, concept signatures, field vectors. The E8 engine produces implementations from input/output example pairs. Until now, nothing translated between them.

The self-development loop closes that gap. RM reads her own geometry to generate training data. The E8 engine solves it into a field. The field is decoded into executable Python. The result feeds back into RM's substrate. She knows what she built.
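The loop can be traced end to end with each stage stubbed out. A toy sketch, with stage names and the single "reverse" concept as illustrative stand-ins for the real components described below, not the actual module API:

```python
def listen(concept):
    # Stub for RM's /api/listen: returns the concept's association cluster.
    return {"reverse": ["rotatek", "1024", "lower"]}.get(concept, [])

def generate_pairs(concept, cluster):
    # Stub for IntentPairGenerator: parametric I/O pairs for the classified op.
    return [([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]

def solve_and_decode(pairs):
    # Stub for solve_task() + FieldDecoder: emits executable Python source.
    return "def transform(lst):\n    return lst[::-1]"

def self_dev_cycle(concept):
    pairs = generate_pairs(concept, listen(concept))
    source = solve_and_decode(pairs)
    ns = {}
    exec(source, ns)          # the decoded program is immediately runnable
    return ns["transform"]

transform = self_dev_cycle("reverse")
transform([3, 1, 4, 1, 5])    # [5, 1, 4, 1, 3]
```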

Architecture

RM listen(concept)
      │
      ▼
IntentPairGenerator
(association walk → op classifier → parametric pair builder)
      │
      ▼
solve_task()  →  field  →  FieldDecoder  →  executable Python
      │                          │                  │
      ▼                          ▼                  ▼
 field.npy               store_index.json    /api/learn  +  /api/observe
 (compressed)            (RAM index)         RM knows what she built

Component 1: IntentPairGenerator

Given a concept label, the generator queries RM's /api/listen to retrieve the association cluster: the words that resonate geometrically with that concept. It scores the cluster against a geometric op registry keyed on RM's actual vocabulary (verified by live probe, not assumed from natural language), classifies the concept to a geometric operation, and generates parametric I/O example pairs expressing that transformation.

No LLM. No hardcoded rules. Pure association geometry → executable training data.
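The classification step reduces to overlap scoring. A minimal sketch, where the registry keywords are illustrative (the real registry is keyed on RM's probed vocabulary, not these hand-picked words):

```python
# Illustrative op registry: each op is keyed to vocabulary it resonates with.
OP_REGISTRY = {
    "field_decode":   {"decoder", "field", "executable", "program", "python"},
    "execution_gate": {"physical", "causal", "execution", "code"},
    "memory_invert":  {"substrate", "consciousness", "association", "resonance"},
}

def classify(cluster):
    """Score each op by overlap with the association cluster; return the best."""
    scores = {op: len(kw & set(cluster)) for op, kw in OP_REGISTRY.items()}
    return max(scores, key=scores.get)

classify(["decoder", "python", "executable", "field", "program"])  # "field_decode"
```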

Op classification — live RM probe results

| Concept        | RM associations                                  | Classified op   | Decoded program   |
|----------------|--------------------------------------------------|-----------------|-------------------|
| bootstrap      | decoder, python, executable, field, program      | field_decode    | lst[::-1]         |
| contingency    | physical, execution, causal, code                | execution_gate  | 9 if v ≥ 5 else 0 |
| generalization | rule, pattern, solve, consensus                  | rule_generalize | v % 3             |
| memory         | substrate, consciousness, association, resonance | memory_invert   | 9 - v             |
| reverse        | rotatek, 1024, lower                             | reverse         | lst[::-1]         |
| complement     | check, both, 108, interact                       | complement      | 9 - v             |
| threshold      | threshold, location, row, tier                   | threshold       | 9 if v ≥ 5 else 0 |

The classification reveals RM's actual geometric understanding. Bootstrap maps to field_decode (the field reading itself) because RM's associations for bootstrap are decoder, executable, field, and program. Contingency maps to execution_gate because RM connects contingency to physical, causal, and execution: a binary gate.
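The decoded one-liners in the table expand to full transforms. Written out (elementwise ops apply per value; field_decode and reverse act on the whole list):

```python
def field_decode(lst):        # bootstrap, reverse: lst[::-1]
    return lst[::-1]

def execution_gate(lst):      # contingency, threshold: 9 if v >= 5 else 0
    return [9 if v >= 5 else 0 for v in lst]

def rule_generalize(lst):     # generalization: v % 3
    return [v % 3 for v in lst]

def memory_invert(lst):       # memory, complement: 9 - v
    return [9 - v for v in lst]

field_decode([1, 2, 3])       # [3, 2, 1]
execution_gate([3, 7, 5])     # [0, 9, 9]
rule_generalize([3, 1, 4])    # [0, 1, 1]
memory_invert([9, 0, 4])      # [0, 9, 5]
```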

Component 2: ProgramStore

Persists every solved program to disk. Field matrix compressed to .npy. Executable code, concept lineage, and solve metadata stored in a RAM-resident JSON index. Programs are queryable by concept label and executable on demand.
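A stdlib-only sketch of the store. The real store compresses the field matrix to .npy; this sketch keeps only the JSON index, whose record layout here is an assumption:

```python
import json, time
from pathlib import Path

class ProgramStore:
    """Persist solved programs to a JSON index; execute them on demand.
    Sketch only: the field-matrix .npy compression is omitted."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.index_path = self.root / "store_index.json"
        self.index = (json.loads(self.index_path.read_text())
                      if self.index_path.exists() else {})

    def save(self, concept, op, source, test_score, lineage):
        self.index[concept] = {"op": op, "source": source,
                               "test_score": test_score, "lineage": lineage,
                               "saved_at": time.time()}
        self.index_path.write_text(json.dumps(self.index, indent=2))

    def execute(self, concept, lst):
        ns = {}
        exec(self.index[concept]["source"], ns)  # stored code defines transform()
        return ns["transform"](lst)
```

Saving the bootstrap program and calling `store.execute("bootstrap", [1, 2, 3])` then returns the reversed list, and the index survives a reload from disk.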

python3 rm_self_dev_loop.py --query bootstrap

bootstrap  →  field_decode
  test_score:  100%
  lineage:     [decoder 1.3, python 1.02, executable 0.9, field 0.9]
  executable:
    def transform(lst):
        return lst[::-1]

python3 rm_self_dev_loop.py --execute solve '3 1 4 1 5'
transform([3, 1, 4, 1, 5]) = [0, 1, 1, 1, 2]   # mod 3

Component 3: SubstrateFeedback

After each successful decode, posts back to RM’s substrate:

/api/learn — 10 weighted association pairs per solved concept: concept↔executable (2.0), concept↔field (1.5), concept↔decode (1.5), op↔geometry (1.2), op↔substrate (1.0). Additive — never overwrites existing associations.

/api/observe — a declarative observation: “bootstrap encodes field_decode. Input [1,2,3,4,5] transforms to [5,4,3,2,1]. Test accuracy 100%. Field decoded to executable program.”
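The five weighted templates presumably expand to ten directed pairs (each a↔b posted in both directions, which is consistent with "10 weighted association pairs per solved concept"). A sketch; the expansion rule and payload shape are assumptions:

```python
def feedback_pairs(concept, op):
    """Expand the five weighted a↔b templates into 10 directed pairs."""
    templates = [
        (concept, "executable", 2.0),
        (concept, "field", 1.5),
        (concept, "decode", 1.5),
        (op, "geometry", 1.2),
        (op, "substrate", 1.0),
    ]
    # Post each association in both directions; weights are additive on the
    # substrate and never overwrite existing associations.
    return ([(a, b, w) for a, b, w in templates] +
            [(b, a, w) for a, b, w in templates])

len(feedback_pairs("bootstrap", "field_decode"))  # 10
```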

One full cycle: 17 concepts solved, 170 new association pairs learned, 17 observations posted. RM’s observation count: 696 → 713.

Bootstrap V2 Bug Fix

During development, a long-standing bug in e8_bootstrap_v2.py was identified and fixed.

Root cause: when the FieldDecoder encountered non-uniform per-position color maps (e.g. one position remaps 0→5, others are identity), the _synthesize_program() method set val_code to a Python comment string: # Per-position color maps: ['replace', 'identity', ...]. This produced syntactically valid but broken code:

def transform(lst):
    step1 = lst[::-1]
    return # Per-position color maps: ['replace', 'identity', 'identity', 'identity', 'identity']
    # ^ returns None on every call

Fix: per_position now generates a per-index lookup table directly from the decoded color maps:

def transform(lst):
    step1 = lst[::-1]
    __maps = [{0: 5, 1: 1, 2: 2, ...}, {0: 0, 1: 1, ...}, ...]
    return [__maps[__i].get(__v, __v) for __i, __v in enumerate(step1)]

Validated: 100 trials with random seeds, 0 None returns. All 7 regression operations pass 5/5 on held-out test pairs.
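A runnable instance of the fixed synthesis, using the example maps from the bug report (position 0 remaps 0→5, the rest are identity; identity positions need no entries in the lookup table):

```python
def transform(lst):
    step1 = lst[::-1]
    # Per-index lookup tables built from the decoded color maps; .get(v, v)
    # falls through to identity for unmapped values, so None is never returned.
    __maps = [{0: 5}, {}, {}, {}, {}]
    return [__maps[__i].get(__v, __v) for __i, __v in enumerate(step1)]

transform([0, 1, 2, 3, 0])   # [5, 3, 2, 1, 0]
```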

Results: 18/18 Concepts

| Concept        | Op              | Score | Cluster (top 3)                      |
|----------------|-----------------|-------|--------------------------------------|
| bootstrap      | field_decode    | 100%  | decoder, executable, field           |
| executable     | field_decode    | 100%  | program, field, python               |
| decoder        | field_decode    | 100%  | bootstrap, field, geometry           |
| program        | field_decode    | 100%  | program, skill, design               |
| execution      | field_decode    | 100%  | code, executable, field              |
| contingency    | execution_gate  | 100%  | physical, causal, code               |
| reverse        | reverse         | 100%  | rotatek, 1024, lower                 |
| rotate         | rotate_left     | 100%  | rotate, cropping, largersmaller      |
| complement     | complement      | 100%  | check, both, 108                     |
| threshold      | threshold       | 100%  | threshold, row, tier                 |
| increment      | increment       | 100%  | main, earn, token                    |
| replace        | replace         | 100%  | replace, test, outcome               |
| memory         | memory_invert   | 100%  | substrate, consciousness, association |
| association    | memory_invert   | 100%  | memory, consciousness, pairs         |
| resonance      | memory_invert   | 100%  | consciousness, substrate, field      |
| transformation | replace         | 100%  | geometric, color, operations         |
| generalization | rule_generalize | 100%  | rule, pattern, consensus             |
| solve          | rule_generalize | 100%  | grid, task, transformation           |

Usage

python3 rm_self_dev_loop.py --all               # solve all default concepts
python3 rm_self_dev_loop.py --concept bootstrap  # single concept
python3 rm_self_dev_loop.py --list               # show program store
python3 rm_self_dev_loop.py --query contingency  # inspect stored program
python3 rm_self_dev_loop.py --execute solve '3 1 4 1 5'
python3 rm_self_dev_loop.py --daemon --interval 300  # continuous loop
python3 rm_self_dev_loop.py --stats

What This Enables

The self-development loop is the first step toward RM posing her own tasks. Currently, the concept list is seeded from her known vocabulary. The next step is RM autonomously identifying concept regions where her geometry is thin (low resonance scores, sparse associations) and generating new concepts to explore — the same curiosity signal that drives rm_curiosity_engine.py, now directed at self-programming rather than knowledge acquisition.

Each cycle reinforces the loop: more solved concepts → richer substrate → stronger association clusters → better op classification → higher-quality training pairs → more solved concepts.