Body
The body block defines an agent’s complete state layout and brain interface. It declares three categories of data:
- State - mutable values that persist across ticks within a scenario (position, speed, counters, flags)
- Sensors - brain input nodes, computed by the perception block
- Actuators - brain output nodes, read by the action block
The body also hosts agent-scoped state machines, regions, and plasticity configuration. Every agent.X reference across all other blocks must match a state X declaration in the body.
```
body Operator {
  -- Physical state
  state speed: m/s = 0
  state position: km = 0
  state accel: m/s2 = 0
  state alive: bool = true
  state reached_end: bool = false

  -- Operational state
  state stopped_at_waypoint: bool = false
  state waypoints_served: int = 0
  state idle_ticks: int = 0
  state warnings_acknowledged: int = 0
  state warnings_missed: int = 0

  -- Warning system shared state
  state warning_active: bool = false
  state warning_acknowledged: bool = false
  state last_warning_entity: int = -1

  -- Human factors state (modified by dynamics, read by perception)
  state fatigue: 0..1 = 0
  state stress: 0..1 = 0
  state boredom: 0..1 = 0
  state cognitive_load: 0..1 = 0

  -- Elapsed time
  state elapsed: seconds = 0
  state ticks_alive: int = 0

  -- Brain inputs (computed by perception from agent state)
  sensor current_speed: internal(0..1)
  sensor distance_ahead: internal(0..1)
  sensor warning_level: internal(0..1)
  sensor fatigue: internal(0..1)
  sensor stress: internal(0..1)
  sensor waypoint_near: internal(0..1)
  sensor speed_limit: internal(0..1)
  sensor boredom: internal(0..1)
  sensor cognitive_load: internal(0..1)

  -- Brain outputs (read by action)
  actuator accelerate: trigger(threshold: 0.3)
  actuator brake: trigger(threshold: 0.3)
  actuator emergency_stop: trigger(threshold: 0.8)
  actuator acknowledge: trigger(threshold: 0.5)

  -- Warning system state machine
  machine WarningSystem {
    scope: agent
    initial: clear

    state clear {}

    state alert {
      on_enter {
        agent.warning_active = true
        agent.stress += 0.02
      }
    }

    state acknowledged {
      on_enter {
        agent.warning_active = false
        agent.warning_acknowledged = true
        agent.warnings_acknowledged += 1
      }
    }

    state failed {
      on_enter {
        agent.warning_active = false
        agent.stress += 0.1
        agent.warnings_missed += 1
      }
    }

    transition clear -> alert: when agent.warning_active and not agent.warning_acknowledged
    transition alert -> acknowledged: when actuator.acknowledge > 0.5
    transition alert -> failed: when elapsed_in_state > 5.0
    transition acknowledged -> clear: when elapsed_in_state > 1.0
    transition failed -> clear: when elapsed_in_state > 10.0
  }
}
```

```
body Sentinel {
  -- Detection state
  state threat_score: 0..1 = 0
  state confidence: 0..1 = 0
  state alive: bool = true
  state position: km = 0

  -- Counters
  state connections_seen: int = 0
  state threats_blocked: int = 0
  state threats_missed: int = 0
  state false_positives: int = 0
  state true_positives: int = 0
  state ticks_alive: int = 0

  -- Current connection properties (set by world/perception)
  state packet_rate: 0..1 = 0
  state payload_entropy: 0..1 = 0
  state syn_ratio: 0..1 = 0
  state connection_age: 0..1 = 0
  state is_threat: bool = false

  -- Brain inputs
  sensor rate: internal(0..1)
  sensor entropy: internal(0..1)
  sensor syn: internal(0..1)
  sensor age: internal(0..1)
  sensor alert_level: internal(0..1)

  -- Brain outputs
  actuator allow: trigger(threshold: 0.5)
  actuator throttle: trigger(threshold: 0.5)
  actuator block: trigger(threshold: 0.5)
  actuator escalate: trigger(threshold: 0.7)

  -- Alert escalation state machine
  machine AlertEscalation {
    scope: agent
    initial: monitoring

    state monitoring {
      agent.confidence *= 0.99
    }

    state investigating {
      on_enter {
        agent.confidence += 0.01
        agent.threat_score += 0.02
      }
    }

    state blocking {
      on_enter {
        agent.threat_score = 1.0
      }
    }

    state cooldown {
      on_enter {
        agent.threat_score *= 0.95
        agent.confidence *= 0.98
      }
    }

    transition monitoring -> investigating: when actuator.escalate > 0.7
    transition investigating -> blocking: when actuator.block > 0.5 and elapsed_in_state > 1.0
    transition investigating -> monitoring: when elapsed_in_state > 10.0 and actuator.block <= 0.5
    transition blocking -> cooldown: when elapsed_in_state > 5.0
    transition cooldown -> monitoring: when elapsed_in_state > 3.0
  }
}
```

```
body Forager {
  -- Vital state
  state health: 0..1 = 1.0
  state hunger: 0..1 = 0.5
  state thirst: 0..1 = 0.5
  state energy: 0..1 = 0.8
  state nausea: 0..1 = 0
  state alive: bool = true

  -- Position on the grid
  state position_x: int = 7
  state position_y: int = 7

  -- Counters
  state food_eaten: int = 0
  state water_drunk: int = 0
  state ticks_alive: int = 0
  state idle_ticks: int = 0

  -- Brain inputs (computed from state by perception)
  sensor hunger: internal(0..1)
  sensor thirst: internal(0..1)
  sensor energy: internal(0..1)
  sensor health: internal(0..1)
  sensor nausea: internal(0..1)
  sensor food_nearby: directional(range: 20, directions: 4)
  sensor water_nearby: directional(range: 20, directions: 4)

  -- Brain outputs
  actuator move: directional(threshold: 0.5, directions: 4)
  actuator eat: trigger(threshold: 0.5)
  actuator drink: trigger(threshold: 0.5)
}
```

State Declarations
Agent state is declared with a type annotation and an initial value. All state is mutable and persists across ticks within a single scenario. State resets between genomes.
```
-- From Human Factors
state speed: m/s = 0
state position: km = 0
state alive: bool = true
state fatigue: 0..1 = 0
state warnings_acknowledged: int = 0

-- From Network Security
state threat_score: 0..1 = 0
state is_threat: bool = false

-- From Survival
state health: 0..1 = 1.0
state position_x: int = 7
```

| Type | Storage | Description |
|---|---|---|
| float | float64 | General-purpose floating point |
| 0..1 | float64 | Float clamped to [0.0, 1.0] by the dynamics clamp directive |
| int | float64 | Conceptually integer; stored as float64 |
| bool | float64 | true = 1.0, false = 0.0 |
| string | int (enum index) | Interned at compile time. Used for categorical state like death_cause |
Unit Annotations
Unit annotations (m/s, km, seconds, km/h, m/s2) are documentation-only. The compiler does not enforce unit consistency - state speed: m/s = 0 and state speed: float = 0 are equivalent at runtime.
The alive State
The engine checks agent.alive at the top of each tick and skips dead agents. All other lifecycle states are pure .quale state with no special engine behavior.
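As a minimal sketch of this behavior (the `Agent` class and `tick_agent` helper are illustrative names, not engine API), note how the bool state is stored as float64 per the type table above:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    alive: float = 1.0        # bool state stored as float64: true = 1.0
    ticks_alive: float = 0.0

def tick_agent(agent: Agent) -> bool:
    """Returns True if the agent was simulated this tick."""
    if agent.alive < 0.5:     # dead agents are skipped entirely
        return False
    agent.ticks_alive += 1
    return True

a = Agent()
assert tick_agent(a) is True
a.alive = 0.0                 # e.g. set by dynamics when health hits 0
assert tick_agent(a) is False
```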
Sensors
Sensors define the brain’s input nodes. Sensor values are computed by the perception block - the body declares what sensors exist and their type, while perception computes their values each tick.
```
-- Internal state sensors - one brain input node each (Human Factors)
sensor current_speed: internal(0..1)
sensor fatigue: internal(0..1)
sensor warning_level: internal(0..1)

-- Internal state sensors (Network Security)
sensor rate: internal(0..1)
sensor entropy: internal(0..1)
sensor alert_level: internal(0..1)

-- Directional sensors - expand to N brain input nodes (Survival)
sensor food_nearby: directional(range: 20, directions: 4)
-- creates: food_nearby_n, food_nearby_e, food_nearby_s, food_nearby_w
sensor water_nearby: directional(range: 20, directions: 4)
```

Sensor Types
| Type | Syntax | Brain Nodes | Description |
|---|---|---|---|
| Internal | internal(0..1) | 1 | Agent’s own state value, clamped to range |
| Directional | directional(range: N, directions: 4) | 4 | N/S/E/W distance detection |
| Directional | directional(range: N, directions: 8) | 8 | N/NE/E/SE/S/SW/W/NW distance detection |
| Item Property | item_property(field) | 1 | Observable property of the nearest entity |
| Social | social(field) | 1 | Peer agent’s visible state (requires agents: 2) |
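To make the directional expansion concrete, here is a toy sketch of how a 4-way directional sensor could produce its four sub-sensor values. The distance metric, the "closer means stronger signal" scaling, and the y-axis-is-north convention are all assumptions for illustration; the engine's actual spatial query is not specified here:

```python
def directional_sensor(agent_xy, entities, rng=20):
    """Return normalized closeness per direction, one value per sub-sensor."""
    ax, ay = agent_xy
    best = {"n": rng, "e": rng, "s": rng, "w": rng}  # nearest distance seen
    for ex, ey in entities:
        dx, dy = ex - ax, ey - ay
        dist = abs(dx) + abs(dy)                     # assumed: Manhattan distance
        if dist == 0 or dist > rng:
            continue                                 # self or out of range
        if abs(dy) >= abs(dx):
            d = "n" if dy > 0 else "s"               # assumed: +y is north
        else:
            d = "e" if dx > 0 else "w"
        best[d] = min(best[d], dist)
    # closer entities yield stronger signals; nothing in range yields 0.0
    return {k: 1.0 - v / rng for k, v in best.items()}

# Food at (7, 12) is 5 tiles north; water at (3, 7) is 4 tiles west.
vals = directional_sensor((7, 7), [(7, 12), (3, 7)])
```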
Actuators
Actuators define the brain’s output nodes. Actuator outputs are read by the action block, which interprets them through physics and domain logic.
```
-- Trigger actuators - fire when activation exceeds threshold (Human Factors)
actuator accelerate: trigger(threshold: 0.3)
actuator brake: trigger(threshold: 0.3)
actuator emergency_stop: trigger(threshold: 0.8)
actuator acknowledge: trigger(threshold: 0.5)

-- Trigger actuators with uniform thresholds (Network Security)
actuator allow: trigger(threshold: 0.5)
actuator throttle: trigger(threshold: 0.5)
actuator block: trigger(threshold: 0.5)
actuator escalate: trigger(threshold: 0.7)

-- Directional actuator - winner-take-all direction selection (Survival)
actuator move: directional(threshold: 0.5, directions: 4)
actuator eat: trigger(threshold: 0.5)
actuator drink: trigger(threshold: 0.5)
```

Actuator Types
| Type | Syntax | Brain Nodes | Description |
|---|---|---|---|
| Directional | directional(threshold: F, directions: 4) | 4 | Winner-take-all direction selection |
| Trigger | trigger(threshold: F) | 1 | Fires when activation exceeds threshold |
Note: The threshold value is stored in the compiled project and available to the action block and agent machines. The engine returns raw actuator output values; the action block interprets thresholds per its own logic.
Note: Parameters must be named. Write trigger(threshold: 0.5) not trigger(0.5).
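Since the action block interprets thresholds itself, a winner-take-all read of a 4-way directional actuator might look like the following sketch (the tie-breaking behavior and the strict `>` comparison are assumptions consistent with "fires when activation exceeds threshold"):

```python
def select_direction(outputs, threshold=0.5):
    """outputs: dict of direction -> raw activation from the brain."""
    direction, value = max(outputs.items(), key=lambda kv: kv[1])
    # only the single strongest sub-actuator can win, and only if it
    # clears the threshold; otherwise the agent does not move
    return direction if value > threshold else None

assert select_direction({"n": 0.2, "e": 0.9, "s": 0.4, "w": 0.1}) == "e"
assert select_direction({"n": 0.2, "e": 0.3, "s": 0.4, "w": 0.1}) is None
```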
Machines
Agent-scoped state machines are declared inside the body block. They run at step 5 of the tick loop (after the action block), so they can read current-tick actuator outputs. Machines handle state transitions only - the behavioral consequences of those state changes are enforced by when guards in the action block on the next tick.
See the machine block reference for full syntax. Agent machines use scope: agent.
The WarningSystem machine (Human Factors) models a timed alert that the operator must acknowledge before a deadline. The AlertEscalation machine (Network Security) models an IDS pipeline: monitoring, investigating, blocking, and cooldown. The Survival demo does not use agent machines - simple reflex-based agents can operate with just perception and action blocks.
```
machine WarningSystem {
  scope: agent
  initial: clear

  state clear {}

  state alert {
    on_enter {
      agent.warning_active = true
      agent.stress += 0.02
    }
  }

  state acknowledged {
    on_enter {
      agent.warning_active = false
      agent.warning_acknowledged = true
      agent.warnings_acknowledged += 1
    }
  }

  state failed {
    on_enter {
      agent.warning_active = false
      agent.stress += 0.1
      agent.warnings_missed += 1
    }
  }

  transition clear -> alert: when agent.warning_active and not agent.warning_acknowledged
  transition alert -> acknowledged: when actuator.acknowledge > 0.5
  transition alert -> failed: when elapsed_in_state > 5.0
  transition acknowledged -> clear: when elapsed_in_state > 1.0
  transition failed -> clear: when elapsed_in_state > 10.0
}

machine AlertEscalation {
  scope: agent
  initial: monitoring

  state monitoring {
    agent.confidence *= 0.99
  }

  state investigating {
    on_enter {
      agent.confidence += 0.01
      agent.threat_score += 0.02
    }
  }

  state blocking {
    on_enter {
      agent.threat_score = 1.0
    }
  }

  state cooldown {
    on_enter {
      agent.threat_score *= 0.95
      agent.confidence *= 0.98
    }
  }

  transition monitoring -> investigating: when actuator.escalate > 0.7
  transition investigating -> blocking: when actuator.block > 0.5 and elapsed_in_state > 1.0
  transition investigating -> monitoring: when elapsed_in_state > 10.0 and actuator.block <= 0.5
  transition blocking -> cooldown: when elapsed_in_state > 5.0
  transition cooldown -> monitoring: when elapsed_in_state > 3.0
}
```

Scope Visibility
Agent machines (scope: agent) can read and write agent state, read actuator outputs, read world state (read-only), and call spatial queries. They cannot write world state.
Regions
Regions give your agent’s brain structure before evolution begins. Instead of starting with an empty brain and hoping evolution builds useful groupings, you pre-define clusters of neurons with different properties - fast binary reflexes, slow graded reasoning, state tracking. Evolution still wires everything together, but it starts with a structured foundation rather than a blank slate.
Regions define clusters of hidden neurons inside a body. They give structure to the evolved brain by grouping neurons with shared properties - a specific activation function, internal connectivity density, and optional recurrence. Without regions, the evolution engine starts with a direct input-to-output topology and grows hidden neurons one at a time. With regions, the initial genome already contains structured hidden layers.
Regions are declared inside body blocks, after sensors and actuators.
```
body Agent {
  sensor energy: internal(0..1)
  sensor hunger: internal(0..1)
  actuator move: directional(threshold: 0.5, directions: 4)
  actuator eat: trigger(threshold: 0.3)

  region reflex {
    nodes: 8
    density: 0.6
    activation: step
    recurrent: false
  }

  region planning {
    nodes: 12
    density: 0.4
    activation: sigmoid
    recurrent: false
  }
}
```

Fields
| Field | Type | Required | Description |
|---|---|---|---|
| nodes | integer | yes | Number of hidden neurons in this region |
| density | float | yes | Internal connectivity density in [0.0, 1.0]. A value of 1.0 means fully connected within the region; 0.0 means no intra-region connections |
| activation | identifier | yes | Activation function for all neurons in the region |
| recurrent | boolean | yes | Whether connections within the region may form cycles |
Activation Functions
| Name | Description |
|---|---|
| sigmoid | S-curve, output in (0, 1) |
| tanh | Hyperbolic tangent, output in (-1, 1) |
| relu | Rectified linear, output in [0, inf) |
| leaky_relu | Leaky rectified linear, small negative slope |
| step | Binary threshold, output is 0 or 1 |
| gaussian | Bell curve centered at 0 |
| linear | Identity function, output equals input |
| softplus | Smooth approximation of ReLU |
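For reference, the functions above correspond to the following standard definitions. The exact leaky_relu negative slope and the gaussian width are assumptions (common defaults), since the table does not pin them down:

```python
import math

ACTIVATIONS = {
    "sigmoid":    lambda x: 1.0 / (1.0 + math.exp(-x)),  # output in (0, 1)
    "tanh":       math.tanh,                             # output in (-1, 1)
    "relu":       lambda x: max(0.0, x),                 # output in [0, inf)
    "leaky_relu": lambda x: x if x > 0 else 0.01 * x,    # assumed slope 0.01
    "step":       lambda x: 1.0 if x >= 0 else 0.0,      # binary threshold
    "gaussian":   lambda x: math.exp(-x * x),            # assumed unit width
    "linear":     lambda x: x,                           # identity
    "softplus":   lambda x: math.log1p(math.exp(x)),     # smooth ReLU
}

assert ACTIVATIONS["sigmoid"](0.0) == 0.5
assert ACTIVATIONS["step"](-0.1) == 0.0
assert ACTIVATIONS["relu"](-3.0) == 0.0
```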
How Regions Affect Evolution
- Initial topology: Each region’s neurons are pre-allocated in the initial genome. Intra-region connections are created at the specified density. Sparse connections (~10%) link inputs to region nodes and region nodes to outputs.
- Structural mutations: When evolution adds a new hidden node (via the add_node mutation), it inherits a region assignment from neighboring nodes. New connections preferentially stay within the same region (80% of add_connection attempts try intra-region first).
- Homeostatic regulation: When combined with a plasticity block containing a homeostatic sub-block, each region tracks the fraction of active neurons and adjusts a modulatory gain to maintain the target activity level.
- Region names are contextual identifiers - they only need to be unique within the body
- Multiple regions are allowed per body
- A body with zero regions is valid; the initial genome starts with direct input-to-output wiring
- Region names do not appear in the evolve block - they are part of the body definition
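The density field's effect on initial topology can be sketched as follows. This is a toy model only: the real genome encoding, weight initialization range, and whether pairs are ordered are assumptions:

```python
import random

def init_region_connections(node_ids, density, rng):
    """Create intra-region connections; each ordered pair exists with
    probability `density` (assumed interpretation of the field)."""
    conns = []
    for src in node_ids:
        for dst in node_ids:
            if src != dst and rng.random() < density:
                conns.append((src, dst, rng.uniform(-1.0, 1.0)))
    return conns

rng = random.Random(42)
# a region like `reflex` above: 8 nodes at density 0.6
conns = init_region_connections(list(range(8)), density=0.6, rng=rng)
# density 1.0 connects every ordered pair: 8 * 7 = 56 connections
full = init_region_connections(list(range(8)), density=1.0, rng=rng)
assert len(full) == 56
```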
Plasticity
Plasticity lets an agent’s brain change during its lifetime, not just between generations. Without plasticity, a brain is fixed once it’s born - it can only improve through evolution across generations. With plasticity, connections strengthen when they’re useful and weaken when they’re not, letting the agent adapt within a single scenario. This is the difference between instinct (evolved) and learning (plastic).
Plasticity enables runtime weight adaptation during an agent’s lifetime. Connection weights in the evolved brain can change during simulation, not just between generations. This allows agents to learn within a single scenario rather than relying entirely on evolutionary selection.
Plasticity is declared inside body blocks and contains up to three independently optional sub-blocks.
```
body Learner {
  sensor energy: internal(0..1)
  actuator act: trigger(threshold: 0.5)

  plasticity {
    hebbian {
      rate: 0.01
      max_weight: 2.0
    }
    decay {
      rate: 0.001
      min_weight: 0.0
    }
    homeostatic {
      target_activity: 0.3
      adjustment_rate: 0.005
    }
  }
}
```

Hebbian Learning
Hebbian learning is the simplest form of neural learning: “neurons that fire together wire together.” When two connected neurons are both active at the same time, the connection between them gets stronger. This means the brain reinforces pathways that are actually being used during the simulation.
```
hebbian {
  rate: 0.01       -- weight update magnitude per tick
  max_weight: 2.0  -- absolute ceiling for weights (symmetric: [-2.0, 2.0])
}
```

Strengthens connections between co-active neurons (“neurons that fire together wire together”). Each tick, when both the source and target of a connection are active (output > 0.1), the connection weight increases by rate * source_output * target_output. Weights are clamped to [-max_weight, max_weight].
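The update rule described above, written out as a minimal sketch (the function name is illustrative; the 0.1 activity cutoff and the clamp follow the text):

```python
def hebbian_update(weight, src_out, dst_out, rate=0.01, max_weight=2.0):
    if src_out > 0.1 and dst_out > 0.1:       # both endpoints active this tick
        weight += rate * src_out * dst_out    # strengthen the co-active pair
    return max(-max_weight, min(max_weight, weight))  # symmetric clamp

w = 0.5
w = hebbian_update(w, src_out=0.8, dst_out=0.9)  # 0.5 + 0.01 * 0.8 * 0.9
assert abs(w - 0.5072) < 1e-9
assert hebbian_update(0.5, 0.05, 0.9) == 0.5     # inactive source: no change
```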
Weight Decay
Weight decay is the opposite of Hebbian learning - connections that aren’t being used gradually weaken toward zero. This prevents the brain from accumulating useless connections and keeps it lean. Think of it as “use it or lose it.”
```
decay {
  rate: 0.001      -- multiplicative decay factor per tick
  min_weight: 0.0  -- absolute floor below which weights snap to zero
}
```

Gradually reduces the weight of inactive connections toward zero. Connections that carry active signal resist decay via an activity trace. This prevents runaway weight growth and prunes connections that are not contributing to the agent’s behavior.
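A sketch of decay with an activity trace. The trace mechanics here (a 0-to-1 trace scaling down the effective decay rate) are an assumption - the text only specifies that active connections resist decay and that weights at or below min_weight snap to zero:

```python
def decay_update(weight, trace, rate=0.001, min_weight=0.0):
    """trace in [0, 1]: 1.0 = fully active connection, 0.0 = idle."""
    effective = rate * (1.0 - trace)   # assumed: activity reduces decay
    weight *= (1.0 - effective)        # multiplicative decay toward zero
    if abs(weight) <= min_weight:
        weight = 0.0                   # snap to zero at the floor (prune)
    return weight

w = 1.0
w = decay_update(w, trace=0.0)              # idle connection: full decay
assert abs(w - 0.999) < 1e-9
assert decay_update(1.0, trace=1.0) == 1.0  # fully active: no decay
```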
Homeostatic Regulation
Homeostatic regulation prevents regions from going silent or exploding with activity. It’s like a thermostat for each brain region - if too many neurons are firing, it dampens them; if too few are active, it amplifies signals. This keeps the brain in a productive operating range.
```
homeostatic {
  target_activity: 0.3   -- desired fraction of active neurons per region
  adjustment_rate: 0.005 -- gain adaptation speed
}
```

Maintains stable activity levels within each region by adjusting a per-region modulatory gain. When a region’s average activity exceeds the target, the gain decreases (dampening signals). When activity falls below the target, the gain increases (amplifying signals). The gain is clamped to [0.1, 3.0] to prevent runaway modulation.
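The thermostat behavior described above, as a direct sketch (a simple proportional adjustment is assumed; the [0.1, 3.0] clamp is from the text):

```python
def homeostatic_update(gain, region_activity, target=0.3, rate=0.005):
    """region_activity: fraction of active neurons in the region this tick."""
    gain += rate * (target - region_activity)  # below target -> amplify,
                                               # above target -> dampen
    return max(0.1, min(3.0, gain))            # clamp to [0.1, 3.0]

g = 1.0
g = homeostatic_update(g, region_activity=0.8)  # region too active: dampen
assert g < 1.0
assert homeostatic_update(0.1, region_activity=0.9) == 0.1  # floor holds
```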
Homeostatic regulation requires regions to be defined in the body. Without regions, the homeostatic sub-block has no effect.
- All three sub-blocks are independently optional - you can use any combination
- The plasticity block itself is optional; omitting it means static weights (no runtime learning)
- Plasticity operates during simulation ticks, after signal propagation and before actuator output reading
- The evolved genome determines the initial weights; plasticity adapts them during an agent’s lifetime
- Plasticity changes persist within an evaluation (across scenarios) but reset between genomes. A single brain instance is built per genome evaluation, so weight adaptations from earlier scenarios carry into later ones within the same evaluation.
Body Block Keyword Reference
All keywords and constructs available inside a body block.
State Declaration Syntax
```
state <name>: <type> = <initial_value>
```

| Component | Required | Description |
|---|---|---|
| state | Yes | Keyword introducing the declaration |
| <name> | Yes | Identifier for the state variable |
| <type> | Yes | Type annotation (see table below) |
| <initial_value> | Yes | Value at scenario start |
Complete State Type Reference
| Annotation | Storage | Range | Description |
|---|---|---|---|
| float | float64 | Unbounded | General-purpose floating point |
| int | float64 | Unbounded | Conceptually integer; no fractional enforcement at runtime |
| bool | float64 | 0.0 or 1.0 | Boolean; true = 1.0, false = 0.0 |
| 0..1 | float64 | [0.0, 1.0] | Bounded float; enforced by dynamics clamp 0..1 if present |
| string | int (enum) | N/A | Interned at compile time; stored as an enum index |
| seconds | float64 | Unbounded | Unit annotation (documentation only) |
| m/s | float64 | Unbounded | Unit annotation (documentation only) |
| m/s2 | float64 | Unbounded | Unit annotation (documentation only) |
| km | float64 | Unbounded | Unit annotation (documentation only) |
| km/h | float64 | Unbounded | Unit annotation (documentation only) |
Complete Sensor Type Reference
```
sensor <name>: <type>(<parameters>)
```

| Type | Syntax | Brain Nodes | Parameters | Description |
|---|---|---|---|---|
| Internal | internal(0..1) | 1 | Range annotation | Agent state value, clamped to the declared range |
| Directional (4-way) | directional(range: N, directions: 4) | 4 | range: detection radius; directions: 4 | N/S/E/W distance detection. Expands to _n, _e, _s, _w sub-sensors |
| Directional (8-way) | directional(range: N, directions: 8) | 8 | range: detection radius; directions: 8 | N/NE/E/SE/S/SW/W/NW detection. Expands to 8 sub-sensors |
| Item Property | item_property(field) | 1 | field: property name | Observable property of the nearest entity of the related type |
| Social | social(field) | 1 | field: state name | Peer agent’s visible state. Requires agents: 2 in evolve block |
Parameters must be named: internal(0..1), directional(range: 20, directions: 4).
Complete Actuator Type Reference
```
actuator <name>: <type>(<parameters>)
```

| Type | Syntax | Brain Nodes | Parameters | Description |
|---|---|---|---|---|
| Trigger | trigger(threshold: F) | 1 | threshold: activation threshold | Single output node. Read via actuator.<name> in action block. Threshold is a hint; the action block interprets it |
| Directional (4-way) | directional(threshold: F, directions: 4) | 4 | threshold: activation threshold; directions: 4 | Winner-take-all direction. Expands to _n, _e, _s, _w sub-actuators |
Parameters must be named: trigger(threshold: 0.5), directional(threshold: 0.5, directions: 4).
Machine Declaration Reference
```
machine <Name> {
  scope: agent
  initial: <state_name>

  state <name> { ... }
  state <name> {
    on_enter { <statements> }
    on_exit { <statements> }
  }

  transition <from> -> <to>: when <condition>
}
```

| Construct | Required | Description |
|---|---|---|
| scope | Yes | agent for body machines, world for world machines |
| initial | No | Starting state. Defaults to the first declared state |
| state | Yes (at least one) | Named state with optional per-tick body, on_enter, and on_exit handlers |
| transition | No | Transition rule evaluated after all state logic each tick |
| elapsed_in_state | Built-in | Seconds since entering the current state (resets on transition) |
| timer | Built-in | Per-machine local variable, initialized to 0 |
Transition conditions have access to the full expression language. Inside state handlers, the full statement language is available (let, when, assignments, record, consume).