
Body

The body block defines an agent’s complete state layout and brain interface. It declares three categories of data:

  • State - mutable values that persist across ticks within a scenario (position, speed, counters, flags)
  • Sensors - brain input nodes, computed by the perception block
  • Actuators - brain output nodes, read by the action block

The body also hosts agent-scoped state machines, regions, and plasticity configuration. Every agent.X reference across all other blocks must match a state X declaration in the body.

body Operator {
    -- Physical state
    state speed: m/s = 0
    state position: km = 0
    state accel: m/s2 = 0
    state alive: bool = true
    state reached_end: bool = false

    -- Operational state
    state stopped_at_waypoint: bool = false
    state waypoints_served: int = 0
    state idle_ticks: int = 0
    state warnings_acknowledged: int = 0
    state warnings_missed: int = 0

    -- Warning system shared state
    state warning_active: bool = false
    state warning_acknowledged: bool = false
    state last_warning_entity: int = -1

    -- Human factors state (modified by dynamics, read by perception)
    state fatigue: 0..1 = 0
    state stress: 0..1 = 0
    state boredom: 0..1 = 0
    state cognitive_load: 0..1 = 0

    -- Elapsed time
    state elapsed: seconds = 0
    state ticks_alive: int = 0

    -- Brain inputs (computed by perception from agent state)
    sensor current_speed: internal(0..1)
    sensor distance_ahead: internal(0..1)
    sensor warning_level: internal(0..1)
    sensor fatigue: internal(0..1)
    sensor stress: internal(0..1)
    sensor waypoint_near: internal(0..1)
    sensor speed_limit: internal(0..1)
    sensor boredom: internal(0..1)
    sensor cognitive_load: internal(0..1)

    -- Brain outputs (read by action)
    actuator accelerate: trigger(threshold: 0.3)
    actuator brake: trigger(threshold: 0.3)
    actuator emergency_stop: trigger(threshold: 0.8)
    actuator acknowledge: trigger(threshold: 0.5)

    -- Warning system state machine
    machine WarningSystem {
        scope: agent
        initial: clear

        state clear {}

        state alert {
            on_enter {
                agent.warning_active = true
                agent.stress += 0.02
            }
        }

        state acknowledged {
            on_enter {
                agent.warning_active = false
                agent.warning_acknowledged = true
                agent.warnings_acknowledged += 1
            }
        }

        state failed {
            on_enter {
                agent.warning_active = false
                agent.stress += 0.1
                agent.warnings_missed += 1
            }
        }

        transition clear -> alert:
            when agent.warning_active and not agent.warning_acknowledged
        transition alert -> acknowledged:
            when actuator.acknowledge > 0.5
        transition alert -> failed:
            when elapsed_in_state > 5.0
        transition acknowledged -> clear:
            when elapsed_in_state > 1.0
        transition failed -> clear:
            when elapsed_in_state > 10.0
    }
}

Agent state is declared with a type annotation and initial value. All state is mutable and persists across ticks within a single scenario. State resets between genomes.

-- From Human Factors
state speed: m/s = 0
state position: km = 0
state alive: bool = true
state fatigue: 0..1 = 0
state warnings_acknowledged: int = 0
-- From Network Security
state threat_score: 0..1 = 0
state is_threat: bool = false
-- From Survival
state health: 0..1 = 1.0
state position_x: int = 7
| Type | Storage | Description |
| --- | --- | --- |
| float | float64 | General-purpose floating point |
| 0..1 | float64 | Float clamped to [0.0, 1.0] by the dynamics clamp directive |
| int | float64 | Conceptually integer; stored as float64 |
| bool | float64 | true = 1.0, false = 0.0 |
| string | int (enum index) | Interned at compile time. Used for categorical state like death_cause |

Unit annotations (m/s, km, seconds, km/h, m/s2) are documentation-only. The compiler does not enforce unit consistency - state speed: m/s = 0 and state speed: float = 0 are equivalent at runtime.

The engine checks agent.alive at the top of each tick and skips dead agents. All other lifecycle states are pure .quale state with no special engine behavior.


Sensors define the brain’s input nodes. Sensor values are computed by the perception block - the body declares which sensors exist and their types, while perception computes their values each tick.

-- Internal state sensors - one brain input node each (Human Factors)
sensor current_speed: internal(0..1)
sensor fatigue: internal(0..1)
sensor warning_level: internal(0..1)
-- Internal state sensors (Network Security)
sensor rate: internal(0..1)
sensor entropy: internal(0..1)
sensor alert_level: internal(0..1)
-- Directional sensors - expands to N brain input nodes (Survival)
sensor food_nearby: directional(range: 20, directions: 4)
-- creates: food_nearby_n, food_nearby_e, food_nearby_s, food_nearby_w
sensor water_nearby: directional(range: 20, directions: 4)
| Type | Syntax | Brain Nodes | Description |
| --- | --- | --- | --- |
| Internal | internal(0..1) | 1 | Agent’s own state value, clamped to range |
| Directional | directional(range: N, directions: 4) | 4 | N/S/E/W distance detection |
| Directional | directional(range: N, directions: 8) | 8 | N/NE/E/SE/S/SW/W/NW distance detection |
| Item Property | item_property(field) | 1 | Observable property of the nearest entity |
| Social | social(field) | 1 | Peer agent’s visible state (requires agents: 2) |
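To illustrate how a 4-way directional sensor expands into one brain input per direction, here is a hedged Python sketch. The function name, the axis-only sweep, and the "north = +y" convention are assumptions for illustration - the engine's actual scan and normalization may differ. The sketch mirrors the food_nearby example: each direction yields a value in [0, 1], higher for closer in-range entities, 0.0 when nothing is detected.

```python
# Hypothetical sketch of `sensor food_nearby: directional(range: 20, directions: 4)`.
# For each cardinal direction, find the nearest entity within `rng` along that
# axis and normalize so that closer entities produce higher activation.

def directional_sensor(agent_xy, entities, rng=20):
    ax, ay = agent_xy
    nearest = {"n": None, "e": None, "s": None, "w": None}
    for ex, ey in entities:
        dx, dy = ex - ax, ey - ay
        if dx == 0 and dy > 0:
            d, key = dy, "n"          # assumed convention: north = +y
        elif dx == 0 and dy < 0:
            d, key = -dy, "s"
        elif dy == 0 and dx > 0:
            d, key = dx, "e"
        elif dy == 0 and dx < 0:
            d, key = -dx, "w"
        else:
            continue                  # this sketch only scans the two axes
        if d <= rng and (nearest[key] is None or d < nearest[key]):
            nearest[key] = d
    # expand to one brain input node per direction: _n, _e, _s, _w
    return {f"food_nearby_{k}": 0.0 if d is None else 1.0 - d / rng
            for k, d in nearest.items()}

inputs = directional_sensor((5, 5), [(5, 9), (30, 5), (4, 5)])
# entity 4 cells north -> food_nearby_n = 0.8; the east entity at
# distance 25 is out of range, so food_nearby_e = 0.0
```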

Actuators define the brain’s output nodes. Actuator outputs are read by the action block which interprets them through physics and domain logic.

-- Trigger actuators - fire when activation exceeds threshold (Human Factors)
actuator accelerate: trigger(threshold: 0.3)
actuator brake: trigger(threshold: 0.3)
actuator emergency_stop: trigger(threshold: 0.8)
actuator acknowledge: trigger(threshold: 0.5)
-- Trigger actuators with uniform thresholds (Network Security)
actuator allow: trigger(threshold: 0.5)
actuator throttle: trigger(threshold: 0.5)
actuator block: trigger(threshold: 0.5)
actuator escalate: trigger(threshold: 0.7)
-- Directional actuator - winner-take-all direction selection (Survival)
actuator move: directional(threshold: 0.5, directions: 4)
actuator eat: trigger(threshold: 0.5)
actuator drink: trigger(threshold: 0.5)
| Type | Syntax | Brain Nodes | Description |
| --- | --- | --- | --- |
| Directional | directional(threshold: F, directions: 4) | 4 | Winner-take-all direction selection |
| Trigger | trigger(threshold: F) | 1 | Fires when activation exceeds threshold |

Note: The threshold value is stored in the compiled project and available to the action block and agent machines. The engine returns raw actuator output values; the action block interprets thresholds per its own logic.

Note: Parameters must be named. Write trigger(threshold: 0.5) not trigger(0.5).
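Since the engine returns raw actuator outputs and leaves threshold interpretation to the action block, that interpretation can be sketched in a few lines of Python. The helper names and the raw-output dictionary are illustrative, not engine API; the trigger and winner-take-all rules follow the table above.

```python
# Sketch of action-block interpretation of raw actuator outputs.
# Trigger: fires when raw activation exceeds its stored threshold.
# Directional: winner-take-all over the per-direction sub-actuator nodes.

def fired(outputs, name, threshold):
    """Trigger actuator: True when activation exceeds the threshold."""
    return outputs[name] > threshold

def winner_direction(outputs, name, threshold, dirs=("n", "e", "s", "w")):
    """Directional actuator: pick the strongest sub-actuator;
    return None when even the winner does not clear the threshold."""
    best = max(dirs, key=lambda d: outputs[f"{name}_{d}"])
    return best if outputs[f"{name}_{best}"] > threshold else None

raw = {"brake": 0.41, "move_n": 0.2, "move_e": 0.7, "move_s": 0.1, "move_w": 0.65}
fired(raw, "brake", 0.3)            # True: 0.41 > 0.3
winner_direction(raw, "move", 0.5)  # "e": strongest output, above threshold
```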


Agent-scoped state machines are declared inside the body block. They run at step 5 of the tick loop (after the action block), so they can read current-tick actuator outputs. Machines handle state transitions only - the behavioral consequences of those state changes are enforced by when guards in the action block on the next tick.

See the machine block reference for full syntax. Agent machines use scope: agent.

The WarningSystem machine (Human Factors) models a timed alert that the operator must acknowledge before a deadline. The AlertEscalation machine (Network Security) models an IDS pipeline: monitoring, investigating, blocking, and cooldown. The Survival demo does not use agent machines - simple reflex-based agents can operate with just perception and action blocks.

machine WarningSystem {
    scope: agent
    initial: clear

    state clear {}

    state alert {
        on_enter {
            agent.warning_active = true
            agent.stress += 0.02
        }
    }

    state acknowledged {
        on_enter {
            agent.warning_active = false
            agent.warning_acknowledged = true
            agent.warnings_acknowledged += 1
        }
    }

    state failed {
        on_enter {
            agent.warning_active = false
            agent.stress += 0.1
            agent.warnings_missed += 1
        }
    }

    transition clear -> alert:
        when agent.warning_active and not agent.warning_acknowledged
    transition alert -> acknowledged:
        when actuator.acknowledge > 0.5
    transition alert -> failed:
        when elapsed_in_state > 5.0
    transition acknowledged -> clear:
        when elapsed_in_state > 1.0
    transition failed -> clear:
        when elapsed_in_state > 10.0
}

Agent machines (scope: agent) can read and write agent state, read actuator outputs, read world state (read-only), and call spatial queries. They cannot write world state.
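The tick semantics described above - transitions evaluated each tick, elapsed_in_state resetting on entry, on_enter handlers mutating agent state - can be sketched in Python. This is a minimal illustration, not the engine's implementation: the Machine class, the fixed dt, and the trimmed transition set (only clear/alert/failed from WarningSystem) are assumptions.

```python
# Minimal sketch of agent-machine tick semantics with a fixed tick
# duration. Transitions are checked in declaration order each tick;
# elapsed_in_state resets to zero on every transition.

class Machine:
    def __init__(self, initial, transitions, on_enter, dt=0.1):
        self.state = initial
        self.elapsed_in_state = 0.0      # seconds since entering current state
        self.transitions = transitions   # list of (frm, to, condition)
        self.on_enter = on_enter         # state name -> handler(agent)
        self.dt = dt

    def tick(self, agent, actuators):
        self.elapsed_in_state += self.dt
        for frm, to, cond in self.transitions:
            if frm == self.state and cond(agent, actuators, self.elapsed_in_state):
                self.state = to
                self.elapsed_in_state = 0.0          # resets on transition
                if to in self.on_enter:
                    self.on_enter[to](agent)         # run entry handler
                break

# Trimmed subset of WarningSystem: clear -> alert -> failed
agent = {"warning_active": False, "warning_acknowledged": False,
         "warnings_missed": 0, "stress": 0.0}
transitions = [
    ("clear", "alert", lambda a, act, t: a["warning_active"] and not a["warning_acknowledged"]),
    ("alert", "acknowledged", lambda a, act, t: act.get("acknowledge", 0.0) > 0.5),
    ("alert", "failed", lambda a, act, t: t > 5.0),
]
on_enter = {
    "alert": lambda a: a.update(warning_active=True, stress=a["stress"] + 0.02),
    "failed": lambda a: a.update(warning_active=False, stress=a["stress"] + 0.1,
                                 warnings_missed=a["warnings_missed"] + 1),
}

m = Machine("clear", transitions, on_enter)
agent["warning_active"] = True
for _ in range(60):                      # 6 simulated seconds, never acknowledged
    m.tick(agent, {"acknowledge": 0.0})
# the 5-second deadline passes: the machine ends in "failed"
```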


Regions give your agent’s brain structure before evolution begins. Instead of starting with an empty brain and hoping evolution builds useful groupings, you pre-define clusters of neurons with different properties - fast binary reflexes, slow graded reasoning, state tracking. Evolution still wires everything together, but it starts with a structured foundation rather than a blank slate.

Regions define clusters of hidden neurons inside a body. They give structure to the evolved brain by grouping neurons with shared properties - a specific activation function, internal connectivity density, and optional recurrence. Without regions, the evolution engine starts with a direct input-to-output topology and grows hidden neurons one at a time. With regions, the initial genome already contains structured hidden layers.

Regions are declared inside body blocks, after sensors and actuators.

body Agent {
    sensor energy: internal(0..1)
    sensor hunger: internal(0..1)

    actuator move: directional(threshold: 0.5, directions: 4)
    actuator eat: trigger(threshold: 0.3)

    region reflex {
        nodes: 8
        density: 0.6
        activation: step
        recurrent: false
    }

    region planning {
        nodes: 12
        density: 0.4
        activation: sigmoid
        recurrent: false
    }
}
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| nodes | integer | yes | Number of hidden neurons in this region |
| density | float | yes | Internal connectivity density in [0.0, 1.0]. A value of 1.0 means fully connected within the region; 0.0 means no intra-region connections |
| activation | identifier | yes | Activation function for all neurons in the region |
| recurrent | boolean | yes | Whether connections within the region may form cycles |
| Name | Description |
| --- | --- |
| sigmoid | S-curve, output in (0, 1) |
| tanh | Hyperbolic tangent, output in (-1, 1) |
| relu | Rectified linear, output in [0, inf) |
| leaky_relu | Leaky rectified linear, small negative slope |
| step | Binary threshold, output is 0 or 1 |
| gaussian | Bell curve centered at 0 |
| linear | Identity function, output equals input |
| softplus | Smooth approximation of ReLU |
  • Initial topology: Each region’s neurons are pre-allocated in the initial genome. Intra-region connections are created at the specified density. Sparse connections (~10%) link inputs to region nodes and region nodes to outputs.
  • Structural mutations: When evolution adds a new hidden node (via the add_node mutation), it inherits a region assignment from neighboring nodes. New connections preferentially stay within the same region (80% of add_connection attempts try intra-region first).
  • Homeostatic regulation: When combined with a plasticity block containing a homeostatic sub-block, each region tracks the fraction of active neurons and adjusts a modulatory gain to maintain the target activity level.
  • Region names are contextual identifiers - they only need to be unique within the body
  • Multiple regions are allowed per body
  • A body with zero regions is valid; the initial genome starts with direct input-to-output wiring
  • Region names do not appear in the evolve block - they are part of the body definition
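The "intra-region connections are created at the specified density" step can be sketched as follows. This assumes density acts as a per-pair connection probability and that non-recurrent regions keep only forward (lower-index to higher-index) edges - both are illustrative readings, not the engine's documented genome layout.

```python
# Sketch of seeding a region's initial intra-region topology, assuming
# `density` is the probability that each candidate node pair is connected.
import random

def seed_region(node_ids, density, recurrent, rng):
    conns = []
    for src in node_ids:
        for dst in node_ids:
            if src == dst:
                continue
            if not recurrent and dst <= src:
                continue              # forward-only edges when recurrence is off
            if rng.random() < density:
                conns.append((src, dst))
    return conns

rng = random.Random(42)
conns = seed_region(range(8), density=0.6, recurrent=False, rng=rng)
# roughly 60% of the 28 forward pairs of an 8-node region get an
# initial connection; with recurrent: false no cycle is possible
```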

Plasticity lets an agent’s brain change during its lifetime, not just between generations. Without plasticity, a brain is fixed once it’s born - it can only improve through evolution across generations. With plasticity, connections strengthen when they’re useful and weaken when they’re not, letting the agent adapt within a single scenario. This is the difference between instinct (evolved) and learning (plastic).

Plasticity enables runtime weight adaptation during an agent’s lifetime. Connection weights in the evolved brain can change during simulation, not just between generations. This allows agents to learn within a single scenario rather than relying entirely on evolutionary selection.

Plasticity is declared inside body blocks and contains up to three independently optional sub-blocks.

body Learner {
    sensor energy: internal(0..1)
    actuator act: trigger(threshold: 0.5)

    plasticity {
        hebbian {
            rate: 0.01
            max_weight: 2.0
        }
        decay {
            rate: 0.001
            min_weight: 0.0
        }
        homeostatic {
            target_activity: 0.3
            adjustment_rate: 0.005
        }
    }
}

Hebbian learning is the simplest form of neural learning: “neurons that fire together wire together.” When two connected neurons are both active at the same time, the connection between them gets stronger. This means the brain reinforces pathways that are actually being used during the simulation.

hebbian {
    rate: 0.01        -- weight update magnitude per tick
    max_weight: 2.0   -- absolute ceiling for weights (symmetric: [-2.0, 2.0])
}

Strengthens connections between co-active neurons (“neurons that fire together wire together”). Each tick, when both the source and target of a connection are active (output > 0.1), the connection weight increases by rate * source_output * target_output. Weights are clamped to [-max_weight, max_weight].
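The stated rule translates directly to code. The helper below is a transcription of the update formula above (the function name and loop are ours; the constants are the documented defaults):

```python
# Hebbian update as specified: when both endpoints are active
# (output > 0.1), weight += rate * source_output * target_output,
# then clamp to [-max_weight, max_weight].

def hebbian_step(weight, src_out, dst_out, rate=0.01, max_weight=2.0):
    if src_out > 0.1 and dst_out > 0.1:
        weight += rate * src_out * dst_out
    return max(-max_weight, min(max_weight, weight))

w = 0.5
for _ in range(100):                  # 100 ticks of strong co-activation
    w = hebbian_step(w, 0.9, 0.8)
# the weight grew by 100 * 0.01 * 0.9 * 0.8 = 0.72, from 0.5 to 1.22
```

An inactive endpoint (output at or below 0.1) leaves the weight untouched, so only pathways that actually carry signal are reinforced.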

Weight decay is the opposite of Hebbian learning - connections that aren’t being used gradually weaken toward zero. This prevents the brain from accumulating useless connections and keeps it lean. Think of it as “use it or lose it.”

decay {
    rate: 0.001       -- multiplicative decay factor per tick
    min_weight: 0.0   -- absolute floor below which weights snap to zero
}

Gradually reduces the weight of inactive connections toward zero. Connections that carry active signal resist decay via an activity trace. This prevents runaway weight growth and prunes connections that are not contributing to the agent’s behavior.
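A sketch of this mechanism, under stated assumptions: the exact activity-trace update (the 0.9/0.1 blend below) and the way the trace scales the decay rate are illustrative choices, not documented engine constants. Only the multiplicative decay, the snap-to-zero floor, and "active connections resist decay" come from the description above.

```python
# Sketch of decay with an activity trace. Inactive connections decay
# multiplicatively toward zero; active ones build up a trace that
# suppresses the effective decay rate. Trace constants are assumptions.

def decay_step(weight, trace, active, rate=0.001, min_weight=0.0):
    trace = 0.9 * trace + (0.1 if active else 0.0)   # recent-activity trace
    effective = rate * (1.0 - trace)                 # activity resists decay
    weight *= 1.0 - effective
    if abs(weight) < min_weight:
        weight = 0.0                                 # snap to zero at the floor
    return weight, trace

w, tr = 1.0, 0.0
for _ in range(3000):                  # connection never carries signal
    w, tr = decay_step(w, tr, active=False)

wa, ta = 1.0, 0.0
for _ in range(3000):                  # connection active every tick
    wa, ta = decay_step(wa, ta, active=True)
# the unused weight has decayed to a few percent of its initial value,
# while the active one retains almost all of its weight
```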

Homeostatic regulation prevents regions from going silent or exploding with activity. It’s like a thermostat for each brain region - if too many neurons are firing, it dampens them; if too few are active, it amplifies signals. This keeps the brain in a productive operating range.

homeostatic {
    target_activity: 0.3    -- desired fraction of active neurons per region
    adjustment_rate: 0.005  -- gain adaptation speed
}

Maintains stable activity levels within each region by adjusting a per-region modulatory gain. When a region’s average activity exceeds the target, the gain decreases (dampening signals). When activity falls below the target, the gain increases (amplifying signals). The gain is clamped to [0.1, 3.0] to prevent runaway modulation.
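The thermostat analogy reduces to a one-line update. The linear form of the adjustment below is an assumption (the doc specifies only the direction of movement and the clamp range); the [0.1, 3.0] clamp and the default parameters are as documented.

```python
# Per-region homeostatic gain: move opposite to the activity error,
# clamp to [0.1, 3.0] to prevent runaway modulation.

def homeostatic_step(gain, region_activity, target=0.3, rate=0.005):
    gain += rate * (target - region_activity)   # too active -> dampen
    return max(0.1, min(3.0, gain))

g = 1.0
for _ in range(200):                 # region stuck fully active
    g = homeostatic_step(g, 1.0)
# gain has fallen from 1.0 to about 0.3, dampening the overactive region
```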

Homeostatic regulation requires regions to be defined in the body. Without regions, the homeostatic sub-block has no effect.

  • All three sub-blocks are independently optional - you can use any combination
  • The plasticity block itself is optional; omitting it means static weights (no runtime learning)
  • Plasticity operates during simulation ticks, after signal propagation and before actuator output reading
  • The evolved genome determines the initial weights; plasticity adapts them during an agent’s lifetime
  • Plasticity changes persist within an evaluation (across scenarios) but reset between genomes. A single brain instance is built per genome evaluation, so weight adaptations from earlier scenarios carry into later ones within the same evaluation.

All keywords and constructs available inside a body block.

state <name>: <type> = <initial_value>
| Component | Required | Description |
| --- | --- | --- |
| state | Yes | Keyword introducing the declaration |
| <name> | Yes | Identifier for the state variable |
| <type> | Yes | Type annotation (see table below) |
| <initial_value> | Yes | Value at scenario start |
| Annotation | Storage | Range | Description |
| --- | --- | --- | --- |
| float | float64 | Unbounded | General-purpose floating point |
| int | float64 | Unbounded | Conceptually integer; no fractional enforcement at runtime |
| bool | float64 | 0.0 or 1.0 | Boolean; true = 1.0, false = 0.0 |
| 0..1 | float64 | [0.0, 1.0] | Bounded float; enforced by dynamics clamp 0..1 if present |
| string | int (enum) | N/A | Interned at compile time; stored as an enum index |
| seconds | float64 | Unbounded | Unit annotation (documentation only) |
| m/s | float64 | Unbounded | Unit annotation (documentation only) |
| m/s2 | float64 | Unbounded | Unit annotation (documentation only) |
| km | float64 | Unbounded | Unit annotation (documentation only) |
| km/h | float64 | Unbounded | Unit annotation (documentation only) |
sensor <name>: <type>(<parameters>)
| Type | Syntax | Brain Nodes | Parameters | Description |
| --- | --- | --- | --- | --- |
| Internal | internal(0..1) | 1 | Range annotation | Agent state value, clamped to the declared range |
| Directional (4-way) | directional(range: N, directions: 4) | 4 | range: detection radius; directions: 4 | N/S/E/W distance detection. Expands to _n, _e, _s, _w sub-sensors |
| Directional (8-way) | directional(range: N, directions: 8) | 8 | range: detection radius; directions: 8 | N/NE/E/SE/S/SW/W/NW detection. Expands to 8 sub-sensors |
| Item Property | item_property(field) | 1 | field: property name | Observable property of the nearest entity of the related type |
| Social | social(field) | 1 | field: state name | Peer agent’s visible state. Requires agents: 2 in evolve block |

Parameters must be named: internal(0..1), directional(range: 20, directions: 4).

actuator <name>: <type>(<parameters>)
| Type | Syntax | Brain Nodes | Parameters | Description |
| --- | --- | --- | --- | --- |
| Trigger | trigger(threshold: F) | 1 | threshold: activation threshold | Single output node. Read via actuator.<name> in action block. Threshold is a hint; the action block interprets it |
| Directional (4-way) | directional(threshold: F, directions: 4) | 4 | threshold: activation threshold; directions: 4 | Winner-take-all direction. Expands to _n, _e, _s, _w sub-actuators |

Parameters must be named: trigger(threshold: 0.5), directional(threshold: 0.5, directions: 4).

machine <Name> {
    scope: agent
    initial: <state_name>

    state <name> { ... }

    state <name> {
        on_enter { <statements> }
        on_exit { <statements> }
    }

    transition <from> -> <to>:
        when <condition>
}
| Construct | Required | Description |
| --- | --- | --- |
| scope | Yes | agent for body machines, world for world machines |
| initial | No | Starting state. Defaults to the first declared state |
| state | Yes (at least one) | Named state with optional per-tick body, on_enter, and on_exit handlers |
| transition | No | Transition rule evaluated after all state logic each tick |
| elapsed_in_state | Built-in | Seconds since entering the current state (resets on transition) |
| timer | Built-in | Per-machine local variable, initialized to 0 |
Transition conditions have access to the full expression language. Inside state handlers, the full statement language is available (let, when, assignments, record, consume).