Specification Language + Evolution Engine

Behaviour emerges from
topology, not code.

Define what an agent can sense and do. Define what success looks like. Evolution discovers the neural wiring that produces intelligent behaviour, without a single line of behavioural logic.

  • Define: sensors, actuators, and what success looks like
  • Evolve: thousands of generations of natural selection
  • Deploy: evolved brains run at microsecond speed

A different paradigm

Traditional AI tells agents what to do. Quale lets them figure it out.

Traditional Approach

def update(agent):
    if agent.health < 0.3:
        flee_to_safety()
    elif agent.hunger > 0.7 and food_nearby:
        if food.is_safe(agent.memory):
            path = a_star(agent.pos, food.pos)
            move_along(path)
            if adjacent(food): eat()
        else:
            avoid(food)
    elif agent.thirst > 0.5:
        find_water()
        drink()
    else:
        patrol(random_waypoint())

    # repeat for every behaviour...
    # combat, trading, fleeing,
    # social, crafting, exploring...
  • Immediate results, no training time
  • Fine-grained control over every decision
  • Predictable - but only for situations you anticipated
  • Every behaviour must be hand-coded
  • Breaks when conditions combine unexpectedly
  • Code complexity grows with every behaviour you add
vs

Quale Approach

body Forager {
  state health: 0..1 = 1.0
  state hunger: 0..1 = 0.5
  sensor hunger:     internal(0..1)
  sensor food_nearby: directional(range: 20, directions: 4)
  actuator move: directional(threshold: 0.5, directions: 4)
  actuator eat:  trigger(threshold: 0.5)
}

perception ForagerSenses {
  sensor.hunger = agent.hunger
}

action ForagerActions {
  -- Brain outputs drive movement and eating
}

fitness ForagerGoals {
  gate alive
  maximize health: 5.0
  penalize hunger: 2.0
}
  • Zero behavioural code written
  • Handles situations you never anticipated
  • Complexity emerges from selection pressure
  • Unpredictable - naturalistic, but harder to guarantee
  • Requires training time (minutes to hours)
  • Less obvious why a brain makes a decision - but inspectable with tooling
  • Fitness function design takes iteration

When to use Quale

Quale isn't a replacement for traditional AI. It's a different tool for a different class of problem. The best systems use both.

Use Quale when

  • Behaviour is too complex to hand-code
  • You want agents that surprise you, not follow a script
  • The problem has a clear success metric but no clear solution
  • Agents need to adapt to environments they weren't designed for
  • You're modelling emergent or naturalistic behaviour

Use traditional AI when

  • You need guaranteed deterministic behaviour
  • The problem has a known optimal solution
  • Every decision must be traceable to a specific rule
  • Response time must be immediate with no training phase
  • Regulatory compliance requires explainable decisions

Use both together

The most powerful approach combines evolved decision-making with deterministic execution. Quale decides what to do. Traditional code handles how to do it.

Quale         Decides to flee, fight, trade, or explore
Traditional   A* pathfinding, physics, animation, UI

Quale         Learns which signals require braking
Traditional   Brake force calculation, speed clamping
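As a concrete sketch of this split, a hypothetical game tick might look like the Python below. The activation dictionary, intent names, and helper functions are illustrative assumptions, not Quale's actual API:

```python
# Hypothetical hybrid pattern: the evolved brain picks *what* to do,
# deterministic code decides *how*. All names here are stand-ins.

INTENTS = ["flee", "fight", "trade", "explore"]

def choose_intent(activations: dict) -> str:
    # Quale's side: the most strongly activated intent wins.
    return max(INTENTS, key=lambda i: activations.get(i, 0.0))

def execute(intent: str, pos: int, exits: list) -> str:
    # Traditional side: deterministic, hand-written execution.
    if intent == "flee":
        target = min(exits, key=lambda e: abs(e - pos))  # nearest exit
        return f"pathfind to exit at {target}"
    return f"run scripted {intent} routine"

step = execute(choose_intent({"flee": 0.8, "explore": 0.3}), pos=4, exits=[0, 10])
# -> "pathfind to exit at 0"
```

The boundary matters: the evolved part stays small and inspectable, while pathfinding, physics, and animation remain fully deterministic and testable.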

What inspired Quale?

In 2024, researchers completed the first full map of a fruit fly's brain: every one of its ~140,000 neurons and the connections between them. When they simulated this wiring digitally, the virtual fly exhibited naturalistic behaviour without any behavioural programming. The structure of the connections alone was enough.

If you faithfully replicate neural topology, behaviour emerges from the structure itself. No rules, no scripts, no decision trees. The wiring is the program.

Quale applies this principle to artificial agents. Instead of mapping a biological connectome, it evolves one from scratch through natural selection.

Winding, M. et al. (2023). "The connectome of an insect brain." Science, 379(6636). Read the paper


How Quale works

1. Define the body

Specify what your agent can sense (sensors) and do (actuators). This is the interface between brain and world. No behaviour, just capabilities.

2. Define success

Write a fitness function that scores outcomes. "Survive longer = higher score." You define what success looks like, never how to achieve it.
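The ForagerGoals block earlier shows what such a scoring spec looks like in the DSL. As a minimal Python sketch of how it might be evaluated, assuming `gate alive` means a dead agent scores zero and the numbers are linear weights (both assumptions, not documented DSL semantics):

```python
# Hypothetical scoring for the ForagerGoals spec:
#   gate alive / maximize health: 5.0 / penalize hunger: 2.0

def forager_fitness(outcome: dict) -> float:
    # gate alive: a dead agent scores nothing, whatever else it did.
    if not outcome["alive"]:
        return 0.0
    # maximize health: 5.0 -> weighted reward
    # penalize hunger: 2.0 -> weighted cost
    return 5.0 * outcome["health"] - 2.0 * outcome["hunger"]

print(forager_fitness({"alive": True, "health": 0.8, "hunger": 0.5}))   # 3.0
print(forager_fitness({"alive": False, "health": 1.0, "hunger": 0.0}))  # 0.0
```

Note that nothing here says how to stay healthy or fed; the function only ranks outcomes, and evolution searches for wiring that ranks well.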

3. Evolve

A population of random neural topologies is evaluated, selected, crossed over, and mutated. Over thousands of generations, behaviour emerges from the wiring.
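The loop described above can be sketched in a few lines of Python. This toy version evolves flat weight vectors with truncation selection, one-point crossover, and Gaussian point mutation; Quale's real genomes also encode topology, so treat this only as the shape of the algorithm:

```python
import random

# Toy generational loop: evaluate -> select -> crossover -> mutate.

def evolve(fitness, genome_len=8, pop_size=30, generations=200, seed=42):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]      # truncation selection
        pop = parents[:]                      # parents survive unchanged
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:            # occasional point mutation
                i = rng.randrange(genome_len)
                child[i] += rng.gauss(0, 0.2)
            pop.append(child)
    return max(pop, key=fitness)

# Example: evolve weights toward the all-ones vector.
best = evolve(lambda g: -sum((w - 1.0) ** 2 for w in g))
```

The fitness callback never says how to reach the target, only how close a genome is; selection pressure does the rest.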

4. Deploy

The evolved brain runs via signal propagation. Sensor values flow through weighted connections and produce actuator commands. Microseconds per tick.
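The propagation step is cheap because it is just weighted sums. A minimal sketch, assuming connections are flat (sensor, actuator, weight) triples and actuator activations are sigmoid-squashed (both assumptions about the runtime, not its documented internals):

```python
import math

# Hypothetical deploy-time tick: sensor values flow through weighted
# connections into actuator activations. No hidden nodes in this sketch.

def propagate(sensors: dict, connections: list, actuators: list) -> dict:
    """connections: list of (source_sensor, target_actuator, weight)."""
    totals = {a: 0.0 for a in actuators}
    for src, dst, w in connections:
        totals[dst] += sensors[src] * w
    # Sigmoid squashing; an actuator fires when it crosses its threshold.
    return {a: 1.0 / (1.0 + math.exp(-t)) for a, t in totals.items()}

acts = propagate(
    sensors={"hunger": 0.9, "food_nearby": 1.0},
    connections=[("hunger", "eat", 2.0), ("food_nearby", "eat", 1.5),
                 ("hunger", "move", -0.5)],
    actuators=["eat", "move"],
)
# eat: sigmoid(0.9*2.0 + 1.0*1.5) = sigmoid(3.3), well above a 0.5 threshold
```

Each tick is a fixed number of multiply-adds over the evolved connection list, which is why an evolved brain can run in microseconds.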

What does "no behavioural code" actually mean?

To validate the engine, the first test domain was a survival simulation. Agents had to discover how to stay alive on their own. The same engine applies to any domain: game AI, network security, safety modelling, robotics.

Generation 0

The agent has random wiring. It wanders aimlessly, ignores resources, takes no meaningful actions. It fails almost immediately.

Generation 50

Agents whose random wiring happens to produce useful actions survive slightly longer, and their wiring is passed on. The population begins converging on basic strategies.

Generation 200

The population reliably performs the core actions needed to meet the fitness criteria. Nobody programmed this. The wiring was selected for because it scored higher.

Generation 500+

Complex strategies emerge: agents respond to other agents, avoid hazards by observing consequences, and develop behaviours no human designer would have written.

Proof of concept

The engine was validated using a survival simulation as the test domain. Three phases of experiments demonstrated progressively complex emergent behaviour from topology evolution alone.

Phase 1: Foraging from survival pressure

Agents evolved to eat food and drink water purely because not doing so killed them. With consumption rewards completely removed, the behaviour persists. Validated across 10 independent seeds with 787% average fitness improvement.

Phase 2: Sensory food discrimination

When dangerous food was introduced, agents evolved 93% avoidance using colour, smell, and texture signals. A 7:1 safe-to-bad ratio emerged from sickness penalties alone. Fresh evolution outperformed seeded brains, showing prior knowledge can block new learning.

Phase 3: Evolved social instinct

Under complete sensory randomisation (no static cues to exploit), agents evolved to suppress eating near a peer and flee when the peer showed sickness. A 3.4:1 discrimination ratio emerged from social observation alone. All 6 peer sensor connections evolved negative weights: avoid, suppress, flee.

Strategies no human would write

The best Phase 1 brain disconnected its hunger sensor entirely, favouring proactive positioning over reactive eating. The best Phase 3 brain achieved social avoidance with zero hidden nodes: direct stimulus-response wiring analogous to innate alarm responses in biological organisms.

Evolution across phases

Each phase introduced new challenges. The engine discovered new capabilities at each stage.

Challenge: Find food and water to stay alive. No instructions given.

Result: 787% fitness improvement across 10 seeds. Foraging behaviour emerged from survival pressure alone.

(Chart: best and average fitness per generation)

Applications

Same engine. Different sensors and actuators. Completely different emergent behaviour.

Game AI

NPCs with connectome brains that develop genuine personality through experience. Firefighters that evolve rescue tactics. Civilians that panic differently. No behaviour trees, and every playthrough is unique.

Network Security

IDS/IPS systems that detect zero-day attacks through evolved anomaly recognition rather than signature matching. Graduated responses like throttle, quarantine, and honeypot redirect, evolved for each threat profile.

Robotics

Evolved locomotion and navigation for physical robots. Swarm coordination without central control. Graceful degradation when sensors fail, because the connectome adapts with remaining inputs.

Human Factors

Model how operators behave under fatigue, stress, and time pressure in safety-critical environments. Predict failure modes from compounding factors that single-variable models miss. Discover risk patterns before incidents occur.

The language

Declare topology, not behaviour. Evolve, don't program. Sense and act, don't think.

body.quale
body Firefighter {
  state health: 0..1 = 1.0
  state alive: bool = true
  state civilians_found: int = 0

  sensor heat:          internal(0..1)
  sensor visibility:    internal(0..1)
  sensor structural:    internal(0..1)
  sensor civilian_near: internal(0..1)

  actuator move:    directional(directions: 8)
  actuator breach:  trigger(0.5)
  actuator carry:   trigger(0.5)
  actuator backup:  trigger(0.5)
}
perception.quale
perception RescueSenses {
  -- Map internal state to brain inputs
  sensor.heat = agent.heat
  sensor.visibility = agent.visibility
  sensor.structural = agent.structural
  sensor.civilian_near = agent.civilian_near
}
evolve.quale
evolve rescue_evo {
  body:       Firefighter
  world:      BurningBuilding
  perception: RescueSenses
  action:     RescueActions
  fitness:    RescueGoals

  population:  500
  generations: 10000
  ticks:       500
  seed:        42
}
terminal
$ quale evolve rescue_evo.quale --population 500
[Gen 0001] Best: -340  Avg: -12   Rescue: 0.02
[Gen 0100] Best:  180  Avg:  45   Rescue: 0.34
[Gen 1000] Best:  310  Avg: 120   Rescue: 0.72
[Gen 5000] Best:  385  Avg: 205   Rescue: 0.88
[Gen 8847] Converged. Rescue rate: 0.91
Saved: rescue_gen8847.quale-brain

$ quale run apartment_fire.quale
Running apartment_fire... http://localhost:3000

Why "Quale"?

A quale (plural: qualia) is the subjective, conscious experience of a sensation. The "what it's like" to feel heat, see red, or taste bitter.

Quale the language asks the question at the heart of this paradigm: if behaviour emerges from topology alone, and the right wiring produces navigation, survival, fear, and cooperation, does the topology also produce experience?

We don't know. But the question is worth building toward.