> Werld drops 30 agents onto a graph with NEAT neural networks that evolve their own topology, 64 sensory channels, continuous motor effectors, and 29 heritable genome traits. Communication bandwidth, memory decay, aggression vs cooperation — all evolvable. No hardcoded behaviours, no reward functions — they could evolve in any direction.
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
Love the MIT AI Koans. Minsky's actual words to Sussman were "well, it has them, it's just that you don't know what they are." And he's right, the room isn't empty.
Werld's room has walls. The graph topology, energy mechanics, metabolic costs, seasons — those are all design choices. But they're the physics, not the behaviour. I chose the laws of nature, not what agents do with them.
Whether they cooperate or attack, broadcast or stay silent, grow complex brains or prune them down, that's selection, not me.
The agents also aren't randomly wired like Sussman's net — they start with minimal NEAT networks and evolve structure through survival. So the preconceptions are there, I just tried to make them physics rather than policy.
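For anyone unfamiliar with NEAT, here is a rough sketch of what "minimal starting network" means: every sensor wired straight to every motor, no hidden nodes, with structure added only through mutation. All names below are illustrative (this isn't Werld's actual code), and the 4-motor count is an assumption — only the 64 sensory channels come from the project description.

```python
import random

def minimal_genome(n_inputs, n_outputs):
    """Fully-connected input->output genome with random weights, no hidden nodes."""
    return [
        {"in": i, "out": n_inputs + o,
         "weight": random.uniform(-1, 1), "enabled": True}
        for i in range(n_inputs)
        for o in range(n_outputs)
    ]

def mutate_add_node(genome, new_node_id):
    """NEAT's add-node mutation: split a random enabled connection
    by disabling it and routing through a new hidden node."""
    conn = random.choice([c for c in genome if c["enabled"]])
    conn["enabled"] = False
    genome.append({"in": conn["in"], "out": new_node_id,
                   "weight": 1.0, "enabled": True})
    genome.append({"in": new_node_id, "out": conn["out"],
                   "weight": conn["weight"], "enabled": True})
    return genome

g = minimal_genome(64, 4)   # 64 sensors, 4 hypothetical motors
print(len(g))               # 256 connections to start
```

From there, every hidden node and extra link an agent carries is something mutation added and selection kept.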
Curious how you would approach removing those from an artificial sim like this?
I think it looks fun, but at the same time I really wish you had written the readme yourself instead of using an llm. My view: if you can't be bothered to write it yourself, why should I read it myself?
completely fair, and thanks for the nudge - expect an updated readme shortly
got the updated version up — and again, appreciate the nudge there!
I like the idea of evolving agents from scratch with no "learning", they just evolve their ability to survive in the environment. Maybe one day it'll be advanced enough to see life evolve.
How does the narrative story generator work?
I played around a bit with NEAT networks, and tried to create a bitcoin trading bot, but the best I could do was a +10% gain over many months. I was hoping for at least 30% each month. Oh well, I guess it doesn't all just depend on past price history.
Thanks! The story generator is pretty simple right now — every 10,000 ticks the sim snapshots population stats, brain complexity, species changes, births/deaths, and communication activity, and runs it through a template that writes a plain-English chapter.
Building out a more engaging version, and will hopefully stream it onto X again as a story — but this time without chewing through API tokens every couple of seconds.
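Roughly, the loop described above could look like this — the field names and template wording are illustrative, not the actual implementation:

```python
# Sketch of a snapshot-and-template chapter generator: every
# SNAPSHOT_INTERVAL ticks, world stats get formatted into a
# plain-English chapter. All stat keys here are hypothetical.

SNAPSHOT_INTERVAL = 10_000

CHAPTER_TEMPLATE = (
    "Chapter {chapter}: by tick {tick}, the population stood at "
    "{population} across {species} species. {births} agents were born "
    "and {deaths} died since the last chapter; average brain size "
    "reached {avg_nodes:.1f} nodes, and {signals} signals were broadcast."
)

def maybe_write_chapter(tick, stats, chapter_log):
    """Append and return a new chapter on snapshot ticks, else None."""
    if tick % SNAPSHOT_INTERVAL != 0:
        return None
    chapter = CHAPTER_TEMPLATE.format(chapter=len(chapter_log) + 1,
                                      tick=tick, **stats)
    chapter_log.append(chapter)
    return chapter
```

Swapping the template for an LLM call (as in the Gemini-narrated runs mentioned below) only changes the last step; the snapshot cadence stays the same.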
NEAT for trading is interesting — on BTC I used a kernel method that worked quite well, closer to that sub-2 Sharpe on a monthly basis.
I love emergent behaviour and storytelling. Anyone who has played city builders like SimCity or roguelikes like Dwarf Fortress knows how interesting, fun and even informative they can be.
In a world where setting them up and letting rogue agents run rampant becomes relatively low cost and fast, I think focusing on the desired outcomes, the storytelling and especially the UX for the human user is key — and maybe we can take some learnings from Will Wright on "Designing User Interfaces to Simulation Games" [1].
I'm going to be unable to do much this weekend so I can't say I'll try to check this out (yet?), but I'd be interested in your own experiences so far. Any surprises? Things you'd like to do next? What's most fun/challenging?
An actual report/writeup will probably resonate more than a repo for people who can't check it out easily or are not willing to.
- [1] https://donhopkins.medium.com/designing-user-interfaces-to-s...
Appreciate this! And yeah, the Will Wright talk is exactly what I was leaning into.
Actually posted this on X two weeks ago, hosted the werld observatory publicly, and had Gemini stream a new chapter of the story in natural language every 10,000 ticks — so it felt like reading through a David Attenborough novel of werld being born.
The most interesting thing from the last run was definitely the language and the behaviours — decoding what they were actually saying was difficult, as was noticing them group within their diverged species.
Up next, I want to get the storytelling side up and running again — I kept running out of storage, and Cloudflare was playing up as usual. Maybe get Gemini to visualise each chapter, and build an upgraded interface for the werld observatory.
If you want to check out my previous attempt at streaming the story line - it's still on my X - https://x.com/im_urav?s=21&t=6Si-w-DvNJC7RfvSz2Aw-w
this reminds me of Polyworld by Larry Yaeger, an artificial life sim where each creature has a vision system. i played around with this back in the early 2000s, though the hardware i had access to was basically insufficient to run it in any real way. it's nice to see its development has continued.
https://en.wikipedia.org/wiki/Polyworld
Haven't come across Polyworld before — just looked it up and it's super cool, especially for 1994. The vision system is an interesting design choice. Werld takes a different approach — graph topology instead of a 2D plane, and NEAT brains instead of Hebbian learning — but the core philosophy is the same.
And yeah hardware has caught up a bit since the early 2000s, though my hard drive is having a hard time. Thanks for the reference, going to dig into Yaeger's papers.
wonder if the black mirror episode was based on polyworld then?
> No hardcoded behaviours, no reward functions — they could evolve in any direction.
If they can hack their reward functions won't this always converge on some kind of agentic opium den?
that would be true if there were a reward function. compute_reward() exists in the code, but it returns 0.0.
they're only living/evolving to survive and fork (reproduce).
can't wirehead natural selection — if the brain does nothing useful, they die and their genome dies with them.
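A minimal sketch of that point, with hypothetical names — the "reward" is a constant zero, and the only selection pressure is whether an agent's energy survives the tick:

```python
# Illustrative sketch: there is no reward signal to wirehead.
# Fitness is implicit — agents whose energy hits zero are removed,
# and their genomes are removed with them.

def compute_reward(agent):
    return 0.0  # exists, but never shapes behaviour

def step(agents, tick_cost=1.0):
    """One world tick: charge upkeep, keep only agents still alive."""
    survivors = []
    for agent in agents:
        agent["energy"] -= tick_cost + agent["brain_cost"]
        if agent["energy"] > 0:
            survivors.append(agent)  # genome persists to the next tick
        # else: the agent dies — no gradient, no penalty, just absence
    return survivors
```

An agent that "hacked" its zero reward would gain nothing; the only way its genome propagates is by actually keeping energy above upkeep long enough to fork.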
It is impossible to enforce a world free of heuristics, but this is certainly very cool.
Reminds me of that Black Mirror episode with the circular QR code.
completely agree on the heuristics (someone else mentioned the MIT Koan comment about this). And yeah Plaything is a little too close to home... no QR codes from werld agents yet though. Will keep you posted.
No images in the README...
And stupid leading emojis for the heading.
This seems to start with 2 agents, and then all of their offspring die immediately. Any hints?
should be starting with 30... if you're seeing 2, that might be an older default I tried out (an Adam and Eve experiment). You can change it in the config too.
On the dying immediately thing - offspring get a fraction of the parent's energy when they fork. If the parent forks too early (low energy), the kid spawns with barely anything and can't cover its tick cost + brain metabolic cost.
That's working as intended — reproducing too early is a bad strategy and selection should punish it. But if everything dies instantly, something else might be off.
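A toy sketch of that fork mechanic — the energy fraction and field names below are assumptions for illustration, not Werld's actual values:

```python
# Sketch of energy inheritance on fork: the child receives a fraction
# of the parent's energy, so forking at low energy yields offspring
# that can't cover their first tick's upkeep.

OFFSPRING_FRACTION = 0.5  # assumed split; configurable in practice

def fork(parent):
    """Split off a child carrying a fraction of the parent's energy."""
    share = parent["energy"] * OFFSPRING_FRACTION
    parent["energy"] -= share
    return {"energy": share, "brain_cost": parent["brain_cost"]}

def survives_first_tick(agent, tick_cost=1.0):
    """Can the agent cover tick cost + brain metabolic cost?"""
    return agent["energy"] - (tick_cost + agent["brain_cost"]) > 0

rich = fork({"energy": 40.0, "brain_cost": 2.0})  # well-provisioned parent
poor = fork({"energy": 4.0, "brain_cost": 2.0})   # forked too early
print(survives_first_tick(rich), survives_first_tick(poor))  # True False
```

So the "everything dies instantly" failure mode falls out of the arithmetic: a low-energy parent's child starts below its own upkeep.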
I take that back — I was falling asleep and then suddenly had a population spike. Very good!
Arguably a powerful demonstration of why even simple creatures make use of parenting as a strategy to improve the success of their offspring.
This actually showed up in the first run — agents that invest more energy in offspring vs ones that fork cheap and fast.
The ones that survived population crashes were the ones passing down leaner, better-inherited brains. Cheap forking works when there's plenty of energy around; it falls apart in famine.
This is one or two steps removed from Thronglets.
hopefully it stays that way.... although I did start setting up a rig to host them on.