The problem is that it will have been trained on multiple open source spectrum emulators. Even "don't access the internet" isn't going to help much if it can parrot someone else's emulator verbatim just from training.
Maybe a more sensible challenge would be to describe a system that hasn't been emulated before (or, as far as you can tell from the internet, has never had emulator source released publicly) and then try it.
For fun, try obscure CPUs, giving it the same level of specification as you needed for this. Or try an imagined Z80-like CPU with the bit order of the encodings swapped and the ALU instructions reordered, and see how it manages.
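The bit-swapped-encodings variant is trivial to specify but still forces the model to apply the spec rather than pattern-match on memorized opcode tables. A minimal sketch of the remapping (the idea of a bit-reversed ISA is hypothetical, invented for this experiment):

```c
#include <assert.h>
#include <stdint.h>

/* Reverse the bit order of an 8-bit opcode, defining a hypothetical
   Z80-like ISA whose encodings are bit-swapped versions of the real ones. */
static uint8_t rev8(uint8_t b) {
    b = (uint8_t)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4)); /* swap nibbles */
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2)); /* swap bit pairs */
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1)); /* swap adjacent bits */
    return b;
}
```

For example, Z80's RET (0xC9) would become 0x93 in the bit-reversed encoding; an emulator written from the transformed spec can't reuse any memorized decode table verbatim.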
> try using obscure CPUs
I tried asking Gemini and ChatGPT, "What opcode has the value 0x3c on the Intel 8048?"
They were both wrong. The datasheet with the correct encodings is easily found online. And there are several correct open source emulators, eg MAME.
> try using obscure CPUs
Better still invent a CPU instruction set, and get it to write an emulator for that instruction set in C.
Then invent a C-like HLL and get it to write a compiler from your HLL to your instruction set.
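The first half of that exercise is small enough to sketch. Here is a minimal fetch-decode-execute loop for an invented accumulator machine (all opcodes and semantics here are made up for illustration, not any real ISA):

```c
#include <stdint.h>

/* Hypothetical toy ISA: 1-byte opcode, optional 1-byte operand.
   Every name and encoding below is invented. */
enum { OP_HALT = 0x00, OP_LDI = 0x10, OP_ADD = 0x20, OP_ST = 0x30 };

typedef struct {
    uint8_t a;          /* accumulator */
    uint8_t mem[256];   /* unified code/data memory */
    uint8_t pc;
} toy_cpu;

/* Fetch-decode-execute until HALT (or an unknown opcode). */
static void toy_run(toy_cpu *c) {
    for (;;) {
        uint8_t op = c->mem[c->pc++];
        switch (op) {
        case OP_HALT: return;
        case OP_LDI:  c->a = c->mem[c->pc++]; break;           /* A = imm */
        case OP_ADD:  c->a += c->mem[c->mem[c->pc++]]; break;  /* A += mem[addr] */
        case OP_ST:   c->mem[c->mem[c->pc++]] = c->a; break;   /* mem[addr] = A */
        default:      return; /* treat unknown opcodes as halt */
        }
    }
}
```

The point of handing the model a spec like this, rather than the Z80, is that no training-set emulator can be parroted: correctness is only checkable against the invented spec itself.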
If you did that, comments would be "it's just a bit shuffle of the encodings, of course it can manage that, but how about we do totally random encodings..."
That's true, but I still think it'd be an interesting experiment to see how much it actually follows the specification vs how much it hallucinates by plagiarising from existing code.
Probably bonus points for telling it that you're emulating the well-known ZX Spectrum and then describing something entirely different, to see whether it treats that name as an arbitrary label or whether it significantly influences its code generation.
But you're right, of course: instruction decoding is a small enough portion of a CPU that the differences would be quite limited if all the other details remained the same. That's why a completely hypothetical system is better.
There isn't any attempt to falsify the "clean room" claim in the article. A rational approach would be to not provide any documents about the Z80 and the Spectrum, just ask it to one-shot an emulator, and compare the outputs...
If the one-shot output resembles anything working (and I am betting it will), then obviously this isn't clean room at all.
You didn't read the full article.
I had Claude make a quad-core 32-bit Z80 just for fun.
<https://pastebin.com/Z2b82LHG>
So what you're saying is that it's not just the machine-readable documentation built over decades of the officially undocumented behavior of Z80 opcodes—often provided under restrictive licenses—it's also the "known techniques and patterns" of emulator code—often provided under restrictive licenses.
You use "clean room" everywhere in the article and "clear room" in the title. Is this on purpose?
Literally nothing about it is either, either.
Yes, for a moment I thought "clear room" might mean something else for LLMs.
Essentially they can't do clean room anything!
You might as well hire the entire former mid-level tier of a business's programming team and claim it's clean room work.
Windows NT is not VMS! Trust me!
Had to Google this but I do love a deep cut reference!
https://www.itprotoday.com/server-virtualization/windows-nt-...
The "clear room" framing is the most interesting part of this to me. It's essentially using the AI as a way to rebuild something from spec without looking at existing implementations, which has legal implications for emulation but also tells you something about the model's capabilities.
If Claude can produce a working Z80 emulator from documentation alone, it means the model has internalized enough about processor architecture, instruction sets, and timing behavior to reconstruct a functional implementation. That's qualitatively different from pattern-matching on existing emulator code.
From antirez's writing what comes through is that the value isn't "AI wrote my code" — it's that AI made it practical to attempt a project that would have been a multi-week time investment as a weekend experiment. The bottleneck for many side projects isn't knowledge or skill, it's calendar time. Compressing that changes what's worth attempting.
Except the AI was trained - by looking at the implementations? Or do you think Claude never saw the implementations in its training set?
Because that is exceptionally unlikely.
No Carmack or Stallman. Just the right person at the right time.
In `spectrum.c`:
> Address bits for pixel (x, y):
> 010 Y7 Y6 Y2 Y1 Y0 | Y5 Y4 Y3 X7 X6 X5 X4 X3
Which is wrong: it's X4-X0. The comment does not match the code below it.
> static inline uint16_t zx_pixel_addr(int y, int col) {
It computes a pixel address with 0x4000 added to it, only to always subtract 0x4000 from it later. The ZX apparently has ROM at 0x0000..0x3FFF, which necessitates the offset in general, but not in this case in particular.
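For reference, the layout the parent comment describes, with X4-X0 (the byte column) in the low five address bits, can be sketched like this (a minimal version, not the article's code):

```c
#include <stdint.h>

/* ZX Spectrum bitmap address for pixel row y (0-191) and byte column
   col (0-31).  Bit layout: 010 Y7 Y6 Y2 Y1 Y0 Y5 Y4 Y3 X4 X3 X2 X1 X0
   -- the column occupies the low five bits, not X7-X3. */
static inline uint16_t zx_pixel_addr(int y, int col) {
    return (uint16_t)(0x4000
        | ((y & 0xC0) << 5)    /* Y7 Y6    -> A12 A11 */
        | ((y & 0x07) << 8)    /* Y2 Y1 Y0 -> A10 A9 A8 */
        | ((y & 0x38) << 2)    /* Y5 Y4 Y3 -> A7 A6 A5 */
        | (col & 0x1F));       /* X4..X0   -> A4..A0 */
}
```

With this interleave, row 0 starts at 0x4000, row 1 at 0x4100, row 8 at 0x4020, and the last bitmap byte (y=191, col=31) lands at 0x57FF.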
This and the other inline function next to it for attributes are only ever used once.
> During the 192 display scanlines, the ULA fetches screen data for 128 T-states per line.
Yep.. but..
> Instead of a 69,888-byte lookup table
How does that follow? The description forgets to mention that a frame is 192 display scanlines plus 64 + 56 border/retrace lines, each of 224 T-states: (192 + 64 + 56) × 224 = 69,888.
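Spelled out as code, the standard 48K frame arithmetic (the line counts here are the commonly documented 48K Spectrum timings, not taken from the article) is:

```c
/* Standard 48K Spectrum frame: 64 upper border/retrace lines,
   192 display lines, 56 lower border lines, 224 T-states each. */
enum {
    TOP_BORDER_LINES   = 64,
    DISPLAY_LINES      = 192,
    BOTTOM_BORDER_LINES = 56,
    TSTATES_PER_LINE   = 224,
    TSTATES_PER_FRAME  =
        (TOP_BORDER_LINES + DISPLAY_LINES + BOTTOM_BORDER_LINES)
        * TSTATES_PER_LINE   /* 312 * 224 = 69,888 */
};
```

So the 69,888 figure is the whole frame, borders included, which is why it doesn't follow from the 192-scanline description alone.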
I'm bored. This is a pretty muddy implementation. It reminds me of the way children play with Duplo blocks.
Did the AI slop write the title too? Clear Room?
> I believe automatic programming to be already super-human, not in the sense it is currently capable of producing code that humans can’t produce, but in the concurrent usage of different programming languages, system programming techniques, DSP stuff, operating system tricks, math, and everything needed to reach the result in the most immediate way.
As HN likes to say, only an amateur vibe-coder could believe this.
It is really quite something how many people who have earned credibility designing well-loved tools seem to be true believers in the AI codswallop.