But most "apps" are just webviews running overcomplicated websites in them, many of which are using all the crazy features that the GP post wants to strip out.
And, I don't have to run a binary to try your product. The web has a lot of flaws, but it's a good way to deliver properly sandboxed applications with low hassle on the part of the user. I've built my fair share of native vs web apps, and I vastly prefer working on web apps. As a user, I vastly prefer web apps for most things. Not all things, but most. No, I don't want to install your crappy app on my computer and risk you doing something irresponsible. I'll keep you sandboxed in a browser tab that I can easily "uninstall" by closing.
I have several web apps installed over the native alternatives. Discord is the most prominent one; I've found their native app has been getting shittier by the day over recent months, while the web app remains as snappy as any Safari page. Plus I can run an adblocker and other extensions in the web app which improve the experience.
On a phone at the moment so I can't try it out, but in regards to this "stricter mode" it says global variables must be declared with var. I can't tell if that means that's just the only way to declare a global or if not declaring var makes it scoped in this mode. Based on not finding anything skimming through the examples, I assume the former?
It also talks about the global object not being a place to add properties. So where you might do `window.foo = ...` or `globalThis.foo = ...` to make something from the local context into a global, in this dialect I guess you would have to reserve any global you wanted to set with a `var` and then set it by reference, e.g.:
    // global, initialized by SomeConstructor
    var fooInstance;

    class SomeConstructor {
        constructor(/* ... */) {
            fooInstance = this;
        }
        static getInstance(/* ... */) {
            if (fooInstance != null) return fooInstance;
            return new SomeConstructor(/* ... */);
        }
    }
People talk about how productive Fabrice Bellard is, but I don't think anyone appreciates just how productive he is.
Here's the commit history for this project
b700a4d (2025-12-22T1420) - Creates an empty project with an MIT license
295a36b (2025-12-22T1432) - Implements the JavaScript engine, the C API, the REPL, and all documentation
He went from zero to a complete JS implementation in just 12 minutes!
I couldn't do that even if you gave me twice as much time.
Okay, but seriously, this is super cool, and I continue to be amazed by Fabrice. I honestly do think it would be interesting to do an analysis of a day or week of Fabrice's commits to see if there's something about his approach that others can apply besides just being a hardworking genius.
That doesn't mean anything. I quite often start with writing a proof-of-concept, and only initialize the git repository when I'm confident the POC will actually lead to something useful. Common sense says that those files already existed at the time of the first commit.
Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.
I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a compiled Python module, and compiled to WebAssembly and called from Deno, Node.js, Pyodide, and Wasmtime: https://github.com/simonw/research/blob/main/mquickjs-sandbo...
Down to -4. Is this generic LLM-dislike, or a reaction to perceived over-self-promotion, or something else?
No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
This particular case is a very solid use-case for that approach though. There are a ton of important questions to answer: can it run in WebAssembly? What's the difference to regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?
Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.
I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.
And again you're linking to your site. Maybe try pasting the few relevant sentences instead of constantly pushing your content in almost every comment. That's what people find annoying. Maybe link to other people's stuff more, or just write what you think here on HN.
If someone wants to read your blog, they will, they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.
Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.
A lot of people have told you this in many threads, but it seems you still don’t get it.
Forget about the AI bit. Do you think it's interesting that MicroQuickJS can be used from Python via FFI or as a compiled module, and can also be compiled to WebAssembly and called from Node.js and Deno and from Pyodide running in a browser?
... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
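(For context on "limiting expensive regular expression evaluation": the classic failure mode is catastrophic backtracking, where a nested quantifier makes a backtracking regex engine do exponential work on a non-matching input. The snippet below is a generic illustration of that failure mode, not code from MicroQuickJS or from the linked prototypes; the console.log call stands in for whatever output facility the host provides.)

    // Classic catastrophic-backtracking demo: the engine tries an exponential
    // number of ways to split the run of 'a's before concluding there is no match.
    var evil = /^(a+)+$/;
    var input = new Array(31).join("a") + "!";   // 30 'a's followed by a char that forces failure

    var t0 = Date.now();
    var matched = evil.test(input);              // can run for minutes on a backtracking engine
    console.log("matched=" + matched + " elapsed_ms=" + (Date.now() - t0));

A sandbox that enforces an execution-time budget can abort this kind of evaluation instead of letting it hang the host.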
It's interesting but I don't think it belongs as a comment under this post. I can use LLMs to create something tangential for each project posted on HN, and so can everyone else. If we all started doing this then the comment section will quickly become useless and not on point.
Off-topic, but I went to your website and saw that you created hackernews-mute; recently I was commenting (ranting, really) about how someone ought to create such an extension. So kudos for having created it earlier on.
Have a nice day! Awesome stuff; I'll keep an eye on your blog. Does your blog/website use Mataroa by any chance? There are some similarities, even if both are simply minimalist. Overall very nice!
Thank you! I don't use Mataroa but I can see the similarities. My current blog setup is a Python script that parses content written in markdown and emits HTML. The CSS is inspired by the other minimal blogs I see here.
Thanks a lot for checking out my blog/project. Have a great day!
But there is signal in what people are inspired to do upon seeing a new project; why not simply evaluate the interestingness of these sorts of mashups on their own terms? It actually feels very "hacker"-y to go out and show people possibilities like this. I have no particular comment on how interesting the derivative projects are in this case, but I have a feeling that if his original post had been framed more like "I think it's super interesting how easy it is to use via FFI on various runtimes X & Y (oh, in the spirit of full transparency: I used AI to help me, and you can see more details at <link>)", it would have landed better. Especially because I think everyone who peruses HN with some regularity knows of Simon's work in at least some capacity, and I am, speaking only for myself, essentially always interested in any sort of project he embarks on, especially his LLM-assisted experiments and such. But hey, at the end of the day, all of this value judgment is realized as plainly as possible with +1 / -1 (up- and down-vote), and I guess it just is what it is. If number bad, HN no like. Shrug.
I agree that there is signal, and that phrasing his original post as you pointed out would have been better.
My issue is that the cost, in terms of time, of these experiments has really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they spent a reasonable amount of time on it, and thus cared about the subject. With LLMs, this is not the case.
Usually I watch your stuff very closely (and positively) because you're pushing the edges of how LLMs can be useful for code (and are a lot more honest/forthright than most enthusiasts about it Going Horribly Wrong and how much work you need to do to keep on top of it). This one... looks like a crossbar of random things that don't seem like things anyone would actually want to do? Mentioning the sandboxing bit in the first post would have helped a lot, or anything that said why those particular modes are interesting.
I had been in a similar boat, and here is some software that I recommend or would discuss with you.
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing), and they told me that, as far as they know, it has the lowest latency of any sandbox, at only a minor cost in performance.
Btw, for sandboxing, KVM itself feels good too; I had discussed it with them in their Discord server when they mentioned they were working on a minimal KVM server, which has since been open sourced (https://github.com/varnish/tinykvm).
Honestly Simon, Deno hosting / the way Deno works is another interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python, since Python fans could appreciate it.
I will try to look more into your GitHub repository once I have more free time.
Simon, although I find it interesting, and I respect you in this field, I still feel the reason people call out AI usage or downvote in this case is that, in my honest opinion, it would be more interesting to see people actually write the code, and more so maintain it, and build a whole community/GitHub project around a MicroQuickJS WASM build itself.
I read this post of yours, https://simonwillison.net/2025/Dec/18/code-proven-to-work/ , and although there is a point to be made that what you are doing isn't a job, and I myself create code prototypes using AI, long term (in my opinion) what really matters is the maintenance and the claim of responsibility (as your article says, in a way: that I can pinpoint a person who's responsible for the code working).
If I found a bug right now, I wouldn't blame it on you but on the AI, and I have a varying amount of trust in it.
My opinion on the matter is that AI can be considered good for prototyping, but long term it definitely isn't, and I am sure you share a similar viewpoint.
Perhaps you could write a blog post about the nuance of AI? I imagine a lot of people share a similar AI policy where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't think there is much long-term incentive not to use AI unless someone realizes the downsides, because using AI just feels so lucrative, especially for youngsters.
I am 17 years old and am going into a decent college (with, might I add, immense competition to begin with). I have passion for such topics, only to get dissuaded because the benchmark of solving assignments and so on is now done by AI, and the signal from universities themselves is shrinking. It hasn't shrunk to the point that universities are irrelevant; rather, you still need a university to try to get a job, but companies have frozen hiring, which some attribute to LLMs.
If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (sort of like PewDiePie, to be honest), without being in the industry.
I am not sure what the future holds for me (or for any of us, for that matter), but I guess the point I am trying to make is that there is nuance to the discussion from both sides.
Your tireless experimenting (and especially documenting) is valuable and I love to see all of it. The avant garde nature of your recent work will draw the occasional flurry of disdain from more jaded types, but I doubt many HN regulars would think you had anything but good intentions! Guess I am basically just saying.. keep it up.
I didn't downvote you. You're one of "the AI guys" to me on HN. The content of your post is fine, too, but, even if it was sketch, I'd've given you the benefit of the doubt.
No, I mean post it as an HN post, and if anybody cares to see it, they'll upvote that and comment in there. That, instead of piggybacking on other posts to get visibility.
What is the purpose of compiling this to WebAssembly? What WebAssembly runtimes are there where there is not already an easily accessible (substantially faster) JS execution environment? I know wasmtime exists and is not tied to a JS engine like basically every other WebAssembly implementation, but projects that use wasmtime are not restricted from also depending on V8 or JSC. Usually WebAssembly is used to provide sandboxing, something a JS execution environment is already designed to provide, and is only used when the code that requires sandboxing is native code, not JavaScript. It sounds like a good way to waste a lot of performance for some additional sandboxing, but I can't imagine why you would ever design a system that way if you could choose a different (already available and higher-performance) sandbox.
I want to build features - both client- and server-side - where users can provide JavaScript code that I then execute safely.
Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.
Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
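To make that concrete, here is a rough sketch of the general shape of such an embedding from Node.js or Deno, using the standard WebAssembly JS API. The export names (mqjs_alloc, mqjs_eval) and the calling convention are hypothetical placeholders for whatever the real wrapper exposes; this is not MicroQuickJS's actual API.

    // Sketch only: assumes a hypothetical mquickjs.wasm build that exports
    //   mqjs_alloc(len)     -> pointer into linear memory   (illustrative name)
    //   mqjs_eval(ptr, len) -> status code                   (illustrative name)
    //   memory              -> the module's linear memory
    import { readFile } from "node:fs/promises";

    const bytes = await readFile("mquickjs.wasm");
    const { instance } = await WebAssembly.instantiate(bytes, {});
    const { memory, mqjs_alloc, mqjs_eval } = instance.exports;

    function evalUntrusted(source) {
      // Copy the untrusted source into the module's linear memory, then let
      // the interpreter running inside the wasm sandbox evaluate it.
      const encoded = new TextEncoder().encode(source);
      const ptr = mqjs_alloc(encoded.length);
      new Uint8Array(memory.buffer, ptr, encoded.length).set(encoded);
      return mqjs_eval(ptr, encoded.length);
    }

    console.log(evalUntrusted("1 + 2"));

The point is that the untrusted JavaScript never touches the host engine directly; it only ever executes inside the interpreter, which in turn only sees the wasm linear memory it was given.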
If this had been available in 2010, Redis scripting would have been JavaScript and not Lua. Lua was chosen based on the implementation requirements, not on the language ones... (small, fast, ANSI-C). I appreciate certain ideas in Lua, and people love it, but I was never able to like Lua, because it departs from a more Algol-like syntax and semantics without good reasons, for my taste. This creates friction for newcomers. I love friction when it opens new useful ideas and abstractions that are worth it: if you learn Smalltalk or FORTH and for some time you are lost, that's part of how those languages are different. But I think for Lua this is not true enough: it feels like it departs from what people know without good reasons.
> it feels like it departs from what people know without good reasons.
Lua was first released in 1993. I think that it's pretty conventional for the time, though yeah it did not follow Algol syntax but Pascal's and Ada's (which were more popular in Brazil at the time than C, which is why that is the case)!
Ruby, which appeared just 2 years later, departs a lot more, arguably without good reasons either? Perl, which is 5 years older and was very popular at the time, is much more "different" than Lua from what we now consider mainstream.
We had a lot of problems embedding Ruby in a multithreaded C program, as the garbage collector tries to scan memory between the threads (more details here: https://gitlab.com/nbdkit/nbdkit/-/commit/7364cbaae809b5ffb6... )
Perl, Python, OCaml, Lua and Rust were all fine (Rust wasn't around in 2010 of course).
It wouldn't fix the issue of semantics, but "language skins"[1][2] are an underexplored area of programming language development.
People go through all this effort to separate parsing and lexing, but never exploit the ability to just plug in a different lexer that allows for e.g. "{" and "}" tokens instead of "then" and "end", or vice versa.
1. <https://hn.algolia.com/?type=comment&prefix=true&query=cxr%2...>
2. <https://old.reddit.com/r/Oberon/comments/1pcmw8n/is_this_sac...>
Not "never exploit"; Reason and BuckleScript are examples of different "language skins" for OCaml.
The problem with "skins" is that they create variety where people strive for uniformity to lower the cognitive load. OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
> OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
That's one of my hopes for the future of the industry: people will be able to just choose the code style and even syntax family (which you're calling skin) they prefer when editing code, and it will be saved in whatever is the "default" for the language (or even something like the Unison Language: store the AST directly which allows cool stuff like de-duplicating definitions and content-addressable code - an idea I first found out on the amazing talk by Joe Armstrong, "The mess we're in" [1]).
Rust, in particular, would perhaps benefit a lot given how a lot of people hate its syntax... but also Lua for people who just can't stand the Pascal-like syntax and really need their C-like braces to be happy.
[1] https://www.youtube.com/watch?v=lKXe3HUG2l4
People fight about tab sizes all the time though.
That's precisely the point of using tabs for indentation: you don't need to fight over it, because it's a local display preference that does not affect the source code at all, so everyone can just configure whatever they prefer locally without affecting other people.
The idea of "skins" is apparently to push that even further by abstracting the concrete syntax.
I read this comment, about to snap back with an anecdote how I as a 13 year old was able to learn Lua quite easily, and then I stopped myself because that wasn't productive, then pondered what antirez might think of this comment, and then I realized that antirez wrote it.
Out of interest, was Tcl considered? It's the original embeddable language.
Not to mention the 1-based indexing sin. JavaScript has a lot of WTFs but they got that right at least.
Does it count as 0-indexing when your 0 is a floating point number?
If you can't deal with off-by-one errors, you're not a programmer.
Except for Date.
This engine restricts JS in all of the ways I wished I could restrict the language back when I was working on JSC.
You can’t restrict JS that way on the web because of compatibility. But I totally buy that restricting it this way for embedded systems will result in something that sparks joy
He already has a JS engine which doesn't impose these restrictions
Yeah QuickJS is great.
I bet MQJS will also be very popular. Quite impressive that bro is going to have two JS engines to brag about in addition to a lot of other very useful things!
Since you’re on the topic, what ever happened to the multi threading stuff you were doing on JSC? Did it stop when you left Apple? Is the code still in JSC or did it get taken out?
I never really started on it other than writing up how to do it
Well, as Jeff Atwood famously said [0], "any application that can be written in JavaScript, will eventually be written in JavaScript". I guess that applies to embedded systems too
[0] https://en.wikipedia.org/wiki/Jeff_Atwood
It's unfortunate that he uploaded this without notable commit history; it would be interesting to see how long it takes a programmer of his caliber to bring up a project like this.
That said, judging by the license file this was based on QuickJS anyway, making it a moot comparison.
It does say "public repository of..." implying there's a non-public one with real history. Not sure why not upload the main one though.
Maybe he just oneshotted it
Maybe claude code uses bellard as agent
Claude is really Bellard sitting in his kitchen, sipping coffee, casually replying to code requests while getting ready for his day.
This explains a lot. Opus 4.5 gives you access to Bellard after his second cup of coffee.
A sort of reverse code golf where he doesn't care what he sends as all that code will never touch his prod code bases.
I wonder if this could become the most lightweight way for yt-dlp to solve YouTube Javascript challenges.
https://github.com/yt-dlp/yt-dlp/wiki/EJS
Not likely:
> It only supports a subset of Javascript close to ES5 [...]
I have not read the code of the solver, but solving YouTube's JS challenge is so demanding that the team behind yt-dlp ditched their JS emulator written in Python.
Likely not, given that it only implements ES5.
Fabrice Bellard is widely considered one of the most productive and versatile programmers alive:
- FFmpeg: https://bellard.org
- QEMU: https://bellard.org/qemu/
- JSLinux: https://bellard.org/jslinux/
- TCC: https://bellard.org/tcc/
- QuickJS: https://bellard.org/quickjs/
Legendary.
For all the praise he gets here, few seem interested in his methods: writing complete programs, based on robust computer science, with minimal dependencies and tooling.
When I first read the source for his original QuickJS implementation I was amazed to discover he created the entirety of JavaScript in a single xxx thousand line C file (more or less).
That was a sort of defining moment in my personal coding; a lot of my websites and apps are now single file source wherever possible/practical.
I honestly think the single file thing is best reserved for C, given how bad the language support for modularity is.
I've had the inverse experience dealing with a many thousand line "core.php" file way back in the day helping debug an expressionengine site (back in the php 5.2ish days) and it was awful.
Unless you have an editor which can create short links in a hierarchical tree from semantic comments to let you organize your thoughts, digging through thousands of lines of code all in the same scope can be exceptionally painful.
C has no problems splitting programs in N files, to be honest.
The reason FB (and myself, for what it is worth) often write single-file large programs (Redis was split after N years of being a single file) is because with enough programming experience you know one very simple thing: complexity is not about how many files you have, but about the internal structure and conceptually separated module boundaries.
At some point you mainly split for compilation time and to orient yourself better, instead of having to seek around a very large mega-file. Pointing the finger at some program as well written because it's a single file strongly correlates with not being a very expert programmer.
This is like Feynman's method for solving hard scientific problems: write down the question, think very hard, write down the answer.
It doesn't necessarily translate to people who are less brilliant.
Yeah, "Step 1: draw 2 circles. Step 2: draw the rest of the fucking owl"
I agree: he loves to "roll your own" a lot. Re: minimal dependencies - the codebase has a software FP implementation including printing and parsing, and some home-rolled math routines for trigonometric and other transcendental functions.
Honestly, it's a reminder that, for the time it takes, it's incredibly fun to build from scratch and understand through-and-through your own system.
Although you have to take detours from, say, writing a bytecode VM, to writing FP printing and parsing routines...
Because he chose the hardest path. Difficult problems, no shortcuts, ambitious, taking time to complete. Our environment in general is the opposite of that.
He's also won the International Obfuscated C Code Contest 3 times.
https://www.ioccc.org/authors.html#Fabrice_Bellard
He's also built a closed-source LLM inference engine, which he's been maintaining since the GPT-2 days: https://bellard.org/ts_server/ and https://textsynth.com/
I used to play around with Textsynth, but not being OSS killed the appeal for me once llama.cpp came around.
Don't forget his LZEXE from the good old DOS days which was an excellent piece of work at the time.
Self-decompressing executables felt like magic to me at the time. Fantastic work, overall.
Don't forget his LLM based text compression software that won awards.
Guy is a genius. I hope he tries Rust someday
Fabrice, if you're reading this, please consider replacing Rust instead with your own memory safe language.
The design intent of Rust is a powerful idea, and Rust is the best of its class, but the language itself is under-specified[1] which prevents basic, provably-correct optimizations[0]. At a technical level, Rust could be amended to address these problems, but at a social level, there are now too many people who can block the change, and there's a growing body of backwards compatibility to preserve. This leads reasonable people to give up on Rust and use something else[0], which compounds situations like [2] where projects that need it drop it because it's hard to find people to work on it.
Having written low-level high-performance programs, Fabrice Bellard has the experience to write a memory safe language that allows hardware control. And he has the faculties to assess design changes without tying them up in committee. I covet his attentions in this space.
[0]: https://databento.com/blog/why-we-didnt-rewrite-our-feed-han...
[1]: https://blog.polybdenum.com/2024/06/07/the-inconceivable-typ...
[2]: https://daniel.haxx.se/blog/2024/12/21/dropping-hyper/
I think Rust might trigger a new generation of languages that are developed with the hindsight of Rust.
The principle of zero-cost abstractions avoids a slow slide into compromising on abstraction cost, but I think there could be small-cost abstractions that would make for a more pragmatic language. Having Rust to point at, to show what performance you could be achieving, would help avoid bloating abstractions.
peak hacker news comment lol
Whenever someone says there's no such thing as a 10x programmer, I point them to Fabrice and they usually change their mind.
Perhaps closer to 100x actually.
For real. The GOAT is at it again!
The first two links are broken.
The ffmpeg link was changed apparently, but the QEMU link still works; it just redirects to the QEMU homepage.
For all the praise he's receiving, I think his web design skills have gone overlooked. bellard.org is fast, responsive and presents information clearly. Actually I think the fancier the website, the shittier the software. Examples: Tarsnap - minimal website, brilliant software. Discord - Whitespacey, animation-heavy abomination of a website. Software: hundreds of MB of JS slop, government wiretap+botnet for degenerates.
The math checks out.
I'm not an embedded systems guy (besides using esp32 boards) so this might be a dumb question but does something like this open up the possibility of programming an esp32/arduino board with Javascript, like Micro/Circuit Python?
That's been possible with Moddable/Kinoma's XS engine, which is standards compliant with ES6 and beyond.
<https://www.moddable.com/faq#comparison>
If you take a look at the MicroQuickJS README, you can see that it's not a full implementation of even ES5, and it's incompatible in several ways.
Just being able to run JS also isn't going to automatically give you any bindings for the environment.
There are already libraries/frameworks that have supported this:
* espruino (https://www.espruino.com/)
* elk (https://github.com/cesanta/elk)
* DeviceScript (Microsoft Research's now defunct effort, https://github.com/microsoft/devicescript)
Sort of related: About ten years ago there was a device called the Tessel by Technical Machine which you programmed with Javascript, npm, the whole nine yards. It was pretty clever - the javascript got transpiled to Lua VM bytecode and ran in the Lua VM on the device (a Cortex M3 I believe). I recently had Claude rewrite their old Node 0.8 CLI tools in Rust because I wasn't inclined to do the javascript archeology needed to get the old tools up and running. Of course then I put the Tessel back in its drawer, but fun nonetheless.
Yes. The key enabling feature is a lack of malloc()
Timing really is everything for making the frontpage, I posted this last night and it got no traction.
Some other guy tried it as well after you, also no luck.
One strategy is to wait for the US to wake up, then post during their morning.
Another strategy is to post the same thing periodically until there is a response.
I easily managed to build QuickJS to WebAssembly for running in https://exaequOS.com . So I need to do the same for MicroQuickJS!
I'm curious what practical purpose you could have for running a js execution engine in an environment that already contains a (substantially faster) js execution engine? Is it just for the joy of doing it (if so good for you, absolutely nothing wrong with that).
WebAssembly also runs in places other than the web, where there isn't a JavaScript interpreter at hand. It'd be nice to have a fast JavaScript engine that integrates inside the WebAssembly sandbox, and can call and be called by other languages targeting WebAssembly.
That way, programs that embed WebAssembly in order to be scriptable can let people use their choice of languages, including JavaScript.
It allows, for example, creating bindings as I did for the raylib graphics library. exaequOS can run any program that can be built to WebAssembly. It will soon support WASI p1 and p2, so many programming languages will be possible for creating programs targeting exaequOS.
Is there not a way to use the browser's native JS execution environment for that? You lose a non-trivial amount of performance running JS inside QuickJS inside of wasm vs the browser's native JS engine. I wouldn't be surprised if that's 10 or even 20 times slower, and of course it requires loading more code into the browser (slower startup, more RAM usage). Maybe you don't care about that, but all of that is pretty orthogonal to the environments an embedded engine like this is intended for.
If there were a software engineering hall of fame, I'd nominate Fabrice.
Bellard is the most genius programmer to ever exist, and the least known compared to other pseudo-stars.
A rare occasion where someone gained legendary status based purely on his work; I don't think I ever saw even a written interview with the guy.
He is a private man that does not like the spotlight IIUC. He refuses most requests for interviews, but they do exist.
https://www.macplus.net/depeche-82364-interview-le-createur-...
https://www.mo4tech.com/fabrice-bellard-one-man-is-worth-a-t... (few quotes, more like a profile piece)
He keeps a low profile and lets his work speak for itself.
He really is brilliant.
There is! ACM grants several awards for scientists and more.
One such award is the Turing Award [1], given "for contributions of lasting and major technical importance to computer science."
[1] https://en.wikipedia.org/wiki/Turing_Award
Possibly more relevant is the "ACM Software System Award": https://en.wikipedia.org/w/index.php?title=ACM_Software_Syst...
Linux and Torvalds haven't gotten one?
The Turing Award is given for breakthroughs in computer science, not for "most productive programmer of all time", and it wouldn't be appropriate for Bellard.
His consistency and craftsmanship is amazing.
Being an engineer and coding at this stage/level is just remarkable. Sadly this tradecraft is missing in most (big?) companies, as you get promoted away into oblivion.
If there were some form of "developed contributions to computing" award, his name is definitely up there. I think there could be a need for such an award - for people who reliably have created the foundations of modern computing. Otherwise it's almost always things from an academic context, which can be a little too abstract.
Between ffmpeg and qemu, I always think of https://xkcd.com/2347/ when I see Fabrice's work. Especially since ffmpeg provides the backbone of almost all video streaming systems today.
Except that ffmpeg and qemu are not maintained by Fabrice. He's one of the greatest programmers but he's not maintaining the internet.
Anyone know how this compares to Espruino? The target memory footprint is in the same range, at least. (I know very little about the embedded js space, I just use shellyplugs and have them programmed to talk to BLE lightswitches using some really basic Espruino Javascript.)
This guy is incredible. Lzexe, Qemu, TinyCC to name just a few gems.
Love it! Needing only 10K of RAM, it looks like a much better alternative to CircuitPython (which can squeeze into 32K).
this would be a killer replacement for micro/circuitpython for embedded devices, assuming there's an elegant TS->MQJS transpile
It looks like if you write the acceptable MQJS subset of JS+types, then run your code through a checker+stripper that doesn't try to inject implementations of TS's enums and such it should just work?
When reading through the projects list of JS restrictions for "stricter" mode, I was expecting to see that it would limit many different JS concepts. But in fact none of the things which are impossible in this subset are things I would do in the course of normal programming anyway. I think all of the JS code I've written over the past few years would work out of the box here.
I was surprised by this one that only showed up lower in the document:
- Date: only Date.now() is supported. [0]
I certainly understand not shipping the JS date library, especially in an embedded environment, both for code-size and practicality reasons (it's not a great date library), but that would be an issue in many projects (even if you don't use it, libraries you use almost certainly do).
https://github.com/bellard/mquickjs/blob/main/README.md#:~:t...
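For what it's worth, Date.now() plus integer arithmetic is enough to recover a calendar date on-device if you really need one. Below is a minimal ES5-style sketch using the standard days-to-civil-date conversion for the proleptic Gregorian calendar; nothing in it is specific to MicroQuickJS.

    // Convert a Date.now() timestamp to a UTC year/month/day without the Date library.
    function civilFromDays(z) {
      z += 719468;                                   // shift epoch from 1970-01-01 to 0000-03-01
      var era = Math.floor(z / 146097);
      var doe = z - era * 146097;                    // day of era [0, 146096]
      var yoe = Math.floor((doe - Math.floor(doe / 1460) + Math.floor(doe / 36524)
                - Math.floor(doe / 146096)) / 365);  // year of era [0, 399]
      var y = yoe + era * 400;
      var doy = doe - (365 * yoe + Math.floor(yoe / 4) - Math.floor(yoe / 100)); // day of year
      var mp = Math.floor((5 * doy + 2) / 153);      // month index, March = 0
      var d = doy - Math.floor((153 * mp + 2) / 5) + 1;
      var m = mp < 10 ? mp + 3 : mp - 9;             // civil month [1, 12]
      return { year: y + (m <= 2 ? 1 : 0), month: m, day: d };
    }

    var msSinceEpoch = Date.now();
    var utcDate = civilFromDays(Math.floor(msSinceEpoch / 86400000));
    // e.g. { year: 2025, month: 12, day: 22 } for a timestamp from that day (UTC)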
I wish for this new year we reboot the Web with a super light standard and an accompanying ecosystem. That would make my year.
This would never happen because there's zero incentive to do this.
Browsers are complex because they solve a complex problem: running arbitrary applications in a secure manner across a wide range of platforms. So any "simple" browser you can come up with just won't work in the real world (yes, that means being compatible with websites that normal people use).
I have to disagree. AMP showed that even Google had an internal conflict with the results of WHATWG. It's naturally quite hard to reach agreement on a subset when many parties will prefer to fall back to everything, but there are situations like the first iPhone, ebooks, TV browsing, etc., where normal people buy simpler things, and groups that use the simpler subset achieve more in total than those stuck with only the complex format.
(There are even a lot of developers who will inherently drop a feature as soon as you can get 10% of users to bring its stats on caniuse.com down below ~90%.)
> that means being compatible with websites that normal people use
No, new adhering websites would emerge and word of mouth would do the rest: normal people would see this fast nerd-web and want rid of their bloated day-to-day monster of a web life.
One can still hope..
Just like all those normal people want rid of their bloated day-to-day monster of a web and therefore go and do something like, say, install an ad blocker?
Oh right. 99% of people don't do even that, much less switch their life over to entirely new websites.
> 99% of people
In 2025, depending on the study, it is said that 31.5~42.7% of internet users now block ads. Nearly one-third of Americans (32.2%) use ad blockers, with desktop leading at 37%.
Wow. That's way higher than I thought. Huh!
It actually gives me hope that we may find a way out of the enshittification of the web.
I don't care to run an ad blocker because sites are still bloated and slow.
I think both wearables and AI assistants could be an incentive on one hand, also pushing towards a more HATEOAS web. However, I guess we haven't really figured out how to replace ad revenue as the primary incentive to make things as complex as possible.
Zero incentive seems a little strong,
Lots of comments talking about how existing browsers can already do this, but the big benefit that current browsers can't give you is the sheer level of speed and efficiency that a highly restricted "lite web" browser could achieve, especially if the restrictions are made with efficiency in mind.
The embedded use case is obvious, but it'd also be excellent for things like documentation — with such a browser you could probably have a dozen+ doc pages open with resource usage below that of a single regular browser tab. Perfect for things that you have sitting open for long periods of time.
That's it. Plus they would work neatly on old computers/phones.
There could be a way: this HTML-lite spec would be a subset of the current standard, so that if you open an HTML-lite page in a normal browser it would still work, but an HTML-lite browser would only open HTML-lite sites. Apart from scratching a tech itch, it could be used in places where a full browser isn't needed, especially if you control content generation:
- TV screen UIs
- game engines that embed a Chrome-embedded thing (Steam store page kind)
- some Electron apps / a lighter cross-platform engine
- less sucky QML
- I think WeChat or something has its own XML-based app framework thing (so it could be useful to people wanting to build an everything-app platform)
- a much richer Markdown format?
It’s called WML/WAP
I think we can do better than a 15x15 text window
While we're wishing, can we split CSS into two parts - styling and layout? Also, I'd like to fix the spelling on the "referer" header...
And also bring back progressive enhancement.
https://en.wikipedia.org/wiki/Progressive_enhancement
Do it, man. Call it "MicroWeb" or whatever. Write an agent, make it "viewable with regular browsers". I think this could be cool.
Years ago I wrote a tiny xhtml-basic browser for a job. It was great. Some of my best work. But then the iPhone came out and xhtml-basic died practically overnight.
So you want 2026 to be the year of Google AMP?
What about https://geminiprotocol.net/
I can't think of an instance of the web contracting like that. Maybe when Apple decided not to support Adobe Flash.
In the earlier days of the web, there were a lot more plugins you'd install to get around on most websites: not just Flash, but things like PDF viewers, Real Video, etc. You'd regularly have to install new codecs, etc. To say nothing of the days when there were some sites you'd have to use a different browser for. A movement towards more of a standards-driven web (in the sense of de facto, not academic, standards) is what made most of this possible.
Arguably XSLT
Would be cool to create a MicroBrowser, just to browse stuff that's compatible.
And Microsoftware running on the Micronet.
You mean like the piece of crap that was WAP?
You can already create websites to these standards. Then truncate large parts of WebKit and create a new browser. Or base it on Servo.
I think there needs to be a split between the web browser as a document renderer and link follower, and the web browser as a portable target for GUI applications. But frankly my biggest gripe is that you need HTML, JS, and CSS: three distinct languages, extremely dissimilar in syntax and semantics, and you need all three of them (or some bastard cross-compiler to turn your JSX into them). Just make a decent scripting language and interface for the browser and you don't need that nonsense.
I understand this has been tried before (Flash, Silverlight, etc). They weren't bad ideas; they were killed by companies that were threatened by the browser as a standard target for applications.
I agree. Something componenty like Flash, yes. But it'd be easier to subset what already exists..
I mean, you can do all that now, so that's not the problem. The problem would be convincing millions of people to switch, when 99.99999% of them couldn't care less.
My idea is to use Markdown over HTTP(S). It's relatively easy to implement a Markdown renderer, compared to an HTML renderer. It's possible to browse that kind of website with an HTML browser through a very simple wrapper on either the client or the server side, so it's backwards compatible. And it's rich enough for a lot of websites with actually useful content.
Now I know that Markdown generally can include HTML tags, so probably it should be somewhat restricted.
It could allow a second web to be implemented in a compatible way, with simple browsers.
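For what it's worth, the client side of that could be tiny. A rough sketch in Node (assuming the `marked` npm package as the renderer and a hypothetical URL; a real Markdown browser would also want to strip or escape raw HTML, per the point above about restricting it):

    // Fetch a Markdown resource over HTTPS and render it to HTML for display.
    import { marked } from "marked";

    const res = await fetch("https://example.com/index.md");  // hypothetical URL
    const markdown = await res.text();

    // A restricted profile would disallow inline HTML entirely.
    const html = marked.parse(markdown);
    console.log(html);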
You can just use HTML4 if you want, it's already supported and standardized. Markdown is very much not.
I believe this is the way we might get out of this mess.
With a Markdown-over-HTTP browser I could already almost browse GitHub through the READMEs, and probably other websites too.
Markdown is a really loved and now quite popular format. It is sad that Gemini created a separate, closed format instead of just adopting it.
A few too many 9s there I think. You're estimating that only 1 person in every 10 million could care less. So less than 50 such people in the USA for example
Maybe you don't need a big enough percentage to change things, just a sufficient absolute number, which given the size of the internet might happen with the right 0.00001%.
Oh, they would care if someone showed them much snappier versions of the services they use. They just don't know any better.
And if you find you need more features than that - just build an app, don't make the web browser into some overly bloated app!
But most "apps" are just webviews running overcomplicated websites in them, many of which are using all the crazy features that the GP post wants to strip out.
Then you have to deal with OS compatibility. That's the main selling point of the Web: it works everywhere.
And, I don't have to run a binary to try your product. The web has a lot of flaws, but it's a good way to deliver properly sandboxed applications with low hassle on the part of the user. I've built my fair share of native vs web apps, and I vastly prefer working on web apps. As a user, I vastly prefer web apps for most things. Not all things, but most. No, I don't want to install your crappy app on my computer and risk you doing something irresponsible. I'll keep you sandboxed in a browser tab that I can easily "uninstall" by closing.
I can't think of a single thing where I prefer a web app over a native alternative, unless it's for one-off use.
I have several web apps installed over the native alternatives. Discord is the most prominent one; I've found their native app has been getting shittier by the day over recent months, while the web app remains as snappy as any Safari page. Plus I can run an adblocker and other extensions in the web app which improve the experience.
Most of the “apps” are 200 MB native monstrosities that could be served by 20 kb of JS.
Except when it doesn't because of browser or platform differences/incompatibilities.
The portability of the Web is imperfect, but it's not even in the same galaxy as the portability of native app platforms; there's just no comparison.
I wonder when he finds the time to do these marvellous things.
On a phone at the moment so I can't try it out, but regarding this "stricter mode", it says global variables must be declared with var. I can't tell if that means that's simply the only way to declare a global, or if omitting var makes the variable locally scoped in this mode. Based on not finding anything while skimming the examples, I assume the former?
It also talks about the global object not being a place to add properties. So where you might do `window.foo = ...` or `globalThis.foo = ...` to turn something from the local context into a global, in this dialect I guess you would have to reserve any globals you wanted with a `var` declaration and then set them by reference, e.g. something like the sketch below.
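(A rough sketch of that pattern in plain JavaScript, not checked against this engine's exact rules:)

    // Reserve the global up front with var...
    var config;
    var registry = {};

    function init() {
      // ...then assign to the already-declared name, or mutate it by
      // reference, instead of adding properties to globalThis at runtime.
      config = { debug: true };
      registry.plugins = [];
    }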
I'm guessing the use of undeclared variables results in an error, instead of implicitly creating a global variable.
I think it means you can't assign to unbound names; you must either declare with var for a global, or let/const for a local.
People talk about how productive Fabrice Bellard is, but I don't think anyone appreciates just how productive he is.
Here's the commit history for this project
b700a4d (2025-12-22T1420) - Creates an empty project with an MIT license
295a36b (2025-12-22T1432) - Implements the JavaScript engine, the C API, the REPL, and all documentation
He went from zero to a complete JS implementation in just 12 minutes!
I couldn't do that even if you gave me twice as much time.
Okay, but seriously, this is super cool, and I continue to be amazed by Fabrice. I honestly do think it would be interesting to do an analysis of a day or week of Fabrice's commits to see if there's something about his approach that others can apply besides just being a hardworking genius.
It's funny how many people replying here just got whooshed. This comment is satire; they don't actually think Bellard wrote everything in 12 minutes.
Doesn't say much. He probably had it largely written already and just put it together. I don't think it would even be humanly possible to do that in 12 minutes.
That doesn't mean anything. I quite often start with writing a proof-of-concept, and only initialize the git repository when I'm confident the POC will actually lead to something useful. Common sense says that those files already existed at the time of the first commit.
I find it frustrating that the imagination of very smart, talented, and capable people is captured by the JavaScript language; surely they could have done better.
Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.
I had Claude Code for web figure out how to run this in a bunch of different ways this morning. I have working prototypes of calling it as a Python FFI library (via ctypes), as a compiled Python module, and compiled to WebAssembly and called from Deno, Node.js, Pyodide, and Wasmtime: https://github.com/simonw/research/blob/main/mquickjs-sandbo...
PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/
Down to -4. Is this generic LLM-dislike, or a reaction to perceived over-self-promotion, or something else?
No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
I think many subscribe to this philosophy: https://distantprovince.by/posts/its-rude-to-show-ai-output-...
Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
Yeah, I agree that it's rude to show AI output to people... in most cases (and 100% if you don't disclose it.)
My simonw/research GitHub repo is deliberately separate from everything else I do because it's entirely AI-generated. I wrote about that here: https://simonwillison.net/2025/Nov/6/async-code-research/#th...
This particular case is a very solid use-case for that approach, though. There are a ton of important questions to answer: can it run in WebAssembly? What's the difference from regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?
Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.
I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.
And again you're linking to your site. Maybe try pasting the few relevant sentences instead of constantly pushing your content in almost every comment. That's what people find annoying. Maybe link to other people's stuff more, or just write what you think here on HN.
If someone wants to read your blog, they will, they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.
Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.
A lot of people have told you this in many threads, but it seems you still don’t get it.
It is because you keep over-promoting AI almost every day of the week in the HN comments.
In this particular case AI has nothing to do with Fabrice Bellard.
We can have something different on HN like what Fabrice Bellard is up to.
You can continue AI posting as normal in the coming days.
Forget about the AI bit. Do you think it's interesting that MicroQuickJS can be used from Python via FFI or as a compiled module, and can also be compiled to WebAssembly and called from Node.js and Deno and from Pyodide running in a browser?
... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
It's interesting but I don't think it belongs as a comment under this post. I can use LLMs to create something tangential for each project posted on HN, and so can everyone else. If we all started doing this then the comment section will quickly become useless and not on point.
Off topic, but I went to your website and saw that you created hackernews-mute; recently I was commenting about how someone must have created such an extension, and ranting about it. So kudos to you for having created it earlier on.
Maybe we HN users have minds in sync :)
https://news.ycombinator.com/item?id=46359396#46359695
Have a nice day! Awesome stuff, I'll keep an eye on your blog. Does your blog/website use Mataroa by any chance? There are some similarities, even if that may just be because both are minimalist. Overall very nice!
Thank you! I don't use Mataroa but I can see the similarities. My current blog setup is a Python script that parses content written in markdown and emits HTML. The CSS is inspired by the other minimal blogs I see here.
Thanks a lot for checking out my blog/project. Have a great day!
I have something like that but as a userscript and with a toggle, it works pretty well for my needs.
Maybe someone finds it useful: https://paste.ubuntu.com/p/rD6Dz7hN2V/
But there is signal in what people are inspired to do upon seeing a new project; why not simply evaluate the interestingness of these sorts of mashups on their own terms? It actually feels very "hacker"-y to go out and show people possibilities like this. I have no particular comment on how "interesting" the derivative projects are in this case, but I have a feeling the reception would have differed if his original post had been framed more like "I think it's super interesting how easy it is to use via FFI on various runtimes X and Y (oh, btw, in the spirit of full transparency: I used AI to help me, and you can see more details at <link>)", especially because I think everyone who peruses HN with some regularity likely knows of Simon's work in at least some capacity, and I am, speaking only for myself, essentially always interested in any sort of project he embarks on, especially his LLM-assisted experiments and such. But hey, at the end of the day all of this value judgment gets realized as plainly as possible with +1 / -1 (up- and down-votes), and I guess it just is what it is. If number bad, HN no like. Shrug.
I agree that there is signal, and that phrasing his original post as you pointed out would have been better.
My issue is that the cost, in terms of time, of these experiments has really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they had spent a reasonable amount of time on it and thus cared about the subject. With LLMs, this is no longer the case.
Usually I watch your stuff very closely (and positively) because you're pushing the edges of how LLMs can be useful for code (and are a lot more honest/forthright than most enthusiasts about it Going Horribly Wrong and how much work you need to do to stay on top of it). This one... looks like a grab bag of random combinations that don't seem like things anyone would actually want to do? Mentioning the sandboxing bit in the first post would have helped a lot, or anything that said why those particular modes are interesting.
Yeah, I failed completely to explain the context here.
I'm currently on a multi-year side-quest to find safe ways to execute untrusted user-provided code in my Python and web applications.
As such, I pay very close attention to any new language or library that looks like it might be able to provide a robust sandbox.
MicroQuickJS instantly struck me as a strong candidate for that, and initial prototyping has backed that up.
None of that was clear from my original comment.
I have been in a similar boat, and here is some software that I recommend or would discuss with you:
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing), and they told me that this has the lowest latency of any sandbox they know of, at only a minor cost in performance.
Btw, for sandboxing, KVM itself feels good too. I had discussed it with them in their Discord server when they mentioned they were working on a minimal KVM server, which has since been open sourced (https://github.com/varnish/tinykvm).
Honestly Simon, Deno hosting / the way Deno works is another interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python, since Python fans would appreciate it.
I will try to look more into your GitHub repository too once I have more free time.
Ah, reading this comment makes your original post 10x more interesting. I guess this is "start with why" in action. :)
It is depressing that the age of LLM coding power arrived during the era of Python and JavaScript.
Unfortunately it means those languages will be the permanent coding platforms.
Simon, although I find it interesting, and I respect you in this field, I still feel the reason people call out AI usage or downvote in this case is that, in my honest opinion, it would be more interesting to see people actually write the code, and more so maintain it, and build a whole community/GitHub project around a MicroQuickJS WASM build itself.
I read this post of yours https://simonwillison.net/2025/Dec/18/code-proven-to-work/ and although there is a point to be made that what you are doing isn't a job, and I myself create prototypes using AI, in the long term (in my opinion) what really matters is the maintenance and the claim of responsibility (as your article says in a way: that I can pinpoint a person who is responsible for the code working).
If I found a bug right now, I wouldn't blame it on you but on the AI, and I have varying amounts of trust in it.
My opinion on the matter is that AI is a good fit for prototyping, but in the long term it definitely isn't, and I am sure you share a similar viewpoint.
I think that AI is so polarizing that any nuance stops existing. Read my recent comment (although, warning, it's long) (https://news.ycombinator.com/item?id=46359684).
Perhaps you could write a blog post about the nuance of AI? I imagine a lot of people share a similar AI policy where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't think there is much long-term incentive to avoid AI unless someone realizes the value of not using it, because using AI just feels so lucrative, especially for the youngsters.
I am 17 years old and about to go into a decent college (with, might I add, immense competition to begin with). I have passion for such topics, only to get dissuaded because the benchmark of solving assignments etc. is now met by AI, and the signal from universities themselves is shrinking. It hasn't shrunk to the point that they are irrelevant; rather, you still need a university to try to get a job, yet companies have frozen hiring, which some attribute to LLMs.
If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (to be honest, sort of like PewDiePie), without being in the industry.
I am not sure what the future holds for me (or for any of us, as a matter of fact), but I guess the point I am trying to make is that there is nuance to the discussion on both sides.
Have a nice day!
Your tireless experimenting (and especially documenting) is valuable and I love to see all of it. The avant garde nature of your recent work will draw the occasional flurry of disdain from more jaded types, but I doubt many HN regulars would think you had anything but good intentions! Guess I am basically just saying.. keep it up.
I didn't downvote you. You're one of "the AI guys" to me on HN. The content of your post is fine, too, but even if it were sketchy, I'd've given you the benefit of the doubt.
I downvoted because I'm tired of people regurgitating how they've done this or that with whatever LLM of the week on seemingly every technical post.
If you care that much, write a blog post and post that; we don't need low-effort LLM show and tell all day, every day.
Here you go: https://simonwillison.net/2025/Dec/23/microquickjs/
No, I mean post it as an HN post, and if anybody cares to see it, they'll upvote that and comment in there. That, instead of piggybacking on other posts to get visibility.
What is the purpose of compiling this to WebAssembly? What WebAssembly runtimes are there where there isn't already an easily accessible (substantially faster) JS execution environment? I know wasmtime exists and is not tied to a JS engine like basically every other WebAssembly implementation, but the places wasmtime gets used aren't barred from depending on V8 or JSC. Usually WebAssembly is used to provide sandboxing, something a JS execution environment is already designed to provide, and it's only reached for when the code that requires sandboxing is native code, not JavaScript. It sounds like a good way to waste a lot of performance for some additional sandboxing, but I can't imagine why you would ever design a system that way if you could choose a different (already available and higher-performance) sandbox.
I want to build features - both client- and server-side - where users can provide JavaScript code that I then execute safely.
Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.
Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
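To make that concrete, the pattern looks roughly like this with the existing quickjs-emscripten bindings for regular QuickJS (not MicroQuickJS, which would need its own bindings; treat the exact names as illustrative and `userSuppliedJavaScript` as a placeholder):

    import { getQuickJS, shouldInterruptAfterDeadline } from "quickjs-emscripten";

    const QuickJS = await getQuickJS();
    const runtime = QuickJS.newRuntime();

    // Cap memory and wall-clock time for the untrusted code.
    runtime.setMemoryLimit(1024 * 640);
    runtime.setInterruptHandler(shouldInterruptAfterDeadline(Date.now() + 1000));

    const vm = runtime.newContext();
    const result = vm.evalCode(userSuppliedJavaScript);
    if (result.error) {
      console.log("failed:", vm.dump(result.error));
      result.error.dispose();
    } else {
      console.log("ok:", vm.dump(result.value));
      result.value.dispose();
    }
    vm.dispose();
    runtime.dispose();

The same shape should carry over to a MicroQuickJS WASM build, presumably with a much smaller binary.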