[Here, ML means "Meta Language", not "Machine Learning". ML is used as an important building block inside some theorem provers and proof assistants]
The key thing with 1ML is that it merges the core and module system.
The ML family has historically had two systems: core and module. They are stratified in the sense that they are separate languages: modules can contain core expressions, but not the other way around.
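Concretely, a minimal sketch in OCaml syntax (SML is stratified the same way):

    (* Modules may contain core expressions: *)
    module Counter = struct
      let x = ref 0                  (* core code inside a module *)
      let next () = incr x; !x
    end

    (* ...but a core expression cannot compute a module. Something like

         let m = if flag then Counter else OtherCounter

       is rejected in the stratified design: `if` belongs to the core
       language, Counter to the module language, and they cannot mix. *)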
1ML blends module and core. This means you have first-class modules in the core, which leads to a pretty nice language design.
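OCaml's first-class modules give a flavor of this, although they are a more restricted mechanism than 1ML's, with explicit packing and unpacking. A minimal sketch:

    module type COUNTER = sig
      val next : unit -> int
    end

    module A : COUNTER = struct
      let x = ref 0
      let next () = incr x; !x
    end

    module B : COUNTER = struct
      let x = ref 100
      let next () = incr x; !x
    end

    (* A core-level `if` selects between modules at runtime: *)
    let pick flag : (module COUNTER) =
      if flag then (module A) else (module B)

    let () =
      let module C = (val pick true : COUNTER) in
      Printf.printf "%d\n" (C.next ())

In 1ML the explicit (module ...) / (val ...) noise disappears, because modules simply are values of the one unified language.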
Furthermore, this being Andreas Rossberg, the rigor with which this is carried out is very high. There are proofs of type safety and correctness along the way, generally to the same high bar as Standard ML (SML).
> communication barrier between academics who are in a position to discuss 1ML in depth and people who are in a position to write new compilers
I think there is something wrong when people working on type systems can't write compilers.
It's not that they can't. It's that it's a waste of time in most cases.
Compilers are moving targets because hardware changes. Maintaining a compiler takes considerable ongoing effort.
So if you are interested in programming language semantics, you can opt to skip the compiler part. This lets you iterate on language designs without the added baggage of translating programs to machine code.
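To make that concrete: a big-step evaluator is often just the semantics rules transcribed. A minimal sketch in OCaml (all names illustrative):

    (* A tiny language and its big-step evaluator; each match arm
       corresponds to one inference rule of the semantics. *)
    type exp =
      | Num of int
      | Add of exp * exp
      | If0 of exp * exp * exp      (* if e1 = 0 then e2 else e3 *)

    let rec eval = function
      | Num n -> n
      | Add (e1, e2) -> eval e1 + eval e2
      | If0 (c, t, e) -> if eval c = 0 then eval t else eval e

    let () =
      print_int (eval (Add (Num 1, If0 (Num 0, Num 2, Num 3))))
      (* prints 3 *)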
You can also argue there's no need. If you present your programming language as operational semantics, then it's trivial to write the rules up as a Prolog program and run it on a Prolog interpreter. Then you can employ a partial evaluator, and the first Futamura projection gives you a compiler. You can choose to host your Prolog program in a programming language which already has access to a partial evaluator, and you are essentially done before you even started.
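The first projection fits in a couple of type signatures. A hypothetical sketch in OCaml (mix, interp, and program are made-up names, not a real library):

    (* Hypothetical interface for a partial-evaluation toolkit. *)
    module type PARTIAL_EVAL = sig
      type ('a, 'b) program   (* representation of code from 'a to 'b *)
      type source             (* programs of the language being defined *)
      type input
      type output

      (* mix: fix the static first argument of a two-argument program,
         yielding specialized residual code *)
      val mix : ('s * 'd, 'r) program -> 's -> ('d, 'r) program

      (* the interpreter, itself represented as a program *)
      val interp : (source * input, output) program
    end

    (* First Futamura projection: specializing the interpreter to one
       fixed source program yields compiled code for that program. *)
    module First_projection (P : PARTIAL_EVAL) = struct
      let compile (src : P.source) : (P.input, P.output) P.program =
        P.mix P.interp src
    end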
I'm someone who has used Prolog in the past, but this is the first time I'm learning of Futamura's work [1]. I knew Prolog was great for building executable grammars, but I had never really tried to do so, and thus have no knowledge of the usual techniques. What an absolutely fascinating methodology; I can see exactly how it maps to Prolog.
[1] - https://static.aminer.org/pdf/PDF/001/006/665/partial_evalua...
But even a toy compiler would be useful to inspire someone else to pick up the concepts.
It doesn't have to be production grade, just as a communication tool.
It's important to note that not every research area ends up as a surface language, and research projects often remain in progress for a long time. There does exist a freely available research implementation of a 1ML interpreter (though slightly behind the language's formalization), offered by the author:
https://people.mpi-sws.org/~rossberg/1ml/
The thing is that this is a research prototype, not a real compiler. It's not usable to the same degree as a language like SML or Haskell. There is a lot more work beyond a grammar that goes into creating a compiler for a high-level language.
I would say we have a problem when people who write compilers can't read type theory papers, but then our backgrounds might differ. ;-)
I kind of agree; as a counterexample I think about Scala.
I think Martin Odersky influenced many other mainstream languages (including Java), which picked up functional concepts and integrated them with OOP.
Pure research is fine, but being right in a vacuum usually ends up reducing the impact and value of the research (or at least postponing it).
Languages and compilers are more of an applied part of the science, and I think it's best if they're treated more like engineering.
Type systems have many more applications than just compilers.
Academic types are often not interested in practical things and getting their hands dirty.
As a pragmatic type, I find it endlessly disappointing how many other pragmatic types have absolutely zero familiarity or grounding in even the surface-level theoretical work that academic types are doing.
See also: golang
1ML, not 1M
Title is "1ML for non-specialists: introduction".
From the article:
> 1ML is a type system designed by Andreas Rossberg and described in a collection of papers by him