A CRDT library working at the code unit level? Ouch. Of course that’s going to go wrong, it was inevitable.
As for using extended grapheme clusters, it sounds a little bit iffy—maybe possible to use correctly, maybe not, because they’re not stable over time. That style of thing has created some fascinating bugs, like (a few years ago) index corruption in PostgreSQL due to collation changes.
Unicode scalar values are technically safe: you can’t introduce invalid Unicode. But you can definitely still end up with nonsense.
> We made emoji an atomic node type.
That avoids problems for emoji, but leaves the underlying hazard untouched. I imagine it could still theoretically occur with other text, probably CJK. But probably only theoretically.
> This splits by grapheme clusters rather than code units. No orphaned surrogates, no split emoji. It's what .slice() should have been doing all along, but of course UTF-16 predates emoji by decades.
I do not agree that slice() should operate on extended grapheme clusters. Don’t lump the grapheme cluster/scalar value split in with the sins of UTF-16 and its unreliable code point/code unit split.
UTF-16 was an unforced error (and I still can’t work out why it wasn’t obvious from the start that UCS-2 would never be enough). But the concept of multiple scalars contributing to a single logical unit was always inevitable.
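For anyone who wants to see the three granularities side by side, here's a quick console sketch (nothing from the article; the family emoji is just an arbitrary ZWJ example):

    const s = "a👩‍👩‍👦b";

    // Code units: slice() can land inside a surrogate pair and leave a lone surrogate.
    s.slice(0, 2);                 // "a\uD83D" (invalid UTF-16)

    // Scalar values (code points): always valid Unicode, but a ZWJ sequence
    // can still be cut apart, which is the "nonsense" case above.
    [...s].slice(0, 3).join("");   // "a👩‍" (valid string, broken family emoji)

    // Extended grapheme clusters: Intl.Segmenter keeps the emoji intact.
    const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    [...seg.segment(s)].map(x => x.segment);   // ["a", "👩‍👩‍👦", "b"]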
Just noticed this is getting some traffic! It's a little buried in the post, but I made an interactive tool for exploring surrogate pairs as part of this:
- https://george.mand.is/invalid-surrogate-pairs/
I thought it was something that's easier to play with and get a feel for than to just read about.
Windows allows unpaired surrogates in filenames, which is invalid UTF-16. Likewise, Linux allows invalid UTF-8 byte sequences in filenames.
Because invalid UTF-16 strings can show up in places within Windows, someone made a UTF-8 variant called "WTF-8", which allows unpaired surrogates to survive a round trip.
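You can see why a variant like WTF-8 is needed straight from the console: the standard UTF-8 encoder refuses to emit a lone surrogate and substitutes U+FFFD instead, so the string can't round-trip through plain UTF-8 (a small sketch):

    // A lone high surrogate is a legal JavaScript string, but not valid Unicode.
    const lone = "\uD83E";

    // The WHATWG TextEncoder replaces it with U+FFFD (bytes EF BF BD)
    // rather than emit an ill-formed UTF-8 sequence...
    new TextEncoder().encode(lone);   // Uint8Array [239, 191, 189]

    // ...so decoding gives back the replacement character, not the original.
    new TextDecoder().decode(new TextEncoder().encode(lone));   // "�"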
In summary, Unicode code points (characters) are 32 bit. JavaScript manipulates Unicode in UTF-16 for historical reasons, because at some point before Unicode, 16 bit was deemed enough (UCS-2). UTF-16 run length encodes 32-bit Unicode codepoints into one or two code units. Splitting in the middle of a codepoint produces one invalid half-string and one semantically different half-string.
An emoji is a sequence of Unicode codepoints producing a single grapheme. Splitting in the middle of a grapheme will produce two valid strings, but with some funky half-baked emoji. So for a text editor it makes sense to split at grapheme boundaries.
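For reference, the actual arithmetic UTF-16 uses for code points above U+FFFF, sketched with a throwaway helper (toSurrogatePair is just an illustrative name, not a real API):

    // Encode a code point above U+FFFF as a UTF-16 surrogate pair.
    function toSurrogatePair(cp) {
      const v = cp - 0x10000;           // 20 bits of offset remain
      const hi = 0xD800 + (v >> 10);    // top 10 bits go to the high surrogate
      const lo = 0xDC00 + (v & 0x3FF);  // bottom 10 bits go to the low surrogate
      return [hi, lo];
    }

    toSurrogatePair(0x1F920).map(n => n.toString(16));   // ["d83e", "dd20"]
    "🤠".charCodeAt(0).toString(16);    // "d83e" (first code unit)
    "🤠".codePointAt(0).toString(16);   // "1f920" (whole code point)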
> Unicode code points are 32 bit
21-bit, actually. It was supposed to be 32-bit, but UTF-16 caps out at 21-bit, so they lopped eleven bits of potential from Unicode (and UTF-8, so no more six-byte encoding).
> at some point before Unicode
No, in the early days of Unicode.
> run length encodes
Um… what? RLE is a data compression thing, UTF-16 has nothing to do with it.
Once I ran into this, it became hard to treat strings “normally” in any situation; alternatively, I’d force hard encoding requirements in the domain. Regardless, handling grapheme clusters properly is hard and easy to get wrong.
I recently ported a program from Python to Rust, and the original author used string regexes. Input and output document encoding mattered, but the characters that needed to be matched were always lower ASCII. The Python program could have used binary regexes, but instead forced an input encoding (UTF-8) and made the user choose an output encoding. When the input comes from an unknown process or legacy data, however, you don’t always get the luxury of assuming the encoding. Switching to binary regexes and ignoring encoding altogether simplified the logic, eliminated classes of errors, and made the program work in scenarios it couldn’t before. Getting rid of the last decoding/encoding code gave me so much relief, especially when all of the wacky encoding tests I had already written continued to pass.
You are reminding me we also circled an issue at one point where a backend system in Python needed to agree with the client (JavaScript) on the character count of a piece of content. Another place Intl.Segmenter would've helped.
If I'm remembering correctly, we briefly explored a solution where we told Python "This is a UTF-16LE encoded string" so the count would match, but I think we learned/realized the endianness is actually dictated by the client's machine (Going from memory here). Ultimately we just changed the solution so the client was the source of truth about lengths and counts.
These threads are surfacing all kinds of things I forgot about and didn't add in that blog post. Maybe I need to write another, haha.
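For anyone hitting the same Python/JavaScript count mismatch: the disagreement is just about which unit each side counts by default. A quick sketch:

    const s = "🤠❤️";

    // JavaScript counts UTF-16 code units.
    s.length;                        // 4

    // Python's len() counts code points; the JS equivalent:
    [...s].length;                   // 3

    // What a user would call "characters" (grapheme clusters):
    const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    [...seg.segment(s)].length;      // 2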
I had an emoji-cut-in-half problem in Dart. I was a bit surprised because I thought substring operations worked on characters. It only caused an invalid Unicode symbol, though, so not too bad.
Writing property tests on functions that work with strings is a good way to find lots of Unicode issues.
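A minimal sketch of that idea, assuming the fast-check library: generate arbitrary Unicode strings and split points, then assert a property that ought to hold. This particular property fails as soon as the generator lands a split inside a surrogate pair.

    const fc = require("fast-check");

    // Property: URI-encoding any prefix of a string should never throw.
    // The shrunk counterexample is a split inside a surrogate pair.
    fc.assert(
      fc.property(fc.fullUnicodeString(), fc.nat(20), (s, i) => {
        encodeURIComponent(s.slice(0, i));   // throws URIError on a lone surrogate
      })
    );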
Damn, I’ve never really had to deal with Unicode all that much.
It was already bad enough that instead of bytes, we have to worry about code points. Now even that isn’t enough?
It would have been expensive, but all characters should have been fixed size 64bit values.
> It would have been expensive, but all characters should have been fixed size 64bit values
You're making the same mistake that numerous people made before you: thinking that it's as simple as using arrays of large enough numbers. First they thought that two bytes per symbol would be enough, then four. Spoiler alert: it wasn't. And eight won't work either.
UnicodeV6 - 128 bits per character!
> It would have been expensive, but all characters should have been fixed size 64bit values.
It would have been a non-starter, and then we'd all be dealing with Shift-JIS, BIG5, and FSM knows how many different codepages to this day. UTF-8 is about as elegant as it gets, though Java and JS still managed to fuck that up too (they both encode every codepoint outside the BMP as surrogate pairs in UTF-8)
> Java and JS […] both encode every codepoint outside the BMP as surrogate pairs in UTF-8
I can’t comment on Java, but JS I know reasonably well and I can’t think of any place it uses CESU-8.
That's called CESU-8. https://www.unicode.org/reports/tr26/tr26-4.html
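For the curious, the difference is easy to see by hand with 🤠 again. This is just an illustrative sketch (cesu8 is a made-up helper, not a library function) of what a CESU-8-style encoder does: it encodes each UTF-16 code unit as if it were a 3-byte BMP code point, instead of emitting one 4-byte sequence for the whole code point.

    const cowboy = "🤠";   // U+1F920, surrogate pair D83E DD20

    // Proper UTF-8: one 4-byte sequence for the code point.
    Array.from(new TextEncoder().encode(cowboy), b => b.toString(16));
    // ["f0", "9f", "a4", "a0"]

    // CESU-8 style: each surrogate code unit becomes its own 3-byte sequence.
    function cesu8(str) {
      const out = [];
      for (let i = 0; i < str.length; i++) {
        const u = str.charCodeAt(i);   // raw UTF-16 code unit, surrogates included
        if (u < 0x80) out.push(u);
        else if (u < 0x800) out.push(0xC0 | (u >> 6), 0x80 | (u & 0x3F));
        else out.push(0xE0 | (u >> 12), 0x80 | ((u >> 6) & 0x3F), 0x80 | (u & 0x3F));
      }
      return out.map(b => b.toString(16));
    }
    cesu8(cowboy);   // ["ed", "a0", "be", "ed", "b4", "a0"], six bytes, invalid UTF-8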
It's good to know about surrogate pairs in Unicode. It was new to me too when I was helping track down incomplete Unicode flags in the (excellent) phanpy Mastodon client.
Author went for Intl.Segmenter too: https://github.com/cheeaun/phanpy/issues/1491
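The flag case is a nice illustration of why: each regional-indicator letter is itself an astral code point, so an index-based cut has two different ways to break a flag (a quick sketch, not the actual phanpy fix):

    const flag = "🇺🇸";   // U+1F1FA U+1F1F8, two regional indicators

    flag.length;        // 4 (two surrogate pairs)
    flag.slice(0, 1);   // "\uD83C", a lone surrogate, invalid
    flag.slice(0, 2);   // "🇺", valid Unicode, but the flag is gone

    // Intl.Segmenter treats the whole flag as one grapheme cluster.
    const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    [...seg.segment(flag)].map(x => x.segment);   // ["🇺🇸"]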
My recollection (that I didn't add to the story): I don't think Intl.Segmenter had great browser support then (2022). Even if it had, it still wasn't a quick/obvious fix for our problem, given where it was occurring in our stack. But I do remember looking at it then.
Great write-up. Do most modern languages handle invalid surrogates gracefully, or is it still a "good luck" situation depending on the runtime?
Modern string libraries largely use UTF-8 [0], and surrogates, regardless of whether they’re paired, are invalid in UTF-8. So, in a modern string library, as built into most modern languages, you will not encounter surrogates except when translating between encodings.
[0] But everyone disagrees as to what indexing a string means, so you need to make an actual choice if you want anything involving indexing to match across languages.
> surrogates, regardless of whether they’re paired, are invalid in UTF-8
Java did not get the memo. Since the char type is fixed at 16 bits, it uses surrogates to encode everything outside the BMP, regardless of the encoding.
The language handled it fine. It will generally just show replacement characters (�) for combos that don't map to anything.
It was really `encodeURIComponent` that didn't handle it gracefully.
If you just type this into the console (surrogate pair for cowboy smiley face emoji), you see it encodes it ("%F0%9F%A4%A0"):
encodeURIComponent("\uD83E\uDD20")
If you give it an invalid surrogate pair, it will throw an actual error:
encodeURIComponent("\uDD20\uD83E")
No, the language did not handle it fine. It allowed an invalid Unicode string to exist. This is basically a UTF-16 affliction—nothing that does UTF-16 validates, whereas almost everything that does UTF-8 does validate. encodeURIComponent deals with UTF-8, so of course it throws.
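Worth noting for anyone dealing with this today: recent engines ship explicit well-formedness helpers (String.prototype.isWellFormed / toWellFormed, ES2024), so you can validate or sanitize a UTF-16 string before handing it to something strict. A small sketch:

    const bad = "\uDD20\uD83E";   // the reversed pair from the example above

    bad.isWellFormed();           // false, it contains lone surrogates
    bad.toWellFormed();           // "\uFFFD\uFFFD", lone surrogates replaced

    encodeURIComponent(bad.toWellFormed());   // "%EF%BF%BD%EF%BF%BD", no throw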