The article sure mentions 💩 a lot.
Just give me plain UTF-32 with its ~4 billion code points, that really should be enough for any symbol we can come up with. Give everything its own code point, no bullshit with combined glyphs that make text processing a nightmare. I need to be able to do a strlen either on byte length or number of characters without the CPU spending a minute counting each individual character.
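To illustrate the byte-length vs. character-count gap being complained about, here's a minimal Rust sketch (Rust strings are UTF-8; the string literal is just an arbitrary example):

```rust
fn main() {
    let s = "na\u{00EF}ve \u{1F4A9}"; // "naïve 💩": ASCII + Latin-1 letter + emoji

    // Byte length is O(1), but it counts UTF-8 bytes, not characters.
    println!("bytes: {}", s.len()); // 11

    // Counting code points means walking every byte: O(n).
    println!("code points: {}", s.chars().count()); // 7
}
```

With a fixed-width encoding like UTF-32, both numbers would come from the same constant-time length check, which is exactly the appeal described above.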
I think Unicode started as a great idea and then kind of blundered into aimless "everybody kinda does what everyone wants" territory. Unicode is for humans, sure, but we shouldn't forget that computers actually have to do the work.
I'm personally waiting for UTF-64 and for Unicode to go back to a fixed-width encoding, forgetting about merging code points into complex characters. Just keep a zeptillion code points for absolutely everything.
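A quick sketch of the "merged code points" problem (again Rust, standard library only, purely for illustration): two strings that render identically as é but don't compare equal, because one is a single precomposed code point and the other is a base letter plus a combining mark:

```rust
fn main() {
    let precomposed = "\u{00E9}"; // é as one code point (U+00E9)
    let combined = "e\u{0301}";   // 'e' followed by a combining acute accent (U+0301)

    // Both render as "é", but the code point counts differ...
    assert_eq!(precomposed.chars().count(), 1);
    assert_eq!(combined.chars().count(), 2);

    // ...and a naive byte comparison treats them as different strings.
    assert_ne!(precomposed, combined);

    println!("{} vs {}", precomposed, combined);
}
```

Giving every character its own single code point, as wished for above, would make this class of mismatch impossible, at the cost of a much larger code space.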