Sure, the current doctrine of programming says it's good, but let's take a step back. First of all, it's counterintuitive to anyone used to writing English, and it damages readability: you may have naming conflicts without realizing it, and without the compiler being able to warn you.
Also, in addition to remembering what a function is called, you have to remember its casing, which different libraries may phrase differently (like C libraries versus C++ libraries).
Setting aside the very real issues it may cause in the design of your software, it can also lead to silly library cruft (see Java's Color class): how many ways can you spell blue?
Case-sensitivity is, more or less, a default: different strings have distinct encodings, so you treat them as distinct identifiers.
The alternative to case-sensitivity requires your compiler to know about case and, more importantly, how to do case-folding. At that point, you can either (a) restrict identifiers to some limited (probably ASCII) subset of characters, (b) make only some subset of the acceptable characters (reliably) case-insensitive, or (c) require every compiler to carry tables for case-folding.
That's before we get into the locale-dependence of case-folding, which makes the letter "i" unreliable.
And you still have to distinguish Color and Colour.
It is actually possible to define all of those things in a language standard and force compilers to do it certain ways, or to document possible "implementation permissions", but, again, Unicode identifiers are rare in practice because programming is designed for English speakers, like it or not.
Case insensitivity sounds good except it quickly runs afoul of "language isn't so simple."
If I define a variable as "groß", does "GROSS" or "GROẞ" match it (or both, which probably implies "gross" would match as well)? What about "ê" and "E"? Or the infamous i/I/İ/ı debacle, which could make matching "insane" to "INSANE" locale-dependent? How do you define case-insensitivity in a way that makes sense?
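For a concrete sense of how messy this gets, here is a sketch using Python's `str.casefold()`, which implements Unicode full (locale-independent) case folding; the strings are the same examples as above:

```python
# Python's str.casefold() does Unicode *full* case folding, locale-independent.
print("groß".casefold())    # "gross": ß folds to "ss"
print("GROẞ".casefold())    # "gross": capital ẞ also folds to "ss"
print("GROSS".casefold())   # "gross": so all three spellings collide

print("ê".casefold() == "E".casefold())  # False: "ê" is not "e"

# The Turkish i problem: the folding tables are locale-independent, so the
# dotted/dotless capital I pair cannot round-trip correctly for Turkish text.
print("İ".casefold() == "i")  # False: İ folds to "i" plus a combining dot above
print("ı".casefold() == "i")  # False: dotless ı folds to itself
```

Note that under this one folding, "groß", "GROSS", and "GROẞ" all match, while the Turkish capitals match nothing you would expect.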
These are solved problems, though, and Unicode identifiers are rare in practice...
See Normalization Form KC and Clause 21 of ISO/IEC 10646:2017.
"Normalization forms are the mechanisms allowing the selection of a unique coded representation among alternative, but equivalent, coded text representations of the same text. Normalization forms for use with ISO/IEC 10646 are specified in the Unicode Standard UAX #15..." yada yada
Unicode normalization doesn't actually solve a single problem I mentioned. All of the listed characters are equal to themselves in both NFC and NFKC.
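A quick check bears this out; a sketch with Python's standard `unicodedata` module, using the same characters discussed above:

```python
import unicodedata

# Each of the problem characters normalizes to itself under both NFC and NFKC,
# so normalization alone never merges the case variants in question.
for ch in ["ß", "ẞ", "ê", "İ", "ı"]:
    assert unicodedata.normalize("NFC", ch) == ch
    assert unicodedata.normalize("NFKC", ch) == ch

# In particular, NFKC does not turn "groß" into "gross":
print(unicodedata.normalize("NFKC", "groß"))  # "groß"
```

Normalization picks a canonical encoding for the *same* text; it was never designed to equate upper- and lowercase forms.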
Also Unicode identifiers aren't rare in terms of language support. Most of the popular languages support them--C/C++, C#, Java, PHP, Python, Perl, Swift, Go, Rust, Ruby, JavaScript, even Ada. It's actually difficult to find a popular language that prohibits Unicode identifiers entirely (MATLAB does, not sure about Visual Basic).
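For instance, Python accepts Unicode identifiers (PEP 3131), and CPython goes further: the parser NFKC-normalizes identifiers, so visually distinct spellings can silently name the same variable. A sketch (the variable names are just for illustration):

```python
# Valid Unicode identifiers in Python (PEP 3131):
π = 3.14159
größe = 42

# CPython NFKC-normalizes identifiers at parse time, so the black-letter
# capital H (U+210C) and plain ASCII "H" are the same name.
ns = {}
exec("ℌ = 7", ns)
print("H" in ns)  # True: ℌ was folded to "H" when the source was parsed
```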
https://docs.oracle.com/javase/7/docs/api/java/awt/Color.htm...