On Fri, May 17, 2019 at 2:41 AM Scott Klement <rpg400-l@xxxxxxxxxxxxxxxx> wrote:
> Where many people get this wrong is that they encode ASCII or Unicode data
Scott, respectfully, referring to "Unicode" (or "UNICODE" as you later
do) as an *encoding* is just as misleading as what Mihael said.
There has already been too much damage done by several companies
conflating the term "Unicode" with a specific encoding (most commonly
UTF-16LE) or set of encodings. If you understand what Unicode is, then
please try to be a bit more careful and precise when talking about it.
For those who don't know, Unicode is simply a catalog of every known
character of every known language, including made-up ones, and
including emojis. These characters exist as conceptual,
human-understandable characters, and as such have NO ENCODING.
Obviously, there has to be a way to look up each character, so as a
matter of necessity, each character has a number (in Unicode terms,
its code point). But this number is NOT the same thing as an
encoding. For example, the number 5 is
understandable to humans as just a number, which happens to be the
same number as how many fingers most of us have on each hand. How do
we represent this on a computer? Well, as IBM i programmers, we should
be especially appreciative of the fact that there are several ways. It
could be zoned, packed, binary integer, float, and perhaps others. Not
only does each of these have a different bit pattern, we can also
define each of them to have various bit lengths.
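(For anyone who wants to poke at this, here's a little Python sketch --
Python only because it's handy, nothing IBM i specific about it --
showing the same abstract 5 coming out with different bit patterns
depending on the representation we pick. The zoned and packed bytes are
spelled out as literals, since Python has no native decimal types like
those.)

    import struct

    n = 5

    # 4-byte two's-complement binary integer, big-endian: 00 00 00 05
    print(struct.pack('>i', n).hex())

    # IEEE 754 single-precision float: 40 a0 00 00
    print(struct.pack('>f', float(n)).hex())

    # Zoned decimal: digit 5 with an F zone is X'F5'
    # Packed decimal: 5 with a positive sign nibble is X'5C'
    # (written here as byte literals matching the IBM i formats)
    print(b'\xF5'.hex(), b'\x5C'.hex())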
ASCII, EBCDIC, UTF-8, and UTF-16 are all examples of encodings, in the
same way that zoned, packed, binary integer, or float are encodings.
Unicode is NOT such an encoding.
The term "Unicode data" doesn't really mean much. It's basically the
same thing as saying "character data". If there is such a thing as
Unicode data, then it is just as encodable into EBCDIC as it is into
ASCII.
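If you want to see that on a screen, here's a minimal Python sketch
(again, just for illustration) that takes one abstract character, 'A',
and shows that its Unicode code point is just a number, while ASCII,
EBCDIC (code page 37), UTF-8, and UTF-16 each turn it into their own
bytes:

    s = 'A'  # one abstract Unicode character, code point U+0041

    print(hex(ord(s)))                  # the code point: 0x41 -- a number, not an encoding
    print(s.encode('ascii').hex())      # ASCII:           41
    print(s.encode('cp037').hex())      # EBCDIC CCSID 37: c1
    print(s.encode('utf-8').hex())      # UTF-8:           41
    print(s.encode('utf-16-le').hex())  # UTF-16LE:        4100

Same character, same code point, four different byte sequences.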
John Y.