Joe Sam Shirah wrote:
> You should be able to take the Unicode string character by character
> and put it into a char variable, which is 16 bits. Check the numeric
> value of the char. If it's greater than 127, it's a double-byte character.
Yeah, this I know.
I think my question was imprecise.
What I'm trying to calculate is: how much of a Unicode string can I fit
in a double-byte field?
For example: if I have a Unicode string that contains a mix of
Japanese and English, alternating back and forth in parts, is there a
way I can determine how much of that string will fit in a fixed-length
double-byte database field? Since the double-byte version of the value
will contain one or more pairs of shift-out/shift-in characters, the
effective length is reduced.
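One way to approach it, sticking to the greater-than-127 rule of thumb quoted above (a minimal sketch, not a CCSID-accurate converter: `charsThatFit` is a made-up name, and I'm assuming one byte each for shift-out and shift-in and that every char above 127 encodes as exactly two bytes): walk the string, accumulate the byte cost of each character plus any shift bytes, and stop just before the fixed field would overflow.

```java
// Sketch: how many leading chars of a Java String fit in a fixed-length
// mixed SBCS/DBCS field, counting shift-out (SO) and shift-in (SI) bytes.
// Assumption: any char > 127 is double-byte (the thread's rule of thumb);
// a real CCSID conversion table may classify characters differently.
public class MixedFieldFit {

    /** Number of leading chars of s that fit in a field of fieldBytes bytes. */
    public static int charsThatFit(String s, int fieldBytes) {
        int bytes = 0;          // encoded bytes consumed so far
        boolean inDbcs = false; // currently inside an SO..SI section?
        for (int i = 0; i < s.length(); i++) {
            boolean dbcs = s.charAt(i) > 127;
            if (dbcs) {
                int cost = inDbcs ? 2 : 3;          // SO + 2 data bytes when opening a section
                if (bytes + cost + 1 > fieldBytes)  // +1 reserves the closing SI
                    return i;
                bytes += cost;
                inDbcs = true;
            } else {
                int cost = inDbcs ? 2 : 1;          // SI + 1 data byte when closing a section
                if (bytes + cost > fieldBytes)
                    return i;
                bytes += cost;
                inDbcs = false;
            }
        }
        return s.length();
    }
}
```

So for an all-Japanese string in a 10-byte field you get 4 characters, not 5: SO + 4×2 data bytes + SI is exactly 10 bytes, and a fifth character would need 12.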
> I also wonder how helpful persisting only half a string will be.
This is transitory summary data, strictly for human consumption. It
will be displayed in a 5250 subfile. If the value is too long, it's
accepted that the data will be truncated.
> You do know that the AS/400 now supports UTF-8, right? Of course,
> that may not be an option; I don't know your app. HTH,
Sadly it's not an option; that would have solved a boatload of
problems from the get-go.
> I also think that, unless you've got things set up really funny, you
> should not get an "SQL statement overflowing the field." I'd expect a data
> truncation warning, which you can program to ignore, sort of getting
The odd thing about that is: if I try to put 120 Unicode bytes into a
120-byte double-byte field, the trailing shift-in is lost. At least it
was when I last tried this (more than a year ago).
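The arithmetic behind that loss can be sketched (a hypothetical illustration; `encodedBytes` and the one-SO/one-SI-per-run assumption are mine, not from any DB2 documentation): an unbroken run of n double-byte characters carries 2n data bytes but occupies 2n + 2 encoded bytes, so 60 DBCS characters (120 "Unicode bytes") need 122 bytes, and cutting at byte 120 clips the trailing shift-in.

```java
// Sketch: encoded size of an unbroken all-DBCS run of n characters,
// assuming exactly one shift-out and one shift-in bracket the run.
public class SoSiOverhead {
    public static int encodedBytes(int dbcsChars) {
        return 1 + 2 * dbcsChars + 1; // SO + data bytes + SI
    }
}
```

By this count, 59 DBCS characters are the most that fit cleanly in a 120-byte field with the shift-in intact.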
IBM System i - For when you can't afford to be out of business
This is the Java Programming on and around the iSeries / AS400 (JAVA400-L) mailing list.
To post a message, email: JAVA400-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
or email: JAVA400-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives.