Re: sendStringParametersAsUnicode: How to insert unicode data correctly



Hi Sriram,

We are also facing the same problem when trying to insert Korean
characters into a SQL Server 2005 database. Did you find a solution for
this? If yes, kindly share anything you have.

Thanks


"Joe Weinstein" wrote:



Sriram wrote:

Thanks for the explanation. I am having trouble with unicode data in SQL
Server 2005.

We are using SAP XI and connecting it to SQL Server 2005 via JDBC. Unicode
data arrives in SQL Server as "????".

The destination columns are nvarchar and store unicode data correctly.
External updates from Excel/text files are processed correctly.

Additionally, the information is stored and displayed correctly in SAP XI.

Can you let me know what other settings should be checked and corrected?

Regards,
Sriram.


I would say that whatever you are using to display the data that is
showing "?????" is unable to display the characters you have. The
data itself may well be exactly correct as inserted and retrieved.
This may just be that the JVM you are using does not have its
charset/locale configured to print your characters, so whatever the
character is, if it's not known, the JVM prints '?'.
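One quick way to rule the display out, along the lines of the above: dump each character's code point in hex instead of printing the characters themselves, so the console's charset never gets a say. A minimal sketch (the string literal here just stands in for a value read back through JDBC):

```java
public class HexDump {
    // Render each char of the string as its hex code point, space-separated.
    static String toHex(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            if (i > 0) sb.append(' ');
            sb.append(Integer.toHexString(s.charAt(i)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // If this prints the code points you inserted, the round trip was
        // fine and only the display step is substituting '?'.
        System.out.println(toHex("\u0143\u0144")); // prints "143 144"
    }
}
```

If the hex dump shows the original code points, the data is intact and only the display layer is at fault; if it shows 3f values, the data really was mangled on the way in.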




"Joe Weinstein" wrote:



sm wrote:


When inserting unicode data, we would like to programmatically override the
setting and send the data encoded as Unicode (UTF-16), instead of defaulting
the whole app to unicode=true and taking a performance hit.


Configuration: MS SQL Server 2005 SP2, and MS JDBC driver version 1.1.
sendStringParametersAsUnicode has been set to false for performance
reasons.
Any suggestions? We have tried the cast(? as nvarchar) function, but that
did not help.

Sample code/output:
String text = "\u0143\u0144";
sendStringParametersAsUnicode=false
insert into unitable (_ntext) values (?)
Inserting into database:
143 144 (printed hex values)
Read from database:
3f 3f (printed hex values)

Thanks,
sm.

Hi. You have only two choices:

1 - the default (sendStringParametersAsUnicode=true). This
means the driver will send Java strings as 16-bit characters.
This is the default because Java characters are 16-bit, and
no driver would presume to automatically mutate your data.

2 - sendStringParametersAsUnicode=false. This means the driver
will send Java strings as 8-bit characters, *silently truncating
any high-byte content before sending*. That's why you saw what
you saw.

You want/need the first if your strings have chars with any
non-zero high-order bytes.
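The "3f 3f" in the sample output above can be reproduced in pure Java: narrowing a 16-bit string to 8-bit bytes replaces every character the target charset cannot represent with '?' (0x3f). A minimal sketch, using ISO-8859-1 as a stand-in for whichever single-byte code page the connection uses (the exact code page may differ in your setup, but any character outside it is replaced the same way):

```java
import java.nio.charset.StandardCharsets;

public class EightBitLoss {
    // Narrow the 16-bit string to single-byte chars, as an 8-bit send
    // effectively does; unmappable characters become '?' (0x3f).
    static byte[] narrow(String s) {
        return s.getBytes(StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        String text = "\u0143\u0144"; // the two chars from the sample code
        for (byte b : narrow(text)) {
            System.out.printf("%02x ", b); // prints "3f 3f "
        }
        System.out.println();
    }
}
```

U+0143 and U+0144 are outside ISO-8859-1's U+0000..U+00FF range, so both come out as 0x3f, which matches the "Read from database" output in the post.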

There is no performance issue to speak of in the *sending*
of data. The main issue is whether the DBMS will use the
table indexes when searching with the sent data as search
criteria. If data is sent as 16-bit, it will use any NVARCHAR
indexes, but not VARCHAR indexes. If it is sent as 8-bit, it
will use VARCHAR indexes, but not NVARCHAR.
So as long as you make sure you send the data as you need,
you're OK. If you send as 16-bit to compare to VARCHAR columns,
the DBMS will skip indexes and do table scans. That is where
the pain comes in.
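In practice that means choosing the setting per connection to match the column types it targets, rather than one global value. A sketch, assuming the Microsoft JDBC driver's semicolon-separated URL form (host, database, and credentials here are placeholders):

```java
public class UnicodeUrl {
    // Build a connection URL with the property chosen per use: true when the
    // parameters target NVARCHAR columns, false when they target VARCHAR.
    static String buildUrl(String host, String db, boolean unicode) {
        return "jdbc:sqlserver://" + host
            + ";databaseName=" + db
            + ";sendStringParametersAsUnicode=" + unicode;
    }

    public static void main(String[] args) {
        String url = buildUrl("localhost:1433", "mydb", true);
        System.out.println(url);
        // Connection con = DriverManager.getConnection(url, user, password);
    }
}
```

Connections built with unicode=true keep NVARCHAR index usage (and correct data) for strings with high-byte characters; connections with unicode=false keep VARCHAR index usage for plain 8-bit data.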

Joe Weinstein at BEA Systems



