SQL_COLUMN_PRECISION and SQL_COLUMN_DISPLAY_SIZE with timestamp values

Hello,

I have a serious problem with my ODBC library when it is connected to the
Oracle Instant Client ODBC driver (against an Oracle 10.2 XE database).

For TIMESTAMP(6) columns the SQLColAttribute function returns 26 for both
DISPLAY_SIZE and PRECISION (20 characters plus the 6 digits of
fractional-second precision, as described on MSDN).
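
For reference, this is roughly how I size and bind the column (a simplified
sketch, not my exact code; l_hStmt stands for an already prepared and
executed statement handle):

#include <sql.h>
#include <sqlext.h>
#include <stdlib.h>

SQLHSTMT  l_hStmt;          /* assumed: statement already executed */
SQLLEN    l_nDisplaySize = 0;
SQLLEN    l_nIndicator   = 0;
SQLCHAR  *l_pBuffer;
SQLRETURN rc;

/* ask the driver how wide the character representation is;
   for TIMESTAMP(6) this reports 26 */
rc = SQLColAttribute(l_hStmt, 1, SQL_DESC_DISPLAY_SIZE,
                     NULL, 0, NULL, &l_nDisplaySize);

/* one extra byte for the terminating zero */
l_pBuffer = (SQLCHAR *)malloc((size_t)l_nDisplaySize + 1);

rc = SQLBindCol(l_hStmt, 1, SQL_C_CHAR,
                l_pBuffer, l_nDisplaySize + 1, &l_nIndicator);

/* during this call the extra bytes show up past index 26 */
rc = SQLFetch(l_hStmt);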

But when I step through the SQLFetch call in the debugger, I can see that
the driver writes beyond the allocated buffer:

l_stRow[<column>][<byte>]

....
l_stRow[1][25] 51 '3' //last byte from timestamp value
l_stRow[1][26] 0 '' //looks like Zero-Termination of SQLFetch?
l_stRow[1][27] 48 '0' //additional '0' from driver?
l_stRow[1][28] 48 '0' //additional '0' from driver?
l_stRow[1][29] 0 '' //Zero-Termination from driver?


Has anybody seen something similar? Is this a bug in the Oracle ODBC driver?
Are there any workarounds (other than allocating size * 2 as a
quick-and-dirty solution, sketched below)?
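
For completeness, the quick-and-dirty version I mean is roughly this
(reusing the variables from the sketch above; the factor 2 is only a guess
at how much room the driver really needs):

/* quick and dirty: give the driver twice the reported size so the
   extra bytes it writes after the value stay inside the buffer */
SQLLEN l_nPaddedLen = 2 * (l_nDisplaySize + 1);

l_pBuffer = (SQLCHAR *)malloc((size_t)l_nPaddedLen);

rc = SQLBindCol(l_hStmt, 1, SQL_C_CHAR,
                l_pBuffer, l_nPaddedLen, &l_nIndicator);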

--
Markus Schulz