Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (4) by Fred
You could do this:
select
cast(description as varchar(1000))
from
(
select
trim(trailing from substring(description from 1 for (select max(character_length(description)) from Table_A))) as description
from Table_A
) as A
order by description;
I'm not sure the cost of the singleton subquery plus the function calls will be worthwhile compared to the original query, though.
Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (5) by ABHITD
Thanks FRED :)
Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (6) by dnoeth
A string is always expanded to its defined size; SUBSTRING will not change that.
You wrote "the actual data saved in this description field is < 255 bytes", so you can simply CAST within the ORDER BY:
select description from TABLE_A ORDER BY cast (description as varchar(255))
Do you actually need a perfectly sorted result?
A human being will not check whether it's still sorted correctly after the nth character, so cast(description as varchar(50)) might be OK.
Otherwise this should work: run the MAX(CHAR_LENGTH) first and export the length to a file, then .ACCEPT it as len in the FExp script and dynamically use cast(description as varchar(&len)), as in the sketch below.
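For reference, a minimal sketch of that two-step approach, assuming the first step has already written the length as plain text (e.g. 213) to len.txt, and using placeholder logon details with the thread's Table_A:
.LOGTABLE utility_log;
.LOGON tdpid/user,password;
.ACCEPT len FROM FILE 'len.txt';
.BEGIN EXPORT;
.EXPORT OUTFILE data.txt MODE RECORD FORMAT TEXT;
SELECT CAST(description AS VARCHAR(&len))
FROM Table_A
ORDER BY 1;
.END EXPORT;
.LOGOFF;
The &len substitution happens when the script is parsed, which is why the length must come from a file or parameter rather than from a subquery inside the CAST.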
Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (7) by Fred
Thanks for the correction! I should have remembered that the SUBSTRING / SUBSTR result has the same size as the original field.
You are correct, of course. CAST is required and you can't use a scalar subquery in the type description, so it would need to be a script variable.
Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (11) by ABHITD
Thanks a lot DNOETH & FRED :)
But if we use CAST alone and the description exceeds 255 characters, then it will give an error like "Right truncation of string data",
so we need to use the following in the ORDER BY:
CAST(SUBSTR(DESCRIPTION,1,255) AS VARCHAR(255))
Regards
ABHITD
Varchar expanding to its maximum length in fexp - ALSO USING THIS FIELD IN ORDER BY - response (12) by dnoeth
Oops, you're running ANSI-mode sessions.
Of course, then you need CAST(SUBSTRING).
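Putting both corrections together, the ANSI-mode-safe form of the earlier query would be (a sketch, using the thread's TABLE_A):
SELECT description
FROM TABLE_A
ORDER BY CAST(SUBSTRING(description FROM 1 FOR 255) AS VARCHAR(255));
Here SUBSTRING guarantees the value fits into 255 characters, so the CAST cannot fail with a right-truncation error in an ANSI-mode session.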
SQL Assistant - # of rows updated - forum topic by gopal_rama
When I update records in an iSeries file using SQL Assistant, the # of records updated is shown neither in the status bar nor in History.
The same is the case with deletes and inserts. Please help me figure out how to make this work. Thanks.
*** Failure 6760 Invalid timestamp. in bteq import - forum topic by n@new1
Hi All,
While importing data from an INDICDATA file, I'm getting the following error:
*** Growing Buffer to 4757
*** Failure 6760 Invalid timestamp.
Statement# 1, Info =0
Export Script:
.EXPORT INDICDATA FILE='data.txt'
.decimaldigits 38
.SET WIDTH 9000
.SET SESSION CHARSET "UTF8"
SELECT CAST(Col1 AS CHAR(20)),
CAST(Col2 AS CHAR(20)),
CAST(Col3 AS CHAR(4)),
CAST(Col4 AS CHAR(100)),
CAST(Col5 AS CHAR(100)),
CAST(Col6 AS CHAR(150)),
CAST(Col7 AS CHAR(19)),
CAST(Col8 AS CHAR(100)),
CAST(Col9 AS CHAR(19)),
CAST(Col10 AS CHAR(100)),
CAST(Col11 AS DECIMAL(38,0)),
CAST(coalesce(Col12,CAST('2014-01-01 00:00:00' AS TIMESTAMP(0))) AS CHAR(19)),
CAST(Col13 AS CHAR(1000)),
CAST(Col14 AS INTEGER),
CAST(Col15 AS INTEGER),
CAST(Col16 AS CHAR(1)),
CAST(Col17 AS CHAR(19)),
CAST(Col18 AS CHAR(19))
FROM status;
.LOGOFF
.QUIT
=========****==============
Table Structure:
CREATE MULTISET TABLE Rc_Test ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
Col1 VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col2 VARCHAR(20) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col3 CHAR(3) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col4 VARCHAR(100) CHARACTER SET UNICODE NOT CASESPECIFIC NOT NULL,
Col5 VARCHAR(100) CHARACTER SET UNICODE NOT CASESPECIFIC NOT NULL,
Col6 VARCHAR(150) CHARACTER SET UNICODE NOT CASESPECIFIC NOT NULL,
Col7 TIMESTAMP(0) ,
Col8 VARCHAR(100) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col9 TIMESTAMP(0) ,
Col10 VARCHAR(100) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col11 DECIMAL(38,0) NOT NULL,
Col12 TIMESTAMP(0),
Col13 VARCHAR(1000) CHARACTER SET UNICODE NOT CASESPECIFIC ,
Col14 INTEGER NOT NULL,
Col15 INTEGER NOT NULL,
Col16 CHAR(1) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL,
Col17 TIMESTAMP(0) NOT NULL,
Col18 TIMESTAMP(0) )
PRIMARY INDEX XNUPI_RC_STATUS ( Col1 ,Col2 ,
Col23 );
=========****==============
Import Script:
========
.IMPORT INDICDATA FILE='data.txt'
.decimaldigits 38
.SET SESSION CHARSET "UTF8"
.REPEAT *
USING Col1 (CHAR(20)),
Col2 (CHAR(20)),
Col3 (CHAR(4)),
Col4 (CHAR(100)),
Col5 (CHAR(100)),
Col6 (CHAR(150)),
Col7 (CHAR(19)),
Col8 (CHAR(100)),
Col9 (CHAR(19)),
Col10 (CHAR(100)),
Col11 (DECIMAL(38,0)),
Col12 (CHAR(19)),
Col13 (CHAR(1000)),
Col14 (INTEGER),
Col15 (INTEGER),
Col16 (CHAR(1)),
Col17 (CHAR(19)),
Col18 (CHAR(19))
INSERT INTO status_test
VALUES (TRIM(:Col1),
TRIM(:Col2),
TRIM(:Col3),
TRIM(:Col4),
TRIM(:Col5),
TRIM(:Col6),
CAST(:Col7 AS TIMESTAMP(0)),
TRIM(:Col8),
CAST(:Col9 AS TIMESTAMP(0)),
TRIM(:Col10),
:Col11,
CAST(:Col12 AS TIMESTAMP(0)),
TRIM(:Col13),
:Col14,
:Col15,
TRIM(:Col16),
CAST(:Col17 AS TIMESTAMP(0)),
CAST(:Col18 AS TIMESTAMP(0)));
.LOGOFF
.QUIT
Could anyone please help me with this?
Thanks
How to install Teradata on UBUNTU - response (3) by flrizzato
OK, but I'd like to install it in an existing Linux VM (Ubuntu 15). Where can I find a good tutorial?
How to install Teradata on UBUNTU - response (4) by dnoeth
There's only a pre-installed VM running Teradata on Suse SLES 10/11.
How to install Teradata on UBUNTU - response (5) by flrizzato
Really weird... I already have a VM with other goodies, used to demo our products, and I'll need to boot a whole new VM only for Teradata? Plus, other things don't work well on SUSE, so trying to install other servers on your VM is a nightmare. I hope Teradata changes this direction soon and just provides an express edition for technology validation like any other software producer.
How to install Teradata on UBUNTU - response (6) by dnoeth
Teradata runs on SLES only; no other Linux distro is officially supported. It's just starting another VM; what's so hard about that?
On a real Teradata system you're not going to install "other servers": it's a dedicated database system, and you're probably accessing it from another server anyway.
Linking/Joining Different Databases in Teradata - forum topic by smarti01
I am trying to link/join 2 different databases from 2 different servers in a query, but I can't seem to get it to work. What is the trick?
Linking/Joining Different Databases in Teradata - response (1) by Fred
This would require TD15 plus QueryGrid Teradata-to-Teradata.
Linking/Joining Different Databases in Teradata - response (2) by m.ali
I have successfully created the Foreign Server between TD and TD but am not able to select data: I get the error "Server object not associated with operator". Then I tried to add the IMPORT clause (syslib.load_from_td) and got an error that the function does not exist.
Please note the version information as well:
Version 15.00.03.04, and the same release.
Do we need to install something extra?
Linking/Joining Different Databases in Teradata - response (3) by Fred
Yes: the Foreign Server grammar is built in, but the QueryGrid table operators are not. Contact your Teradata sales team for details.
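For later readers, the pieces described in this thread fit together roughly as below once the operators are installed. This is only a sketch: td_remote and the object names are placeholders, the USING connection options are omitted because they depend on the QueryGrid version, and the operator names follow the syslib.load_from_td mentioned above:
CREATE FOREIGN SERVER td_remote
/* USING clause with the connection options for your QueryGrid version */
DO IMPORT WITH SYSLIB.LOAD_FROM_TD,
DO EXPORT WITH SYSLIB.LOAD_TO_TD;
/* query a remote table through the server object */
SELECT * FROM remote_db.remote_table@td_remote;
This is consistent with the errors above: without a DO IMPORT clause the server object has no associated operator, and without the QueryGrid package installed the operator function itself does not exist.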
Linking/Joining Different Databases in Teradata - response (4) by m.ali
Thanks, Fred, for the input. Is there any specific reason that Teradata does not provide this TD-to-TD linked server by default? In other DBMSs like SQL Server, this feature helps two different appliances communicate, which can sometimes be tricky in Teradata.
Same ROW-Hash Value for all the record - forum topic by kumarvaibhav1992
Hi All,
When I tried finding the row hash value for a unique column in which all values are distinct, I saw that the row hash value matched for every single record present there.
http://www.teradatatech.com/?p=470 This link suggests that row hash values cannot be the same for unique as well as non-unique columns.
Please clarify this question for me.
Same ROW-Hash Value for all the record - response (1) by Fred
The Teradata hash function is deterministic - same input values always give the same result. It's possible (but unusual) for different input values to give the same result, referred to as a "hash collision".
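Both properties can be checked directly with the hash functions; a minimal sketch, assuming a table tab with a column unique_col (placeholder names):
-- Deterministic: each value always hashes to the same row hash
SELECT unique_col, HASHROW(unique_col) AS rh
FROM tab;
-- Collisions: distinct values that happen to share one row hash
SELECT HASHROW(unique_col) AS rh, COUNT(DISTINCT unique_col) AS distinct_values
FROM tab
GROUP BY 1
HAVING COUNT(DISTINCT unique_col) > 1;
If every row shows the same HASHROW value, check that the argument is the column itself and not a constant expression, since identical inputs are the one case that must produce identical results.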
Same ROW-Hash Value for all the record - response (2) by kumarvaibhav1992
Thanx Fred for your time :)