Thursday, February 9, 2017

Error Codes with explanation - 224 ,2571 ,2631 ,2644 ,2646 ,2673 ,3055 ,3523 ,3524 ,3668 ,3812 ,5602 ,6706 ,7535 ,7539 ,8024 ,8239 ,10065


============================================================
Error Code - 224
============================================================


CLI error: MTDP: EM_NOHOST(224): name not in HOSTS file or names database.

Solution:-

Edit the file C:\WINDOWS\SYSTEM32\DRIVERS\ETC\HOSTS
Add or edit the following line as:-

127.0.0.1 localhost dbccop1

Close Bteq/Bteqwin and retry.

=====================================================
** How will you know it should be dbccop1 ?
=====================================================

In the TDPID, the number after "cop" should be 1 because you are on a local machine with only one RDBMS, and the part before "cop" should be "dbc". You can verify this against the i_dbcpath variable value in C:\Program Files\Teradata\Client\13.0\CLIv2\clispb.dat.
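As a concrete check (a sketch; the exact path and contents vary with your client version), the relevant line in clispb.dat looks like this:

```
i_dbcpath=dbc
```

With i_dbcpath set to dbc and a single PE on the local machine, the matching HOSTS entry must be dbccop1.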


===========================================================
Error Code 2571
How to release a table locked by MLoad in Teradata
===========================================================

An MLoad lock occurs when a MultiLoad job terminates without committing, leaving the target table locked for writing. In this case, it is the database administrator's duty to release the locked table. After releasing the MLoad lock, care should be taken to drop the work tables, log tables, and error tables created during the MultiLoad. We can release the MLoad lock using the SQL statements below.

SQL to release Mload lock

RELEASE MLOAD <databasename>.<tablename>;

--OR

RELEASE MLOAD <databasename>.<tablename> IN APPLY;

Teradata always refers to the table with the two-part name. Replace <databasename> and <tablename> with your own database name and table name.

Teradata Error 2571

When you try to release the MLoad lock, you may get error 2571. This happens because either the table is not present or you do not have the privileges to release the MLoad lock.
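After RELEASE MLOAD succeeds, the leftover MultiLoad support tables should be dropped. A minimal cleanup sketch, assuming the conventional WT_/ET_/UV_ table name prefixes and a restart log table named in the job's .LOGTABLE statement (your job may have used different names):

```sql
RELEASE MLOAD mydb.mytable;        -- release the lock first
DROP TABLE mydb.WT_mytable;        -- work table
DROP TABLE mydb.ET_mytable;        -- acquisition-phase error table
DROP TABLE mydb.UV_mytable;        -- application-phase error table
DROP TABLE mydb.mytable_log;       -- restart log table named in .LOGTABLE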


===============================================================
Error Code - 2631
2631, 3111 Errors and the Setting of the Deadlock Detection Timer
===============================================================

Tech Note:

The 2631 error normally is returned when a real DBS deadlock is detected but can also be returned when a transaction has been aborted for some other reason. The rationale for using the single 2631 error code instead of multiple error codes - a separate one for each situation - is that they all require the same user action, i.e. retry the transaction which will most likely run fine.

There are four things which can cause a 2631 error:

1) A true locking deadlock.
2) A very congested system.
3) The number of transactions exceeding the machine capacity, e.g. the lock table is full.
4) A database software bug.

We will analyze cases #1 and #2 in detail and very briefly discuss case #3.
Case #4 is only pursued when a specific problem is found.

Case #1

When there is a real deadlock detected in the system, a 2631 error is returned to the application. Please refer to TTIP 141 (a link is provided under Additional Information/Comments section of this article) for an understanding of the various specific deadlock situations. There is a deadlock time interval setting in the DBS GDO which can be set through the DBSControl utility. The IFP/dispatcher uses this time interval to collect the deadlock information from AMPs. The valid range of this timer is 0 to 3600 seconds and the default value is 240 seconds (4 minutes). Due to the faster CPUs, the system overhead for doing the deadlock checks is significantly reduced. The 240 second default dates back many years and reflects hardware performance that is not typical today. Most sites, therefore, should consider selecting a shorter time interval to i) break the deadlock faster, ii) get the response back to the user sooner and iii) release both the victim transaction and the other blocked transactions which can then continue to complete. In particular, if the workload contains any amount of update operations or utility operations such as Mload or Fastload that are performed concurrently, this time interval should be set to a shorter one.

For an environment which has few database deadlocks, changing to a shorter timer interval has no benefit.

In summary, the optimal time interval (deadlock detection timer) may be between 30 to 45 seconds but can be as low as 15 or even 10 seconds in an environment which has many true deadlocks.
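The timer is changed with the DBSControl utility. A sketch of such a session (the field number for the deadlock timeout varies by release, so display the GENERAL group first to locate it; 45 is just the example value from the guidance above):

```
dbscontrol
display general            /* locate the deadlock timeout field number */
modify general <field> = 45
write                      /* commit the change to the GDO */
quit
```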

Case #2

From the initial release of Version 2, until DR 67117 was implemented (see below), this case was diagnosed as follows: when a 2631 error code is returned to the application/tools/users and there is a 3111 error at the same time in the stream error log file, the system may either be very congested or have some hardware problem, but it is not a deadlock problem. In the DBS software there is another, hard-coded transaction hang timer (4 minutes). The IFP/dispatcher uses this time interval to collect the transaction status (working, blocked, idle) from the AMPs. If a dispatcher task finds that a particular transaction which is supposed to be in progress on the AMPs has not received any CPU time on any AMP for a substantial amount of time (at least 8 minutes in round figures), and the transaction is not blocked by a lock on any AMP, a 2631/3111 error will occur. There may be nothing wrong with the transaction request; it usually means the transaction was starved for CPU processing time and has been aborted. If this is the case, simply retry the request. It should work fine if there are no hardware problems.

With DR 67117, the 3111 logic was altered to not abort a transaction if it is waiting for AWTs (AMP Worker Tasks). This can happen in a congested system where all the AWTs are already busy, meaning that none can be assigned to do the work of this transaction. With this DR, the philosophy was changed to taking the position that there is nothing wrong with such a transaction, so it is not to be aborted. If the condition of a transaction not getting CPU time persists, it may be that there is at least one transaction running which is "bad" in the sense that it is not releasing AWTs as quickly as it might. Data skewing can cause this. It may also be that there are simply more requests than the system can handle. These cases require manual investigation. There may also be some software bugs or hardware problems. Again, manual investigation is required.

DR 67117 was implemented at versions TDBMS_4.1.3.53, TDBMS_5.0.0.15, and TDBMS_5.1.0.0. DRs 82484 and 84107 are refinements to DR 67117, changing how message queues are examined for the presence of messages. They were shipped with TDBMS_5.1.0.0 and TDBMS_6.0.0.0.

DR 96783 recognized a situation that can occur on large and busy systems. There are requests that call for AWTs to become active on multiple AMPs to service a transaction. While these are delivered simultaneously, the PDE layer can reorder their delivery. Imagine a system so busy that every AMP has only one AWT available for new work. Two transactions generate requests for AWTs on every AMP at the same time. If all the AMPs get request one first, it runs and request two waits. If some AMPs get request one first and others get request two first, then neither transaction gets the full set of AWTs it requires and both must wait. Normally the situation clears as older work is finished, but sometimes it does not. Now DR 67117 works against us, as a transaction waiting for AWTs will not time out. DR 96783 allows DR 67117 / 82484 / 84107 to be nullified on systems where this is desirable, via the DBSControl flag DisableMsgWaitBlock. By default, DisableMsgWaitBlock is FALSE; this enables both DR 82484 and DR 84107, and transactions waiting for AWTs will not time out. If DisableMsgWaitBlock is TRUE, the global deadlock detector (acmgdead) treats those transactions as idle and marks them to abort with a 3111 error.

DO NOT change DisableMsgWaitBlock without consultation with the GSC. As should be obvious, this situation is extremely complex and one fix does not suit all.

Case #3

On a given AMP, if there are already 30 AMP threads waiting for locks to be granted and a 31st task requests a lock, the following error message will be issued:

     "2971: The AMP Lock table has overflowed".

The request fails and must then be re-submitted.

TTIP 141: Living Document on Locks and Deadlocks

Within the code, the constant SYSTRANTIMEOUT is set to the transaction or Dispatcher timeout value. This value is set to 240 seconds (4 minutes) and is not configurable.


===============================================================
Error Code - 2644
Error 2644 No more room in database
===============================================================

No more room in database

Error :

Error 2644 No more room in database. This error occurs when the Teradata database does not have enough space to execute the DDL or DML commands.

Solution :

To resolve this error, the Teradata DBA has to check the space available in the database and may need to increase it. Teradata manages three kinds of space that a command can consume. Let us see the three spaces in detail.

Permanent Space:

Permanent space is the space where Teradata objects reside: databases, users, tables, stored procedures, macros, etc. It is commonly called PERM space and is distributed across all AMPs. For example, if you have 400 GB of PERM space in your Teradata system and the limit is reached, you will get the error "No more room in database". Contact your Teradata DBA to increase the PERM space.
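To check and raise a PERM limit, a DBA can run something like the following sketch (the database name and size are illustrative):

```sql
-- how much PERM is defined vs. used, summed over all AMPs
SELECT DatabaseName, SUM(MaxPerm) AS DefinedPerm, SUM(CurrentPerm) AS UsedPerm
FROM DBC.DiskSpace
WHERE DatabaseName = 'mydb'
GROUP BY 1;

-- raise the limit (requires sufficient space in the parent database)
MODIFY DATABASE mydb AS PERM = 500E9 BYTES;
```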

SPOOL Space

Every query executed in a Teradata database uses SPOOL space: Teradata keeps intermediate and conditional query results in spool for further use. If you do not have enough spool space, you will get the related error "No more spool space" (error 2646, covered in the next section).

TEMP Space

This space is allocated for the global temporary tables created in the Teradata database, and it is maintained for the user until the session ends. PERM space is not allotted for these temporary tables. If your procedure or SQL query processes temporary tables that exceed the TEMP space, you will probably get this error "No more room in database".

Conclusion

If you have enough of these three kinds of space, you will not get this error. DBAs have to monitor and increase these spaces to avoid it.
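The three limits can be inspected together from the DBC.DiskSpace view; a sketch, where 'mydb' is a placeholder for the user or database from the error:

```sql
SELECT DatabaseName
      ,SUM(MaxPerm)  AS PermLimit,  SUM(CurrentPerm) AS PermUsed
      ,SUM(MaxSpool) AS SpoolLimit, SUM(PeakSpool)   AS SpoolPeak
      ,SUM(MaxTemp)  AS TempLimit,  SUM(PeakTemp)    AS TempPeak
FROM DBC.DiskSpace
WHERE DatabaseName = 'mydb'
GROUP BY 1;
```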


==========================================================
Error Code : 2646
Error: 2646 "No more Spool Space". In such cases, try using GROUP BY.
==========================================================

GROUP BY

1. It reads all the rows that are part of the GROUP BY
2. It removes duplicates on each AMP for the given set of values using the "buckets" concept
3. It hashes the unique values on each AMP
4. It then redistributes them to the appropriate AMPs
5. Once redistribution is completed, it
a. Sorts the data to group duplicates on each AMP
b. Removes all the duplicates on each AMP and sends the original/unique values

Hence it is better to go for:

•GROUP BY - when there are many duplicates
•DISTINCT - when there are few or no duplicates
•GROUP BY - when SPOOL space is exceeded
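The two forms being compared look like this (table and column names are illustrative):

```sql
-- DISTINCT: redistributes all rows first, then removes duplicates;
-- cheaper when there are few or no duplicates
SELECT DISTINCT cust_id FROM mydb.sales;

-- GROUP BY: removes duplicates AMP-locally before redistribution;
-- cheaper when there are many duplicates, and less likely to blow spool
SELECT cust_id FROM mydb.sales GROUP BY cust_id;
```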

Diagnostic Procedure: 2646 - no more spool space in xxxx
    
Procedure:

Running out of spool space is a different class of problem than running out of space in general (error 2644). The 2646 error is commonly called "blowing spool". This is NOT an indication that there's not enough spool space available on the disks. What it means is that a particular user (not session) has exceeded its spool limit on at least one AMP.

Note two important aspects of this last point. First, spool space is allocated to a user, not a session: if multiple people sign on using the same user ID, the last request may run out of spool space because of an excessive request from another session using the same user ID. Second, a user who exhausts spool on even a single AMP will encounter this error, which is frequently caused by an extremely skewed request.

The first thing to check is what the user's spool limit is:

SELECT SUM(MaxSpool)
FROM DBC.DiskSpace
WHERE DatabaseName = '<username>';

Where: <username> is the xxxx in the error message.

As a quick fix, you might want to simply increase the spool limit:

MODIFY USER <username> AS SPOOL = xxxxx BYTES;

If this is not feasible or does not resolve the problem, you should check to see if the request is extremely skewed with the following query:

SELECT  DataBaseName (TITLE 'UserName', FORMAT 'x(20)')
       ,SUM(PeakSpool) (TITLE 'Total Spool',
            FORMAT 'zzz,zzz,zzz,zz9')
       ,AVG(PeakSpool) (TITLE 'AMP//Average',
            FORMAT 'zzz,zzz,zz9')
       ,MIN(PeakSpool) (TITLE 'AMP//Minimum',
            FORMAT 'zzz,zzz,zz9')
       ,MAX(PeakSpool) (TITLE 'AMP//Maximum',
            FORMAT 'zzz,zzz,zzz,zz9')
       ,(100 - (AVG(PeakSpool) / NULLIFZERO(MAX(PeakSpool)) * 100))
           (FORMAT 'zzz,zzz,zz9', TITLE 'SkewFactor')
FROM DBC.DiskSpace
WHERE DataBaseName = '<username>'
GROUP BY 1
ORDER BY 1;

If the "SkewFactor" is high (the AMP average is far below the AMP maximum), try to identify the responsible query. If the volume of data in the underlying tables has recently changed, this may be an indication of "stale statistics"; you may want to recollect statistics to eliminate this possibility. If this doesn't do the trick, the Teradata Performance Management manual provides detailed information on how to analyze the execution plan for a given query.
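If stale statistics are suspected, recollecting them on the join and aggregation columns of the tables involved is a quick test; a sketch, with illustrative names (and assuming sale_id is an index on the table):

```sql
COLLECT STATISTICS ON mydb.sales COLUMN (cust_id);
COLLECT STATISTICS ON mydb.sales INDEX (sale_id);
```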

If you believe that the execution plan is in error, the GSC will require TSET (Teradata System Emulation Tool) data to pursue analysis.


===========================================================
Error Code -2673
Failure 2673 The source parcel length does not match data that was defined
===========================================================

If you are using VARTEXT format for the input file, all the fields in the USING clause
should be declared as VARCHAR, with the max allowable length. You may need to use explicit
FORMAT, CAST, etc. to control how Teradata will convert the strings to the appropriate data
types, if the default isn't suitable.

But as written, your character field would be loaded with an empty string and padded
with spaces. You can translate the empty string to NULL:

CASE WHEN :String = '' THEN NULL ELSE :String END

Or, if an empty string/spaces is a valid value, use COALESCE or CASE on the EXPORT
to set the string to some special "flag" value (like '?').

I am receiving below error when trying to run script.

*** Growing Buffer to 12337
*** Error: Import data size does not agree with byte length.
The cause may be:
1) IMPORT DATA vs. IMPORT REPORT
2) incorrect incoming data
3) import file has reached end-of-file.
*** Warning: Out of data.

Desc :

Your input file is not in "fastload format", so you can't use IMPORT DATA, and your USING clause needs to describe the input data type (which appears to be CHAR) rather than the target table data type.

Best bet may be to use:
IMPORT VARTEXT ... USING emp (VARCHAR(4)) ...

Another possible option:
IMPORT REPORT ... USING emp (CHAR(4)) ...

According to manual
-------------------
If DATA or INDICDATA is specified, BTEQ expects the incoming data to be
in FastLoad format, such as that produced by a BTEQ EXPORT DATA or BTEQ
EXPORT INDICDATA command, or by FastExport.

Use vartext with some delimiter see sample

*** BTEQ script start ***
.LOGON UID,PWD;
database DB;
.SET ECHOREQ off
.defaults
.set errorout stderr;
.import vartext ',' file = Test.dat
.quiet on
.repeat *
using emp (VarChar(4))
insert into dev_edw.test_import (emp) values (:emp);
quit;
*** BTEQ script end ***

*** Data file start ***
1001
1002
1003
1004
1005
*** Data file end ***

or

*** Data file start ***
1001,Teradata
1002,DB2
1003,Oracle
1004,SQL server
1005,Sybase
*** Data file end ***
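For the second, two-column data file, the script above needs a second field in the USING clause and in the INSERT; a sketch, assuming a hypothetical VARCHAR column named dbms in the test_import table:

```
.import vartext ',' file = Test.dat
.repeat *
using emp (VARCHAR(4)), dbms (VARCHAR(20))
insert into dev_edw.test_import (emp, dbms) values (:emp, :dbms);
```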


=======================================================
Error Code - 3055
3055 Logons are only enabled for user DBC
=======================================================

The error reported was

"3055:Logons are only enabled for user DBC".

This normally happens after some maintenance work, after an upgrade/migration of the database, or because the database administrator intentionally put the restriction in place.

To sort this issue out, below steps can be followed:

1. Login to the node
2. type command "cnsterm 6" and press enter
3. in supervisor command prompt, type "enable all logons" and press enter.
       
         Input Supervisor Command:
       
         > enable all logons
         enable all logons

         12/10/26 03:17:47 Logons enabled.

         Enable Logons of screen debug in ctl have been updated to All.


=================================================================
Error Code - 3523
Error 3523: An owner referenced by user does not have SELECT WITH GRANT OPTION
==================================================================

This error normally occurs when a user tries to select records from a view. The reason is that the user has SELECT access to the view, but the view database has no SELECT access to the table database. When granting the user SELECT access on the view, we also need to grant the view database SELECT access on the table WITH GRANT OPTION.

Take for example a database called CUSTOMER_TBLS that holds the customer tables, and another database called CUSTOMER_TBLSV which holds the views. Now we grant a user 'TDUSER' access to the view database 'CUSTOMER_TBLSV'. At this point TDUSER has access to CUSTOMER_TBLSV, but CUSTOMER_TBLSV does not have SELECT access to CUSTOMER_TBLS.

The access script should be as described below:

GRANT SELECT ON CUSTOMER_TBLS TO CUSTOMER_TBLSV WITH GRANT OPTION;

GRANT SELECT ON CUSTOMER_TBLSV TO TDUSER;

If we grant the access permissions this way, we can get rid of the error message "An owner referenced by user does not have SELECT WITH GRANT OPTION".

A user must hold an access WITH GRANT OPTION in order to grant that permission on database objects in Teradata. Always check access to the referenced objects first, and then grant access to the direct objects. The same scenario applies to stored procedures: if the stored procedures are in one database and the tables and views are in another, we first grant the stored-procedure database access to the view and table databases, and only then grant the user access to the stored-procedure database.
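The stored-procedure case follows the same pattern; a sketch, where CUSTOMER_PROCS is a hypothetical database holding the procedures:

```sql
-- the procedure database needs access to the objects its procedures reference
GRANT SELECT ON CUSTOMER_TBLSV TO CUSTOMER_PROCS WITH GRANT OPTION;
-- then the user is granted execute rights on the procedure database
GRANT EXECUTE PROCEDURE ON CUSTOMER_PROCS TO TDUSER;
```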


==================================================================
Error Code - 3524
Failure 3524 An owner referenced by user does not have CREATE TABLE WITH GRANT
OPTION access to database
==================================================================

To grant other users access to CREATE tables in a Teradata database, the DBA concerned must hold the CREATE TABLE permission WITH GRANT OPTION. The GRANT option is required to pass a permission on to other users.

In Teradata, if you are granting any access to users, you must yourself hold that access WITH GRANT OPTION; using that, you can grant the access to other developers or users in the Teradata system.

To Grant access to CREATE Table with Grant option, Please use the below Teradata SQL.

GRANT CREATE TABLE ON dbname TO dbuser WITH GRANT OPTION;



==================================================================
Error code - 3668
3668: A table listed in the Dictionary is not present
==================================================================

This error shows up when a table restore failed midway and a user then tries to select from the table.

LOCKING Failed.

3668: A table listed in the Dictionary is not present.

The solution is to drop the table and recreate/restore/copy it from the latest backup.


===============================================================
Error Code -3812
Failure 3812: The positional assignment list has too few values
===============================================================

The positional assignment list has too few values

Teradata Error :

Failure 3812: The positional assignment list has too few values

Solution :

The main reason for this error is not supplying a value for one of the parameters of a Teradata stored procedure. For example, consider a procedure with two parameters: one input parameter and one output parameter. If you fail to pass a value for the output parameter, you get this error message in Teradata. Let us see this in detail with a simple stored procedure.

CREATE PROCEDURE TEST.GETNAME
(
IN ID INTEGER
, OUT NAME VARCHAR(100)
)
BEGIN
INSERT INTO TEST.STOREID(SID)
VALUES(ID);
SELECT TOP 1 SName INTO Name
FROM TEST.STORENAME;
END


We can call this stored procedure with the following Teradata SQL

CALL TEST.GETNAME(1);

When you call the procedure using this command, you get the error message "The positional assignment list has too few values", because the output parameter has no value in the procedure call. It can be corrected as:

CALL TEST.GETNAME(1, NAME);


==========================================================
Error Code - 5602
Teradata code 5602 Error creating UDF/XSP/UDM/UDT/JAR: problem accessing
 the external file %VSTR .
===========================================================

Error Description :

Explanation: During the creation of a UDF/XSP/UDM/UDT/JAR, a problem was found in accessing the external files.

Generated By: CUFTsk

For Whom: UDF/UDT/XSP developer

Remedy: Check the files exist in the directory specified. Check the read permissions of the files. Check the spelling of the supplied directory and external file names.


========================================================
Error Code - 6706
Failed. 6706: The string contains an untranslatable character
========================================================

Problem Description: SELECT Failed. 6706: The string contains an untranslatable character

This error usually appears when a junk character is encountered while selecting from a column of a table using a function such as CAST(), COALESCE(), TRIM(), etc.

Example :

Select Prop_Name, cast(coalesce(Prop_DSC,'') as char(400)) from P_PROPERTY.PROPTYPE;

The problem seems to be with the data in the column "Prop_DSC" of the P_PROPERTY.PROPTYPE table. The column character set is LATIN.

Problem Solution: use the TRANSLATE_CHK function to determine the untranslatable column values that are causing this issue.

Use the proper "source_TO_target" value for the translation, e.g. LATIN_TO_UNICODE.

Check "SHOW TABLE" to verify whether a character set is specified for the column in the table definition, and choose the character set translation string accordingly, e.g. LATIN_TO_UNICODE, UNICODE_TO_LATIN, etc.

SELECT Prop_DSC FROM P_PROPERTY.PROPTYPE WHERE TRANSLATE_CHK(Prop_DSC USING LATIN_TO_UNICODE) <> 0;
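Once the offending rows are identified, one way to make the SELECT succeed is to do the translation explicitly, substituting an error character for untranslatable bytes; a sketch using TRANSLATE ... WITH ERROR on the same table:

```sql
SELECT Prop_Name
      ,CAST(COALESCE(TRANSLATE(Prop_DSC USING LATIN_TO_UNICODE WITH ERROR), '')
            AS CHAR(400))
FROM P_PROPERTY.PROPTYPE;
```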


====================================================
Error - 7535
7535: Operation not allowed: table header has invalid partitioning
====================================================

This can occur due to a restore/copy of the table to another system with different hardware or operating system. 

In this case, the partitioning expressions need to be regenerated and updated in the table headers of the table. 

Example:

SELECT Failed. 7535: Operation not allowed: MTD_DB.MTD_MSTR table header has invalid partitioning.

The solution is to revalidate the table header against the primary index of the table.

For that we can ALTER the TABLE with the REVALIDATE PRIMARY INDEX option.

The command is:

ALTER TABLE MTD_DB.MTD_MSTR REVALIDATE PRIMARY INDEX;


====================================================================
Error Code - 7539
Failed [7593 : HY000] Error limit of 10 is reached; errors logged in ET_SQL_ERR_LOG_01ga where ETC_dbql_qid = 163833313195459950.
=====================================================================

In the Informatica tool, open the Workflow Manager.

Go to the session properties.

Under the error-handling settings there is a "Stop on errors" option.

There you can increase the error limit, for example to 100.


============================================================
Error  Code - 8024
Error 8024 All virtual circuits are currently in use.
=============================================================

Remedy :

Error 8024 is from the Teradata Database Gateway. It means that you have reached the session limit 
of the Teradata Database. By default, a Teradata Database supports 120 sessions per PE.

Explanation: 

This error is sent in response to an assign request when either the gateway capacity or the capacity of all PEs configured with the same logical host id has been reached.

Generated By: Gateway

For Whom: End User

Remedy: Wait some amount of time and retry the assign request.  The assign will succeed when another session logs off.

Second way :

(1) Is it possible to add a new PE to the demo Windows 2000 version of Teradata?

No. The demo version is limited to (IIRC) 10 sessions, but you can increase that limit up to the maximum for a single PE: 120.

(2) How can I clear hanging circuits?

See the demo version Help: Decreasing Gateway Timeout.

(3) The demo version seems to have the capacity to store 1 GB worth of data; how can I increase this?

You can't; it's a demo.

Third Way :

You could clean up the hung sessions by aborting all the sessions in either of the methods mentioned below.

1) From the Teradata prompt, start cnstool and, from the supervisor window, issue the command

"abort session *.* logoff"   (This kills all sessions to the database.)

(You could alternately issue the same command from the Database Window (xdbw) --> supervisor window.)

2) Again from the Teradata prompt, start GtwGlobal and set the host to the number of your host (most likely 1):

se ho 1        (i.e. SELECT HOST 1)

You can kill the sessions either by quoting the session number or the user, whichever is convenient:

ki us          (KILL USER)
ki se          (KILL SESSION)

The first method kills the sessions instantaneously; the second might have some latency.
To increase the number of sessions, we need to add additional PEs (Parsing Engines).
(For more information on adding PEs, refer to the manual on the config/reconfig utilities.)

Teradata allows a maximum of 15 load/export jobs as far as I know. I faced the same problem and fixed it by setting the MaxLoadTasks variable.

MaxLoadTasks - Maximum combined number of FastLoad, MultiLoad, and FastExport tasks. 

The valid range is 0 .. 15. The default is 5.

All instructions are valid for Win2000.
From Teradata Database -> Database Window, issue the command:

Start DBSCONTROL

this will open another window then issue commands below:

MODIFY GENERAL 11 = <n>
WRITE

This will set MaxLoadTasks to the number you gave.


========================================================
Error Code - 8239
Restore/Copy from TD12 to TD13.10.4 / 14.0.1 / 14.10 fails with error 8239 
when archive contains UDT
========================================================

Condition/Symptom:
   
For tables that contain User Defined Types (UDTs); restore/copy from a TD12 archive to a TD 13.10.4 / 14.0.1 / 14.10 system may fail with error 8239 although the UDT exists on the target system.

Restore/Copy detected invalid UDT condition: <UDT_name> does not exist.

The problem is due to changes in the internal structure of the UDT name between TD12 and TD13.10/TD14: during the restore/copy operation, the copied UDT name fails comparison against the existing type name.

Sample case below illustrates this condition:
CREATE TYPE type1 AS DECIMAL(10,2) FINAL;
CREATE TABLE t1
       (a1  INT
       ,a2  type1);  /* table contains a UDT column */
INSERT INTO t1 VALUES (1, 1.0);

/* Archive from TD12  */

08/27/2012 03:49:28  DBS LANGUAGE Standard
08/27/2012 03:49:28  DBS RELEASE 12.00.03.27
08/27/2012 03:49:28  DBS VERSION 12.00.03.27
08/27/2012 03:49:28  
08/27/2012 03:49:28  STATEMENT COMPLETED
08/27/2012 03:49:28 
08/27/2012 03:49:28  archive data tables(db1), release lock, file = db1;
08/27/2012 03:49:30  UTILITY EVENT NUMBER  - 1594
08/27/2012 03:49:31  LOGGED ON    4 SESSIONS
08/27/2012 03:49:31  
08/27/2012 03:49:31  ARCHIVING DATABASE "db1"
08/27/2012 03:49:44  TABLE "t1" - 488 BYTES, 1 ROWS ARCHIVED
08/27/2012 03:49:44  "db1" - LOCK RELEASED
08/27/2012 03:49:44  DUMP COMPLETED
08/27/2012 03:49:44  STATEMENT COMPLETED

/* UDT 'type1' is already installed on TD13 target system */

help database sysudtlib;

 *** Help information returned. 154 rows.
 *** Total elapsed time was 1 second.

Table/View/Macro name          Kind Comment
------------------------------ ---- ---------------------------------------
<..>
type1                          U    ?


/* Copy to TD13.10 from TD12 archive failed */

08/27/2012 04:51:44  DBS LANGUAGE SUPPORT MODE Standard
08/27/2012 04:51:44  DBS RELEASE 13.10.04.08
08/27/2012 04:51:44  DBS VERSION 13.10.04.04
08/27/2012 04:51:44  
08/27/2012 04:51:44  STATEMENT COMPLETED
08/27/2012 04:51:44 
08/27/2012 04:51:44  copy data tables(db1.t1), release lock, file =ARCHIVE;
08/27/2012 04:51:44  UTILITY EVENT NUMBER  - 282
08/27/2012 04:51:45  LOGGED ON    4 SESSIONS
08/27/2012 04:51:46  "db1"."t1" CREATED
08/27/2012 04:51:47  STARTING TO COPY TABLE "db1"."t1"
08/27/2012 04:51:48  DICTIONARY COPY COMPLETED
08/27/2012 04:51:48  *** Failure 8239:Restore/Copy detected invalid UDT
                     condition: TYPE1 does not exist.
08/27/2012 04:51:48  LOGGED OFF   7 SESSIONS
08/27/2012 04:51:48  ARCMAIN TERMINATED WITH SEVERITY 12

Probable Cause:
Archive and UDT

Workaround:
In pre-migration follow the below workaround:

Exclude the tables with UDTs from the archive operation. The following query will identify any user tables with UDTs:
 
.width 256
select  (trim(c.databasename) || '.' || trim(c.tablename)) (title 'Tables With UDTs')
       ,trim(c.columnName) (title 'Column Name')
       ,trim(c.columnUDTName) (title 'UDT Name')
from    dbc.columns c inner join dbc.udtinfo u
   on   c.columnUDTName = u.typeName
where   c.columnUDTName is not null
  and   databasename not in ('sysudtlib')
order by databasename ,tablename ,columnUDTName;

To migrate the tables to a TD13 / TD14 system that does not have the fix, a manual approach can be taken.
1) Create the table on the target system to match the source
2) Export data to file from source system (example: fastexport)
3) Import data from file to target system (example: mload)


=============================================================
Error Code - 10065
10065 WSAEHOSTUNREACH: The Teradata server can't currently be reached over this network
==============================================================

Error Message:

10065 WSAEHOSTUNREACH: The Teradata server can't currently be reached over this network

Solution :

I actually got frustrated when I tried to connect to Teradata through Teradata SQL Assistant. Later I found that the Teradata service was not running. We get this error when the service has been stopped for any reason, and starting the Teradata service fixes it. If you get an error while trying to start the Teradata service, check the event log for details.

The event log is available in Windows through the Control Panel. If it's not a Windows operating system, contact a friend or colleague with good exposure to that operating system to assist you.

The Teradata service is likely to stop for the following reasons:

When the system is down, we cannot expect the Teradata service to still be running.
When there is a disk crash, the service will stop.
When any disaster occurs.

Also make sure your Windows firewall is accepting the port through which you connect to the Teradata server.

Teradata SQL Assistant - issue

I have faced the same issue in SQL Assistant once or twice. I just reinstalled Teradata SQL Assistant and escaped from this error.
