Thursday, April 6, 2017

Why did DIP or DIPPATCH not complete, or hang?


Problem

Why did DIP or DIPPATCH not complete, or hang?


Environment/Conditions/Configuration

TDBMS 14.10.07.03


Cause

Blocking / leftover HUT locks


Solution

Clear leftover HUT locks:
               1) Log on to the database as DBC
               2) release lock DATABASE_ABC, override;


Additional Information

How to check for locks:

## From lokdisp utility
smp0018#
cnsterm 6
Input Supervisor Command:
> start lokdisp
Please enter your selection from the list:
> blocker
Which amp(s) do you want to request on (S=Sampling/A=all/C=cancel/Q=quit):
> all
AMP 182 REPORTS 1 LOCK ENTRIES
Number of Blocked Trans displayed : 1
=========================================
Blocked Trans : 30709 0011AB40
Number of blockers displays : 1
Number of blockers exists : 1
Blocker Trans : 7800 00000001
lock mode : Exclusive
lock status : Granted
lock objectType : Database
lock objectID : DBID : 000003ED
: DBNAME : SYSADMIN
: TableID : 00000000,0000
: TableName : ALL
: RowHashS : 00000000
: RowHashE : 00000000

END OF OUTPUT >
##Showlocks
smp0018#
cnsterm 6
Input Supervisor Command:
> start showlocks
smp0018#
cnsterm 1
Started 'showlocks' in window 1

DATABASE_ABC
USER DBC MODE Excl AMP All Amps

DATABASE_DEF
USER DBC MODE Excl AMP All Amps

DATABASE_XYZ
USER DBC MODE Excl AMP All Amps


ShowLocks Processing Complete

Note: The showlocks utility displays at most 6 HUT-locked databases per screen. If you see 6 locked databases on the screen, there are likely more. Perform the release lock command, run showlocks again, and repeat until no more locks remain.
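For the three locked databases shown in the showlocks output above, the cleanup would look like the following sketch (run it logged on as DBC, and substitute your own database names):

```sql
/* Sketch only: the database names are the examples from the showlocks output above. */
RELEASE LOCK DATABASE_ABC, OVERRIDE;
RELEASE LOCK DATABASE_DEF, OVERRIDE;
RELEASE LOCK DATABASE_XYZ, OVERRIDE;

/* Then rerun showlocks and repeat until no HUT locks remain. */
```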

What is 7593 error and how to avoid it?



Problem :

What is 7593 error and how to avoid it?

# Below are error text samples from DBQL.

Error limit of 1 is reached; errors logged in ABCD_ERR where ETC_dbql_qid = 163829702788291360.

Error limit of 1 is reached; errors logged in ABCD_ERR where ETC_dbql_qid = 163829702788467780.

Environment/Conditions/Configuration

TDBMS 14.10.06.05

Cause

Error 7593 occurs when the error logging limit has been reached on an INSERT-SELECT or MERGE-INTO query.

Solution

To avoid the error:

1) Increase the error limit.

> For example: "LOGGING ALL ERRORS WITH LIMIT OF <larger number>".

2) Review the query and ensure that no error rows (or none beyond the limit) will be produced by the
MERGE-INTO statement.
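As a hedged illustration (the table and column names below are hypothetical, not from any report, and an error table is assumed to already exist via CREATE ERROR TABLE), a MERGE-INTO with a raised error limit might look like:

```sql
/* Hypothetical tables. Raising the limit means error 7593 is not raised
   until 1000 error rows have been logged to the associated error table. */
MERGE INTO sandbox_db.tgt_tbl AS t
USING sandbox_db.src_tbl AS s
  ON t.pk_col = s.pk_col
WHEN MATCHED THEN
  UPDATE SET val_col = s.val_col
WHEN NOT MATCHED THEN
  INSERT (pk_col, val_col) VALUES (s.pk_col, s.val_col)
LOGGING ALL ERRORS WITH LIMIT OF 1000;
```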

Unable to log in as any user except DBC


Problem :

Unable to log in as any user except DBC, even though "pdestate -a" displayed "Logons are enabled".

Environment/Conditions/Configuration
ALL TDBMS versions

Cause

When we checked the DBC.LogonRules table, there was a default rule for LogicalHostID 1024 with a LogonStatus of 'R' (Rejected).

This means that no user will be able to login except DBC.

sel UserName
,LogicalHostID
,LogonStatus
from DBC.LogonRules
order by 1,2;

*** Query completed. 6 rows found. 3 columns returned.
*** Total elapsed time was 1 second.

UserName        LogicalHostID  LogonStatus
Crashdumps                  0  G
dbqm                        0  G
Default                  1024  R
SystemFe                    0  G
Sys_Calendar                0  G
tdwm                        0  G

Solution

The logon status for the Default user in the table "DBC.LogonRules" should be set to 'G' (Granted).
Issue the following GRANT statement to set the status to 'G':

grant logon on all as default;
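Putting the fix and its verification together (a sketch; run as DBC):

```sql
/* Grant logons for the default rule, then confirm the Default row now shows 'G'. */
GRANT LOGON ON ALL AS DEFAULT;

SELECT UserName, LogicalHostID, LogonStatus
FROM DBC.LogonRules
ORDER BY 1, 2;
```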

Is it safe to drop old tables in DBC?



Question :

Is it safe to drop old tables in DBC?

Environment/Conditions/Configuration

TDBMS 15.10.01.04b

Answer

Data can be deleted, but do not drop the tables. They are required for restores and migration purposes.

Additional Information
Why are there old versions of dictionary tables ( _V2R6, _TD12, _TD13, _TD14 )?

Q1) Why are there old versions of dictionary tables in DBC?

A1) The old versions of the dictionary tables are there for ARC data migrations. ARC uses these table headers when performing a migration from a previous release of TD. There are two scenarios for these tables:

1) Major upgrades
During major upgrades, the existing dictionary tables have their names changed by tableinit during the first startup on the new major DBS release. The dictionary tables that get recreated by tableinit will have their table names appended with the previous major release. So an upgrade from V2R6 to TD12 would append "_V2R6" to some dictionary table names and then recreate the base dictionary table for that current release. Likewise, the same happens from TD12 to TD13; the tables get appended with "_TD12". Only the table name is being changed by tableinit. All the rows will remain in the old table until the cleanup script is run after the upgrade. When the cleanup script is run, it will only remove the rows from the tables. In this way, the table headers remain on the system.

2) Sysinit
During tableinit after a sysinit, the old versions of the dictionary tables are created as table headers only. For instance, if the system is sysinited at TD14; then tableinit will create the data dictionary tables as well as the required copies of the _TD12 and _TD13 dictionary tables.


Q2) Should the old versions be removed from the system?

A2) No. These tables are required for ARC migrations. If these tables are removed from the system, checktable will report errors and ARC migrations from previous releases will fail!

NOTE: These tables cannot be regenerated without support from the GSC and possibly Engineering. Do not drop the tables without consulting with the GSC DBS Team.


Q3) When will the old versions be removed from the system?

A3) The system will maintain the old dictionary tables for the previous 2 releases. During a major upgrade to TD14, the _V2R6 tables will be removed by the 0010 preconversion script. This will leave the _TD12 tables in place, and the tableinit during TD14 startup will create the _TD13 tables. In the case of these tables remaining after a major
upgrade, the rows will be deleted by the cleanup script that should be run some time after the major upgrade is complete. If rows are seen in these tables, then the cleanup script should be run to delete these rows. This will ensure that the table headers remain on the system, as is required.


Q4) What happens if the old versions are not properly removed by the major upgrade scripts?

A4) If the oldest version is not removed by the 0010 script during the preconversion step, then there will be checktable and scandisk errors seen on the next major release. These errors are innocuous and will NOT affect the system.

What does the post_upgrade_cleanup script do?

1) Deletes the contents of the tables that were renamed by tableinit during the upgrade.
In other words, if the upgrade was from TD13 to TD14, all of the dictionary tables named {tablename}_TD13 would have their contents deleted.

The tables are not dropped as these tables and their definition are required for restores/migration purposes. The internal DBS code expects these tables to have a well defined and unique Table ID which is why these tables should never be dropped.
The appropriate _TDxx tables will be dropped during the next major upgrade during the 0010.a.pre_upg.pl processing.

2) Deletes field 8 of the table headers created during table conversion.
On an upgrade when the dictionary tables are converted the conversion has to be restartable.
This is achieved by adding a special restart Field 8 into the table header of these dictionary tables.
After the major upgrade is completed, this Field 8 causes checktable errors to be generated.
The post_upgrade_cleanup script calls a program called cnvdelfld8, which removes Field 8 from the table headers.

3) When will the old {dictionary_tablename}_TDxx tables be dropped from the system?
The appropriate _TDxx tables will be dropped during the next major upgrade during the 0010.a.pre_upg.pl processing. Since we support migrations from the previous 2 major TD releases there will always be two sets of _TDxx tables on the system at any one time.

How to resolve Reserved Word conflicts?



Question :

The script generates a separate report file called reserved words.rpt. How to resolve Reserved Word conflicts?

Environment/Conditions/Configuration :

TDBMS 15.00.04.02

Answer :

To fix reserved words found in tables, either rename the word or enclose the offending name in double quotes ("word" ).

Additional Information :

dbql_out.rpt: These tables will be dropped in the upgrade. If you wish to retain the data, move it to a new database and rename the table, or simply delete the data in the tables.

The following query can be run on the 15.10 environment to show which NEW reserved words were added in 15.10:

SELECT * FROM TABLE (SYSLIB.SQLRestrictedWords_TBF()) AS t1 WHERE release_introduced > '15.00' AND category = 'R' ;
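To illustrate the fix itself (the identifier and object names below are hypothetical, not taken from any report), a conflicting name can be double-quoted wherever it appears:

```sql
/* Suppose a column was named with a word that became reserved in 15.10.
   Double-quoting makes it a quoted identifier that no longer conflicts: */
SELECT "hypothetical_reserved_word"
FROM mydb.example_tbl
WHERE "hypothetical_reserved_word" IS NOT NULL;
```

Renaming the object instead avoids having to quote it in every statement.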

How to clear out data from DBQL tables



Objective :

The objective is to be able to clear out DBQL (Database Query Logging) tables quickly in order to minimize locking and blocking.

Environment/Conditions/Configuration
All versions of Teradata

Procedure

The quickest and least invasive way to clear out DBQL data is as follows. See the additional information section below about more comprehensive methods and additional options.

1) Optional: You can turn off DBQL logging temporarily while doing this procedure then turn it back on when you are done

2) Create an empty table in a database other than DBC and run an insert/select of all of the DBQL data from the original table into the new empty table

3) Run a delete all on the original DBQL table (example: delete from dbc.dbqlogtbl;). An unconditional delete all uses fast-path processing, which bypasses the transient journal and lets the delete complete in a matter of seconds.

4) Reenable DBQL logging if it was turned off in step 1.
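Steps 1 through 4 above might look like the following sketch. The archive database and copy-table names are placeholders, and the BEGIN/END QUERY LOGGING rule shown is a simple example; match whatever logging rules your site actually uses.

```sql
/* 1) Optional: temporarily stop DBQL logging (example rule; adjust to your site). */
END QUERY LOGGING ON ALL;

/* 2) Copy the data out; archive_db and the copy-table name are placeholders. */
CREATE TABLE archive_db.dbqlogtbl_copy AS DBC.DBQLogTbl WITH NO DATA;
INSERT INTO archive_db.dbqlogtbl_copy SELECT * FROM DBC.DBQLogTbl;

/* 3) Unconditional delete: fast-path, no transient journal, completes in seconds. */
DELETE FROM DBC.DBQLogTbl;

/* 4) Re-enable logging if it was stopped in step 1. */
BEGIN QUERY LOGGING ON ALL;
```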

Special Considerations
It is best to avoid doing partial deletes from the original DBQL tables (such as deleting based on a date range), because this can lock the table for long periods and conflict with the system trying to write to the DBQL tables at the same time. Deleting by date range from the original table can potentially lock the DBQL table for hours and cause other unintended side effects.

Additional Information

More comprehensive methods are available in the Teradata Database Administration Manuals for each Teradata Version.

How to bring VMS and database down



Product Categories :

HW Node/Server, Server Management, Operating System, Teradata Database

Problem

Customer needs support to stop the DBS due to a host machine system relocation.

Environment/Conditions/Configuration

TDBMS 14.10.04.01


Solution

Here are the steps:
1. Set maintenance window
2. tpareset -x move system
3. psh init 0
4. For cmic, vms, tms type nodes you can just right click in smweb and select power off.
5. Arrays can just be powered off on site.

After move, power up equipment. Once all nodes are up, the database should come up.

Verify: psh pdestate -a

Failure - 3523 The user does not have EXECUTE PROCEDURE access to PDCRAdmin.PDCR_PPI_Maintenance_All



Objective :

DIPPATCH failure during upgrade:

*** Failure 3523 The user does not have EXECUTE PROCEDURE access to PDCRAdmin.PDCR_PPI_Maintenance_All

Environment/Conditions/Configuration
TDBMS 15.10.02.05

Procedure
1) grant execute procedure on PDCRAdmin.PDCR_PPI_Maintenance_All to dbc;

2) Rerun DIPPATCH

Deadlock issue found when 'SET QUERY_BAND' submitted



Problem :

SET QUERY_BAND can cause a deadlock, since it needs to update SessionTbl.

Environment/Conditions/Configuration
All version

Solution

Locking SessionTbl can be avoided with the VOLATILE option. With this change, the deadlock is avoided:

SET QUERY_BAND=xxxx FOR SESSION VOLATILE;

If it is difficult to apply this change, please consult with your CSR for system level setting.

Error Code - 5603 Errors encountered in compiling UDF/XSP/UDM/UDT/JAR.



Question: -

What does the error message: “5603: Errors encountered in compiling UDF/XSP/UDM/UDT/JAR.” mean?

Answer:

What does the error mean?

There was an internal error that resulted in a compiler error. Compiler errors usually occur due to errors in the source code of the object being compiled.

Why does the error occur?
Compilation error refers to a state when a compiler fails to compile a piece of computer program source code, either due to errors in the code, or, more unusually, due to errors in the compiler itself.

What should be done when the error is received?
Please contact your support representative and provide the following:

The offending SQL
If the SQL is not available, you can try looking it up in DBQLogTbl using the SessionID and/or the UserName. For example, to get all the queries sent with session number [xxxxx], run the following query:

SELECT QueryText FROM DBC.DBQLogTbl WHERE SessionID='[xxxxx]';

If the SessionID is not available, you can find it by querying DBC.SessionInfo.
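If only the user is known, a lookup along these lines can recover the session number first (the user name below is a placeholder):

```sql
/* Placeholder user name; returns the current sessions for that user. */
SELECT SessionNo, UserName, LogonDate, LogonTime
FROM DBC.SessionInfo
WHERE UserName = 'SOME_USER'
ORDER BY LogonDate, LogonTime;
```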

The stored procedure definition (external stored procedure files may reside on the server or the client)

TSET output for SELECT * from all objects referenced in the stored procedure

Any underlying stored procedures and Data Definition Language (DDL)

Compilation output