Problem Description:

Teradata TPT full load jobs fail with the following exception from the Azure end when Infoworks is running on the Azure platform.

[INFO] 2019-08-15 02:57:02,111 [pool-4-thread-1] infoworks.discovery.utils.TPTScriptGenerator:486 :: Error Output:

FSDataOutputStream#close error: The block list may not contain more than 50,000 blocks. Please see the cause for further information.





Root cause:

This exception comes from the Azure end, not from Infoworks. During TPT ingestion, the Teradata Parallel Transporter utility extracts the data from Teradata and writes it to a CSV file. When TPT tries to upload this CSV file, which is larger than 200 GB, to Azure Blob storage, the upload fails with the above Azure exception. This is a limitation on the Azure side: a single file larger than roughly 200 GB cannot be uploaded as one block blob.

To overcome this, split the file into chunks or increase the number of writers so that the size of a single file does not exceed 200 GB.
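As an illustration of the splitting approach, a large export file can be broken into fixed-size chunks with the standard GNU `split` utility. The file name, dummy data, and chunk size below are placeholders for demonstration; a real TPT export would use something like `-b 50G`:

```shell
# Create a small dummy export file to demonstrate the technique
# ("tpt_export.csv" is a placeholder name, 3 MB of zero bytes).
head -c 3000000 /dev/zero > tpt_export.csv

# Split into 1 MB chunks with numeric suffixes; for a real export,
# use a chunk size safely below the ~200 GB block blob ceiling.
split -b 1M -d --additional-suffix=.csv tpt_export.csv tpt_export_part_

# Three chunk files are produced: tpt_export_part_00.csv .. _02.csv
ls tpt_export_part_*.csv
```

Each chunk can then be uploaded as its own blob, keeping every upload well under the block count limit.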

A block blob can include a maximum of 50,000 committed blocks. Blocks must be committed to form the content of the blob. A blob can have a maximum of 100,000 uncommitted blocks at any given time; if this maximum is exceeded, the service returns status code 409 (RequestEntityTooLargeBlockCountExceedsLimit).
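The ~200 GB figure follows directly from the 50,000-block limit. Assuming the storage driver uploads blocks of 4 MiB each (a common default; the actual block size depends on the driver configuration), the largest possible block blob works out as:

```python
MAX_BLOCKS = 50_000       # Azure block blob limit on committed blocks
BLOCK_SIZE_MIB = 4        # assumed driver upload block size (configurable)

max_blob_mib = MAX_BLOCKS * BLOCK_SIZE_MIB            # 200,000 MiB
max_blob_bytes = max_blob_mib * 1024**2
max_blob_gb = max_blob_bytes / 1000**3                # decimal gigabytes

print(f"Maximum blob size: {max_blob_gb:.1f} GB")     # ~209.7 GB
```

Any single file approaching this ceiling exhausts the block list, which is exactly the failure reported in the log above.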


Resolution:

Perform the following steps to resolve the issue.

If the size of the file generated by TPT is more than 200 GB, the file needs to be split into chunks. To achieve this, increase the number of TPT writers at the table level from the default value of 5 to 15, then run the TPT job again. This should resolve the issue.
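To see why raising the writer count helps, the extracted data is spread across the writer files, so the per-file size drops proportionally. A sketch of the arithmetic, using a hypothetical 1,500 GB table:

```python
# Hypothetical table extracted by TPT (size in GB is an example only).
table_size_gb = 1500
BLOB_LIMIT_GB = 200          # approximate single-blob ceiling on Azure

for writers in (5, 15):      # default writer count vs. increased count
    per_file_gb = table_size_gb / writers
    ok = per_file_gb < BLOB_LIMIT_GB
    print(f"{writers} writers -> {per_file_gb:.0f} GB per file "
          f"({'OK' if ok else 'exceeds limit'})")
```

With the default 5 writers each file would be 300 GB and fail the upload; with 15 writers each file is 100 GB, safely under the limit.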

Reference Links

Applicable Infoworks EDO2 Versions