The data loading step is used to populate the database.
This page lists the properties to define for setting up the data loading step and includes examples for loading various data types.
| Property name | Description | Values | Default value | Mandatory |
| --- | --- | --- | --- | --- |
| input.folder | The folder containing the data to load | String: absolute or relative folder path | None | Yes |
| database | Identifier of the database, matching the database configuration step | String | None | Yes |
| enable | Whether the step is enabled | true/false | true | No |
| encoding | Data encoding | String | Value of the property defined in the global step | No |
| multithread | Number of threads to use for parallel steps; one thread per table to load | Integer | Value of the property defined in the global step | No |
| zipContentWildcard | Comma-separated list of wildcards used to filter the content of data archive files | Comma-separated list | * | No |
| nbFileThreads | Number of files to process in parallel within a single table thread. For Oracle, this triggers the PARALLEL=true SQL*Loader option, which is incompatible with pre-existing constraints on the target table. | Integer | 1 | No |
| convertGraphicDataToFullWidth | Whether to convert half-width characters to full-width for Graphic-type columns in the database; when true, the conversion is applied during loading | true/false | false | No |
| temp.folder | Directory path for temporary files. If not specified, Data Migrator automatically creates and uses a temporary folder within the reverse project directory. | Absolute folder path | None | No |
| keepDebugFiles | Keep temporary files at the end of the migration. Not available for POSTGRESQL, since a pipe mechanism is used; use ebcdicCsvDumpFolder instead. | true/false | false | No |
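A minimal sketch of how these general properties combine in a loading section; the folder, connection name, and wildcard values below are illustrative, not prescriptive:

```ini
[CSV2DB - Loading1]
# Mandatory: data location and the database configuration to use
input.folder=datas
database=cnxOracle
# Only load .dat and .csv entries from data archive files (illustrative wildcards)
zipContentWildcard=*.dat,*.csv
# One thread per table, each processing 4 files in parallel
multithread=4
nbFileThreads=4
```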
| Property name | Description | Values | Default value | Mandatory |
| --- | --- | --- | --- | --- |
| oracle.disableSqlldr | Whether to disable the SQL*Loader step, so that only CSV files are generated | true/false | false | No |
| oracle.ctlOptionClauseSqlldr | Command-line parameters to set in the SQL*Loader control file (refer to the SQL*Loader documentation for details). Provide each parameter and its value in key=value format; separate multiple parameters with commas, e.g. key1=value1,key2=value2 | Comma-separated key=value list | None | No |
| date.format | Date pattern to add to the Date column (DB2) definition | String | None | No |
| time.format | Time pattern to add to the Time column (DB2) definition | String | None | No |
| timestamp.format | Timestamp pattern to add to the Timestamp column (DB2) definition | String | None | No |
| error.maxNb | Maximum number of tolerated errors before the data migration is canceled | Integer | 50 | No |
| spannedMode | When active, tables containing BLOBs/CLOBs are considered to have been unloaded with the SPANNED YES option. In this mode, VARCHAR, BLOB and CLOB payloads have a dynamic length carried by the 2 (resp. 4) bytes preceding the data. Each BLOB is exported to a separate .dat file referenced in the ctl command; CLOBs are exported to a single file, using the configured CSV delimiter, and referenced in the ctl command. | true/false | false | No |
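For example, to pass options to SQL*Loader through the control file, the key=value pairs are comma-separated; the ERRORS and ROWS values below are illustrative:

```ini
# Keep SQL*Loader enabled; tolerate up to 100 errors and commit every 5000 rows
oracle.disableSqlldr=false
oracle.ctlOptionClauseSqlldr=ERRORS=100,ROWS=5000
```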
| Property name | Description | Values | Default value | Mandatory |
| --- | --- | --- | --- | --- |
| mssqlInsertMode | Mode used to load data. By default, Data Migrator uses a BULK INSERT statement. The INSERT INTO statement is intended for POC projects, and the BCP utility supports more database types. To use this utility, see how to install the BCP utility. | BULK_INSERT, BCP_UTILITY, INSERT_INTO | BULK_INSERT | No |
| mssql_server_temp.folder | [DOCKER] Path of temp.folder on the server where SQL Server is running | String | temp.folder | No |
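A sketch of a loading section targeting a Dockerized SQL Server; the cnxMssql connection name and the container path are hypothetical:

```ini
[CSV2DB - Loading1]
database=cnxMssql
input.folder=datas
# Use the BCP utility instead of the default BULK INSERT statement
mssqlInsertMode=BCP_UTILITY
# temp.folder as seen by Data Migrator, and the same folder as seen
# from inside the SQL Server container (hypothetical path)
temp.folder=dataTemp
mssql_server_temp.folder=/var/opt/mssql/dataTemp
```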
| Property name | Description | Values | Default value | Mandatory |
| --- | --- | --- | --- | --- |
| csvSeparator | Separator between fields | | , (comma) | No |
| csvQuote | Quote character | | ' (single quote) | No |
| csvNull | Replacement character for empty values | | None | No |
| csvOneLiner | Determines how Data Migrator handles multi-line data in CSV files when importing into PostgreSQL or Oracle databases | true/false | false | No |
| csvWithHeader | Whether the data files contain a header line | true/false | false | No |
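For instance, to load semicolon-separated files that use double quotes, contain a header line, and mark empty values with NULL (all values illustrative):

```ini
csvSeparator=;
csvQuote="
csvNull=NULL
csvWithHeader=true
```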
| Property name | Description | Values | Default value | Mandatory |
| --- | --- | --- | --- | --- |
| ebcdicOid | Indicates the presence of an OID before each record | true/false | true | No |
| ebcdicOidLength | Length of the OID before each record | Integer | 2 | No |
| ebcdicOidForLength | Works with ebcdicOid=true and ebcdicOidLength != 0: the OID bytes are used to compute the length of the current record instead of the length deduced from SQLModel.json | true/false | false | No |
| ebcdicZonedMode | Strategy to apply when handling zoned-decimal fields | EBCDIC_STRICT, EBCDIC_MODIFIED, AS400 | EBCDIC_STRICT | No |
| ebcdicCodePointShift | Code point shift to apply (more information in the FAQ). Not needed with CUSTOM930: that charset already includes the code point shift mechanism on its own. | Integer | 384 | No |
| ebcdicVarcharAsChar | Treat all VARCHAR columns as CHAR while migrating data | true/false | false | No |
| ebcdicByteReplacement | Byte replacement list, given as pairs of to-replace and replacing code points. Before applying the code point shift offset (if any), each byte matching a to-replace code point in the considered encoding is converted to the corresponding replacing code point. Example: 30:94\|XX:YY; with CP037, this replaces (IRS) by [;]. | empty | None | No |
| ebcdicNullByteIndicatorPositionMode | Where to find the Null Byte Indicator (NBI) in the provided data files for nullable columns | ALWAYS_LEFT, RIGHT_FOR_DATES, ALWAYS_RIGHT, IGNORED | RIGHT_FOR_DATES | No |
| ebcdicEmptyVarcharMode | Behavior to apply when dealing with VARCHAR columns of size 0 | LOW_VALUE, NULL_VALUE, SINGLE_SPACE, FILL_SPACE | LOW_VALUE | No |
| ebcdicVarcharKeepPadding | Read the full VARCHAR definition length, if provided, instead of the useful length given by the first 2 bytes, so that padding characters are kept. Cannot be used with ebcdicFilesWithVarcharInVB set to true. | true/false | false | No |
| ebcdicFilesWithVarcharInVB | For VARCHAR columns, the 2-byte length prefix is used to read the bytes instead of the column size definition from SQLModel.json | true/false | false | No |
| ignoreDeletedRecords | Set to true to pre-read each record and test whether its bytes are all blanks in the EBCDIC encoding; if so, the tool ignores the record and continues to the next one. Useful when the REUSEDLT option on AS/400 is set to YES to reuse deleted records. | true/false | false | No |
| ebcdicCsvDumpFolder | Absolute or relative path of the folder storing the CSV data extracted from the EBCDIC files. More information is on the debug page. | String | None | No |
| onyExtractCsv | Instructs Data Migrator to only extract CSV from the EBCDIC files: when true, CSV files are generated but no data is migrated. This property depends on ebcdicCsvDumpFolder for the path; if that property is empty, the CSV and cutting log files are generated under the data/extract folder. | true/false | false | No |
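As an illustration of the OID and byte-replacement properties together (the replacement pair reuses the 30:94 example from the description above; all other values are illustrative):

```ini
# Each record is prefixed with a 2-byte OID, used to compute the record length
ebcdicOid=true
ebcdicOidLength=2
ebcdicOidForLength=true
# Replace code point 0x30 by 0x94 before any code point shift is applied
ebcdicByteReplacement=30:94
ebcdicCodePointShift=0
```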
The following setting illustrates the configuration for CSV data loading in Data Migrator:

```ini
[CSV2DB - Loading1]
csvQuote="
database=cnxOracle
input.folder=datas
keepDebugFiles=true
multithread=5
temp.folder=dataTemp
```

This configuration section defines:

- csvQuote: the double quote (") is used as the quote character
- database: the cnxOracle database configuration is used
- input.folder: data is read from the datas folder
- keepDebugFiles: temporary files are kept at the end of the migration
- multithread: 5 threads are used for parallel processing
- temp.folder: temporary files are stored in the dataTemp folder

The following setting illustrates the configuration for EBCDIC data loading in Data Migrator:
```ini
[EBCDIC2DB - Loading1]
database=cnxOracle
encoding=CP1145
ebcdicCodePointShift=0
ebcdicZonedMode=EBCDIC_MODIFIED
input.folder=datas
keepDebugFiles=true
multithread=10
temp.folder=dataTemp
```

This configuration section defines:

- database: the cnxOracle database configuration is used
- encoding: data is encoded in CP1145
- ebcdicCodePointShift: a code point shift of 0 is applied
- ebcdicZonedMode: the EBCDIC_MODIFIED strategy is used for zoned-decimal fields
- input.folder: data is read from the datas folder
- keepDebugFiles: temporary files are kept at the end of the migration
- multithread: 10 threads are used for parallel processing
- temp.folder: temporary files are stored in the dataTemp folder