The Migrator Tool accepts .ini input files through the -configurationIni option. Each .ini file can contain one or more steps. For readability, the steps can be split across two input files.

A step corresponds to a section in the .ini file. A section header consists of a type and a name connected with a hyphen (e.g. [type - name]). The section type must be one of the step types described below: Global, Database, ExecuteSql, CSV2DB, EBCDIC2DB, or ConvertQDDS.
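As a sketch, a minimal input file passed with -configurationIni might combine a connection step and a creation step as follows (the section names, host, and folders are illustrative, taken from the examples further down):

```ini
; illustrative Migrator Tool input file (-configurationIni)
[Database - cnxPostgreSQL1]
dataBaseHost=localhost
dataBasePort=5432
dataBaseName=mydb
dataBaseUser=myuser
dataBasePassword=mypwd

[ExecuteSql - Create tables]
database=cnxPostgreSQL1
input.folder=database/tables
```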
Notes
Since the global step is unique, its id is Global. The global step defines the following properties, which apply whenever they are not specified in a specific step.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| multithread | Number of threads to use for parallel steps | Integer | |
| encoding | Encoding to consider when reading files | String | UTF-8 |
| path.exe.oracle | Oracle's execution path | String | None |
| path.exe.mssql | MSSQL's execution path | String | None |
| sqlmodel.path | Path of the SQL Model JSON file; it can be an absolute or relative path. [DOCKER] To use this property, see path in Docker | String | database/SQLMODEL.json |
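Assuming the global step follows the same [type - name] section syntax (the exact header is an assumption, since its id is simply Global), a global step could look like:

```ini
; hypothetical global step; property values are illustrative
[Global - Global]
multithread=4
encoding=UTF-8
sqlmodel.path=database/SQLMODEL.json
```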
The configuration step contains the following properties to create a connection to the database:
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| databaseType | The target database type | POSTGRE, ORACLE, MSSQL | POSTGRE |
| dataBaseHost | DNS name or IP address of the database. [DOCKER] To use this property, see DNS name or IP address in Docker | String | localhost |
| dataBasePort | The listener port | Integer | 5432 |
| dataBaseAdditionalArgs | Additional connection properties appended to the URL. Example: adding "?ssl=true" gives jdbc:postgresql://localhost:5432/aeat?ssl=true | String | None |
| adminDataBaseName | [POSTGRESQL] The name of the database to create | String | postgres |
| adminDataBaseUser | [POSTGRESQL] The super user (owner) | String | postgres |
| adminDataBasePassword | [POSTGRESQL] The password of the super user (owner) | String | P@ssw0rd |
| dataBaseName | The name of the database to create | String | mydb |
| dataBaseServiceName | [ORACLE] The service name of the database to create | String | The dataBaseName property value |
| dataBaseUser | The user used to connect to the database | String | myuser |
| dataBasePassword | The password used to connect to the database | String | mypwd |
| server.host | The license server host for security. [DOCKER] To use this property, see DNS name or IP address in Docker | String | localhost |
| server.port | The license server port for security | Integer | 8888 |
| dataBaseDriverPath | Path of the database driver folder; it can be an absolute or relative path. Only one driver needs to be in this location. [DOCKER] To use this property, see path in Docker | String | See use default value. |
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| stepDatabaseKillDropCreate | [POSTGRESQL] Drop and create the database | true/false | true |
POSTGRE configuration step example
[Database - cnxPostgreSQL1]
adminDataBaseName=postgres
adminDataBaseUser=postgres
adminDataBasePassword=P@ssw0rd
dataBaseHost=localhost
dataBasePort=5432
dataBaseName=mydb
dataBaseUser=myuser
dataBasePassword=mypwd
dataBaseAdditionalArgs="?currentSchema=APL002"
stepDatabaseKillDropCreate=true
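For comparison, a hypothetical ORACLE connection step using the same properties might look like the following (the values are illustrative; 1521 is the usual Oracle listener port):

```ini
; hypothetical ORACLE connection step; values are illustrative
[Database - cnxOracle1]
databaseType=ORACLE
dataBaseHost=localhost
dataBasePort=1521
dataBaseName=mydb
dataBaseServiceName=mydb
dataBaseUser=myuser
dataBasePassword=mypwd
```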
The creation or deletion step contains the following properties used to execute the targeted SQL scripts.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Encoding to consider when reading files | String | Depends on the Global step |
| multithread | Number of threads to use for parallel steps | Integer | Depends on the Global step |
| input.folder | The folder containing the SQL files | String | None |
| plainExecution | Whether to execute the whole SQL file at once instead of splitting it into statements separated by ";" | true/false | false |
Deletion step example
[ExecuteSql - Drop tables]
input.folder=database/tables-drop
database=cnxPostgreSQL1
multithread=1
Creation step example
[ExecuteSql - Create tables]
input.folder=database/tables
database=cnxPostgreSQL1
multithread=1
The data loading step contains the following properties to consider when loading data to the database.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Data encoding | String | Depends on the Global step |
| multithread | Number of threads to use for parallel steps; one thread per table to load | Integer | Depends on the Global step |
| input.folder | The folder containing the data | String (absolute or relative folder path) | None |
| zipContentWildcard | Comma-separated list of wildcards used to filter the content of data archive files | Comma-separated list | * |
| oracle.disableSqlldr | [ORACLE] Whether to disable the SQL Loader step, so that only CSV files are generated | true/false | false |
| nbFileThreads | Number of files processed in parallel within a single table thread. For Oracle, this triggers the PARALLEL=true SQL Loader option, which is incompatible with constraints already existing on the target table | Integer | 1 |
| defaultFixedValues | See AS400 Specificity | | None |
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| csvSeparator | Separator between fields | Character | , (comma) |
| csvQuote | Quote character | Character | ' (single quote) |
| csvNull | Replacement string for an empty value | String | None |
| csvOneLiner | In databases like Oracle or Postgres, a column value may contain line-feed (newline) characters; in a CSV input file, such a value spreads a single table row over multiple lines. When false, data-migrator reads the CSV line by line and pushes the data to the database, which is suitable for data without line feeds and improves performance. When true, data-migrator uses csvQuote to detect and merge the spread lines before importing the data. Additionally, for NUMERIC columns the comma is replaced by a dot to fit the POSTGRES decimal format. This property only applies to Postgres and Oracle databases | true/false | false |
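To illustrate csvOneLiner, here is a hypothetical input file where one quoted value contains a line feed; with csvOneLiner=true, the two physical lines are merged back into a single table row (the columns and data are invented, assuming csvSeparator=, and csvQuote='):

```csv
1,'first line
continued on a second line',100
2,'single-line value',200
```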
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| ebcdicOid | Indicates the presence of an OID before each record | true/false | true |
| ebcdicOidLength | Indicates the length of the OID before each record | Integer | 2 |
| ebcdicOidForLength | Works with ebcdicOid=true and ebcdicOidLength!=0; takes the OID bytes to compute the length of the current record instead of the length deduced from SQLModel.json | true/false | false |
| ebcdicZonedMode | Indicates the strategy to apply when handling Zoned-Decimal fields | EBCDIC_STRICT, EBCDIC_MODIFIED, AS400 | EBCDIC_STRICT |
| ebcdicCodePointShift | Code point shift to apply. See CodeShiftPoint | Integer | 384 |
| ebcdicVarcharAsChar | Consider all VARCHAR columns as CHAR while migrating data | true/false | false |
| ebcdicByteReplacement | Specifies a byte replacement list as pairs of to-replace and replacing code points. Before applying the code point shift offset, if any, each byte whose code point matches a to-replace value in the considered encoding is converted to the replacing code point. Example: 30:94\|XX:YY; with CP037, this replaces (IRS) by [;] | String | None |
| ebcdicNullByteIndicatorPositionMode | Where to find the Null Byte Indicator (NBI) in the provided data files for nullable columns | ALWAYS_LEFT, RIGHT_FOR_DATES, ALWAYS_RIGHT, IGNORED | RIGHT_FOR_DATES |
| ebcdicEmptyVarcharMode | Behavior to apply when dealing with VARCHAR columns of size 0 | LOW_VALUE, NULL_VALUE, SINGLE_SPACE, FILL_SPACE | LOW_VALUE |
| ebcdicVarcharKeepPadding | Reads the full VARCHAR definition length, if provided, instead of the useful length given by the first 2 bytes, so that padding characters can be caught. Cannot be used with ebcdicFilesWithVarcharInVB set to true | true/false | false |
| ebcdicFilesWithVarcharInVB | For VARCHAR columns, the 2-byte length prefix is used to read the bytes instead of the column size definition from SQLModel.json | true/false | false |
| ignoreDeletedRecords | Set to true to pre-read each record and test whether its bytes are all blanks in the EBCDIC encoding. If blank, the tool ignores the record and continues to the next one. This option is useful if the REUSEDLT option on AS/400 is set to YES to reuse deleted records | true/false | false |
| temp.folder | Location of temporary files. If empty, Java handles it in the user folder | Absolute folder path | |
| keepDebugFiles | Keep temporary files at the end of the migration. Not available for POSTGRESQL, since a pipe mechanism is used; use debug.folder instead | true/false | false |
| date.format | [ORACLE] Date pattern to add for a Date column (DB2) definition | String | None |
| time.format | [ORACLE] Timestamp pattern to add for a Time column (DB2) definition | String | None |
| timestamp.format | [ORACLE] Timestamp pattern to add for a Timestamp column (DB2) definition | String | None |
| error.maxNb | [ORACLE] Maximum number of tolerated errors before canceling the data migration | Integer | 50 |
| spannedMode | [ORACLE] When this option is active, tables containing BLOBs/CLOBs are considered to have been unloaded with the SPANNED YES option. In this mode, VARCHAR, BLOB and CLOB fields have a dynamic length carried by the 2 (resp. 4) bytes preceding the data. Each BLOB is exported to a separate .dat file, referenced in the ctl command. CLOBs are exported in a single file, with the configured CSV delimiter, and referenced in the ctl command | true/false | false |
| debug.folder | [POSTGRESQL] See Solve Loading Trouble | String (absolute or relative folder path) | None |
| mssql_server_temp.folder | [DOCKER] Path of the temp.folder on the server where SQL Server is running | String | temp.folder |
CSV Data loading example
[CSV2DB - Loading1]
csvQuote='
csvSeparator=,
database=cnxOracle1
encoding=UTF-8
input.folder=datas
keepDebugFiles=true
multithread=5
temp.folder=dataTemp
EBCDIC Data loading example
[EBCDIC2DB - Loading1]
csvQuote='
csvSeparator=,
database=cnxOracle1
encoding=CP1145
ebcdicCodePointShift=0
ebcdicNullByteIndicatorPositionMode=ALWAYS_RIGHT
ebcdicOid=false
ebcdicOidLength=0
ebcdicVarcharAsChar=false
ebcdicZonedMode=EBCDIC_STRICT
input.folder=data
keepDebugFiles=true
multithread=10
temp.folder=dataTemp
zipContentWildcard=P???
The QDDS conversion step contains the following properties to consider when converting the QDDS data.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Encoding to consider when reading files | String | Depends on the Global step |
| input.folder | The data conversion input folder | String | None |
| output.folder | The data conversion output folder | String | None |
QDDS Conversion step example
[ConvertQDDS - Conversion1]
database=cnxPostgreSQL1
encoding=CP037
input.folder=inputs
output.folder=data/DataFiles