The Migrator Tool accepts `.ini` input files with the `-configurationIni` option. Each `.ini` file can contain one or more steps. For readability, the steps can be split across two input files.

A step corresponds to a section in the `.ini` file. A section header is composed of a type and a name connected with a hyphen (e.g. `[type - name]`). The section type must be one of the following: Global, Database, ExecuteSql, CSV2DB, EBCDIC2DB or ConvertQDDS, each detailed below.
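For illustration, here is a minimal sketch of a configuration file with two steps. The file name and property values are hypothetical; the section names follow the examples later on this page.

```ini
# migration.ini (hypothetical) -- passed to the tool with -configurationIni
# Each [type - name] section is one step.

[Database - cnxPostgreSQL1]
dataBaseHost=localhost
dataBasePort=5432
dataBaseName=mydb

[ExecuteSql - Create tables]
database=cnxPostgreSQL1
input.folder=database/tables
```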
Notes
Since the global step is unique, its id is Global. The global step defines the following properties, which are applied whenever they are not specified in the individual steps.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| multithread | Number of threads to use for parallel steps | Integer | |
| encoding | Encoding to consider when reading files | String | UTF-8 |
| path.exe.oracle | Oracle's execution path | String | None |
| path.exe.mssql | MSSQL's execution path | String | None |
| sqlmodel.path | Path of the SQL Model JSON file; it can be an absolute or relative path. [DOCKER] To use this property, see path in Docker | String | database/SQLMODEL.json |
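As an illustration, a global step might look like the following sketch, assuming the section header of the unique global step is written as `[Global]`, per its id above (the values shown are hypothetical):

```ini
[Global]
multithread=4
encoding=UTF-8
sqlmodel.path=database/SQLMODEL.json
```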
Notes
The configuration step contains the following properties to create a connection to the database:
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| databaseType | The target database type | POSTGRE, ORACLE, MSSQL | POSTGRE |
| dataBaseHost | DNS name or IP address of the database. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property. [DOCKER] To use this property, see DNS name or IP address in Docker | String | localhost |
| dataBasePort | The listener port. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property | Integer | 5432 |
| dataBaseAdditionalArgs | Adds connection properties to the URL. Example: adding "?ssl=true" gives jdbc:postgresql://localhost:5432/aeat?ssl=true | String | None |
| adminDataBaseName | [POSTGRESQL] The name of the database to create | String | postgres |
| adminDataBaseUser | [POSTGRESQL] The super user (owner) | String | None |
| adminDataBasePassword | [POSTGRESQL] The password of the super user (owner) | String | None |
| dataBaseName | The name of the database to create. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property | String | mydb |
| dataBaseServiceName | [ORACLE] The service name of the database to create. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property | String | The dataBaseName property value |
| dataBaseUser | The user used to connect to the database. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property | String | None |
| dataBasePassword | The password used to connect to the database. This property can be stored in AWS Secrets Manager; see the dataBaseAWSSecretName property | String | None |
| server.host | The license server host for security. [DOCKER] To use this property, see DNS name or IP address in Docker | String | localhost |
| server.port | The license server port for security | Integer | 8888 |
| dataBaseDriverPath | Path of the database driver folder; it can be an absolute or relative path. Only one driver must be in this location. [DOCKER] To use this property, see path in Docker | String | See use default value |
| dataBaseAWSSecretName | AWS Secrets Manager stores the database connection details (host, port, user, password, etc.), which avoids exposing sensitive information. Data Migrator uses this property to reference the secret name in AWS Secrets Manager, retrieve the database information and establish the DB connection. If this property is specified along with other properties (dataBaseHost, dataBasePort, dataBaseName, dataBaseUser, dataBasePassword, dataBaseServiceName), the secret's values replace the values mentioned directly in the ini or property file. Refer to the FAQ to learn how to add a customised field (e.g. Oracle Service Name) for a database and use it in Data Migrator | String | None |
| dataBaseAWSRegion | AWS region where the database information (host, port, user, password, etc.) is stored in the secret | String | eu-west-3 |
| dataBaseCustomConnectionUrl | Provides the connection string for the database. If the dataBaseUser and dataBasePassword properties are defined in the configuration files together with dataBaseCustomConnectionUrl, the username and password values from the configuration file are the ones taken into account. To use only dataBaseCustomConnectionUrl for login, remove the dataBaseUser and dataBasePassword properties from the configuration files. [POSTGRES] When stepDatabaseKillDropCreate=true, both the dataBaseName and dataBaseUser properties must be kept in the configuration files or AWS Secrets Manager. This property can be stored in AWS Secrets Manager under the key dbcustomconnectionurl; see the dataBaseAWSSecretName property | String | None |
| adminDataBaseCustomConnectionUrl | [POSTGRES] Provides the connection string for the database. This property can be stored in AWS Secrets Manager under the key admindbcustomconnectionurl; see the dataBaseAWSSecretName property | String | None |
| customSqlldrUserId | [ORACLE] This optional property provides the USERID for SQLLDR. It can be stored in AWS Secrets Manager under the key customSqlldrUserId; see the dataBaseAWSSecretName property. It becomes mandatory if the dataBaseUser and dataBasePassword properties are not defined in the configuration files and dataBaseCustomConnectionUrl is used to connect to the database | String | None |
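As a hedged sketch, a connection section can rely on AWS Secrets Manager instead of inline credentials; the secret name below is hypothetical, and the values it must contain are described in the dataBaseAWSSecretName row above.

```ini
[Database - cnxPostgreSQL1]
databaseType=POSTGRE
# Host, port, database name, user and password are retrieved from this secret
dataBaseAWSSecretName=myapp/postgres-credentials
dataBaseAWSRegion=eu-west-3
```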
The configuration step also accepts the following properties:

| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| stepDatabaseKillDropCreate | [POSTGRESQL] Drop and create the database | true/false | true |
| stepDatabaseCreateScriptPath | [POSTGRESQL] This optional property creates the database with a custom script. Its value can be a relative or absolute path to the SQL file. More information in this FAQ | String | None |
POSTGRE configuration step example
```ini
[Database - cnxPostgreSQL1]
adminDataBaseName=postgres
adminDataBaseUser=postgres
adminDataBasePassword=P@ssw0rd
dataBaseHost=localhost
dataBasePort=5432
dataBaseName=mydb
dataBaseUser=myuser
dataBasePassword=mypwd
dataBaseAdditionalArgs="?currentSchema=APL002"
stepDatabaseKillDropCreate=true
stepDatabaseCreateScriptPath=database/database/create-database.sql
```
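For an ORACLE target, a hedged sketch based on the properties above might look like this (all values, including the service name and the standard listener port 1521, are hypothetical examples):

```ini
[Database - cnxOracle1]
databaseType=ORACLE
dataBaseHost=localhost
dataBasePort=1521
dataBaseName=mydb
dataBaseServiceName=mydbservice
dataBaseUser=myuser
dataBasePassword=mypwd
```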
The creation and deletion steps contain the following properties, used when executing the targeted SQL scripts.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Encoding to consider when reading files | String | Depends on the Global step |
| multithread | Number of threads to use for parallel steps | Integer | Depends on the Global step |
| input.folder | The folder containing the SQL files | String | None |
| plainExecution | Whether to execute the SQL file as a whole instead of in parts separated by ";" | true/false | false |
Deletion step example
```ini
[ExecuteSql - Drop tables]
input.folder=database/tables-drop
database=cnxPostgreSQL1
multithread=1
```
Creation step example
```ini
[ExecuteSql - Create tables]
input.folder=database/tables
database=cnxPostgreSQL1
multithread=1
```
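When a script must be executed as a whole, for example because it defines a function or procedure whose body contains ";" characters, plainExecution can be enabled. A hedged sketch (the folder name is hypothetical):

```ini
[ExecuteSql - Create procedures]
input.folder=database/procedures
database=cnxPostgreSQL1
plainExecution=true
```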
The data loading step contains the following properties to consider when loading data to the database.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Data encoding | String | Depends on the Global step |
| multithread | Number of threads to use for parallel steps; one thread per table to load | Integer | Depends on the Global step |
| input.folder | The folder containing the data | String, absolute or relative folder path | None |
| mssqlInsertMode | [MSSQL] This optional property selects the mode used to load data. By default, Data Migrator uses the BULK INSERT statement. The INSERT INTO statement is used for POC projects, and the BCP utility is used to support more database types; to use this utility, see install BCP utility | BULK_INSERT, BCP_UTILITY, INSERT_INTO | BULK_INSERT |
| zipContentWildcard | Comma-separated list of wildcards used to filter the content of data archive files | Comma-separated list | * |
| oracle.disableSqlldr | [ORACLE] Whether to disable the SQL*Loader step, allowing CSV files to be generated only | true/false | false |
| oracle.ctlOptionClauseSqlldr | [ORACLE] Specifies command-line parameters in the SQL*Loader control file; refer to the documentation for more information. Provide each parameter and its value in key=value format, with multiple parameters separated by commas, e.g. key1=value1,key2=value2 | Comma-separated key=value pairs | None |
| nbFileThreads | Number of files to be processed in parallel within a single table thread. For Oracle, this triggers the PARALLEL=true SQL*Loader option, which is incompatible with constraints already existing on the table being migrated to | Integer | 1 |
| defaultFixedValues | See AS400 Specificity | | None |
| convertGraphicDataToFullWidth | Converts half-width characters to full-width for DB columns of type Graphic. With the default value false, the original data from the input file is migrated as-is. If true, all Graphic-type data is converted to full-width characters and any extra characters are trimmed | true/false | false |
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| csvSeparator | Separator between fields | | , (comma) |
| csvQuote | Quote character | | ' (single quote) |
| csvNull | Replacement character for an empty value | | None |
| csvOneLiner | In databases like Oracle or Postgres, a line-feed (newline) character can be part of a column's data. In a CSV input file, such line feeds spread a single table row across multiple lines. If set to false, Data Migrator reads the CSV line by line and pushes the data to the database; this suits data without line-feed characters and improves performance. If set to true, Data Migrator uses csvQuote to recognize continuation lines, merges them and imports the data. Additional feature: for NUMERIC columns, the comma is replaced by a dot to fit the POSTGRES decimal format. This property only applies to Postgres and Oracle databases | true/false | false |
| csvWithHeader | Defines whether the data files contain a header | true/false | false |
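For data files that include a header line and may contain line feeds inside column values, a hedged CSV2DB sketch combining these properties (the step name is hypothetical; the other values follow the examples on this page):

```ini
[CSV2DB - LoadingWithHeader]
database=cnxPostgreSQL1
input.folder=datas
csvSeparator=,
csvQuote='
csvWithHeader=true
# Merge multi-line rows using the quote character (Postgres/Oracle only)
csvOneLiner=true
```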
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| ebcdicOid | Indicates the presence of an OID before each record | true/false | true |
| ebcdicOidLength | Indicates the length of the OID before each record | Integer | 2 |
| ebcdicOidForLength | Works with ebcdicOid=true and ebcdicOidLength != 0; takes the OID bytes to compute the length of the current record instead of the length deduced from SQLModel.json | true/false | false |
| ebcdicZonedMode | Indicates the strategy to apply when handling zoned-decimal fields | EBCDIC_STRICT, EBCDIC_MODIFIED, AS400 | EBCDIC_STRICT |
| ebcdicCodePointShift | Code point shift to apply. See CodeShiftPoint | Integer | 384 |
| ebcdicVarcharAsChar | Considers all VARCHAR columns as CHAR while migrating data | true/false | false |
| ebcdicByteReplacement | Specifies a byte replacement list of to-replace and replacement code points. Before applying the code point shift offset (if any), each byte whose code point in the considered encoding matches a to-replace value is converted to the corresponding replacement code point. Example: 30:94\|XX:YY; with CP037, replaces (IRS) by [;] | | None |
| ebcdicNullByteIndicatorPositionMode | Where to find the null byte indicator (NBI) in the provided data files for nullable columns | ALWAYS_LEFT, RIGHT_FOR_DATES, ALWAYS_RIGHT, IGNORED | RIGHT_FOR_DATES |
| ebcdicEmptyVarcharMode | Behavior to apply when dealing with VARCHAR columns of size 0 | LOW_VALUE, NULL_VALUE, SINGLE_SPACE, FILL_SPACE | LOW_VALUE |
| ebcdicVarcharKeepPadding | Reads the full VARCHAR definition length instead of the useful length given by the first 2 bytes, so padding characters can be kept. Cannot be used with ebcdicFilesWithVarcharInVB set to true | true/false | false |
| ebcdicFilesWithVarcharInVB | For VARCHAR columns, uses the 2-byte length prefix to read the bytes instead of the column size definition from SQLModel.json | true/false | false |
| ignoreDeletedRecords | Set to true to pre-read each record and test whether its bytes are all blanks in the EBCDIC encoding. If blank, the tool ignores the record and continues to the next one. This option is useful if the REUSEDLT option on AS/400 is set to YES to reuse deleted records | true/false | false |
| ebcdicCsvDumpFolder | Absolute or relative path of the folder in which to store the CSV data extracted from the EBCDIC files. More information on the debug page | String | None |
| onyExtractCsv | Instructs Data Migrator to only extract CSV from the EBCDIC files. With the default value false, the data is migrated; if true, Data Migrator generates the CSV but does not migrate the data. This property depends on the ebcdicCsvDumpFolder property for the path; if that property is empty or missing, the CSV and cutting log file are automatically generated under the data/extract folder | true/false | false |
| temp.folder | Location of temporary files. If empty, Java handles it in the user folder | Absolute folder path | |
| keepDebugFiles | Keeps temporary files at the end of the migration. Not available for POSTGRESQL, since a pipe mechanism is used; use debug.folder instead | true/false | false |
| date.format | [ORACLE] Date pattern to add for a Date column (DB2) definition | String | None |
| time.format | [ORACLE] Timestamp pattern to add for a Time column (DB2) definition | String | None |
| timestamp.format | [ORACLE] Timestamp pattern to add for a Timestamp column (DB2) definition | String | None |
| error.maxNb | [ORACLE] Maximum number of tolerated errors before canceling the data migration | Integer | 50 |
| spannedMode | [ORACLE] When this option is active, tables containing BLOBs/CLOBs are considered to have been unloaded with the SPANNED YES option. In this mode, the VARCHAR, BLOB and CLOB fields have a dynamic payload carried by the 2 (resp. 4) bytes preceding the data. Each BLOB is exported to a separate .dat file, referenced in the ctl command. CLOBs are exported to a single file, with the configured CSV delimiter, and referenced in the ctl command | true/false | false |
| debug.folder | See Solve Loading Trouble (POSTGRESQL only) | String, absolute or relative folder path | None |
| mssql_server_temp.folder | [DOCKER] Path of the temp.folder on the server where SQL Server is running | String | temp.folder |
CSV Data loading example
```ini
[CSV2DB - Loading1]
csvQuote='
csvSeparator=,
database=cnxOracle1
encoding=UTF-8
input.folder=datas
keepDebugFiles=true
multithread=5
temp.folder=dataTemp
```
EBCDIC Data loading example
```ini
[EBCDIC2DB - Loading1]
csvQuote='
csvSeparator=,
database=cnxOracle1
encoding=CP1145
ebcdicCodePointShift=0
ebcdicNullByteIndicatorPositionMode=ALWAYS_RIGHT
ebcdicOid=false
ebcdicOidLength=0
ebcdicVarcharAsChar=false
ebcdicZonedMode=EBCDIC_STRICT
input.folder=data
keepDebugFiles=true
multithread=10
temp.folder=dataTemp
zipContentWildcard=P???
```
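For an MSSQL target, a hedged sketch selecting the load mode via mssqlInsertMode (the connection name cnxMssql1 is hypothetical; BCP_UTILITY requires the BCP utility to be installed, as noted in the table above):

```ini
[CSV2DB - LoadingMssql1]
database=cnxMssql1
input.folder=datas
mssqlInsertMode=BCP_UTILITY
multithread=5
```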
The QDDS conversion step contains the following properties to consider when converting the QDDS data.
| Property name | Description | Values | Default value |
| --- | --- | --- | --- |
| database | The connection name | String | None |
| enable | Whether the step is enabled | true/false | true |
| encoding | Encoding to consider when reading files | String | Depends on the Global step |
| input.folder | The data conversion input folder | String | None |
| output.folder | The data conversion output folder | String | None |
| ignoreLogicallyDeletedRecords | Ignores records (rows) that are logically marked as deleted | true/false | false |
| multithread | Number of threads to use for parallel steps | Integer | Depends on the Global step |
QDDS Conversion step example
```ini
[ConvertQDDS - Conversion1]
database=cnxPostgreSQL1
encoding=CP037
input.folder=inputs
output.folder=data/DataFiles
```
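A hedged variant of the same step that also skips logically deleted records and limits parallelism (the step name and thread count are hypothetical):

```ini
[ConvertQDDS - Conversion2]
database=cnxPostgreSQL1
encoding=CP037
input.folder=inputs
output.folder=data/DataFiles
ignoreLogicallyDeletedRecords=true
multithread=2
```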