Source Migration¶
Source Migration is used to migrate data from one data source to another. The source can be a TABLE, VIEW, JOIN, DB LINK, or UNION, while the destination can be a single logical unit or multiple logical units.
Common scenarios for this job type are:
- Migrating data from another database using a DB_LINK (see the sketch after this list).
- Acting as the second part of a File Migration Job.
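For the DB_LINK scenario, the link itself is an ordinary Oracle object. A minimal sketch, assuming a hypothetical link name, credentials, and TNS alias:

```sql
-- Hypothetical DB link to a legacy database. A source table can then be
-- referenced in the migration job as, e.g., customer_info@legacy_db.
CREATE DATABASE LINK legacy_db
   CONNECT TO mig_user IDENTIFIED BY mig_pwd
   USING 'LEGACYDB_TNS';
```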
Create a Source Migration Job¶
When creating a Source Migration Job:
- The Database tab specifies the source of the data.
- The Method List tab specifies the destination.
- The Source Mapping tab specifies which columns are to be migrated.
- The Rules tab defines the behaviour of job creation and execution.
Create Job Header¶
- Create a new Migration Job with a unique Job ID and Description. Use Migrate Source Data as the Procedure.
- Check the Source Name. Use this field to specify the source; it can be multiple tables (a JOIN) or a DB_LINK. Pay attention to the WHERE clause too, since it filters the result set (see the sketch after this list).
- Use Source Mapping to map the source columns to the destination. If the view name/source name or method list has been given, the rows are populated when the migration job is saved. It is always possible to add, delete, or update columns there afterwards.
- If the Source Mapping tab is blank or lacks a column you need to map, check whether the columns can be inserted by adding a new method in the method list. It is preferable to let IFS/Data Migration create the columns automatically in this window and configure them later. If automatic creation is not possible, create the columns manually.
- Set Default values for mandatory/required columns; otherwise the job will raise errors at execution time.
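To make the Source Name and WHERE clause concrete, a source that reads over a DB link corresponds to a query like the sketch below. The link legacy_db, the tables, and the filter are hypothetical:

```sql
-- Hypothetical source: a remote table read over the DB link "legacy_db",
-- joined to a local lookup table. The WHERE clause entered on the job
-- filters the migrated rows exactly like the predicate below.
SELECT c.customer_no,
       c.name,
       s.site_id
FROM   customer_info@legacy_db c
       JOIN site_lookup s ON s.customer_no = c.customer_no
WHERE  c.country_code = 'SE';   -- example filter
```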
Read more on Source Mapping >>
Method List¶
When a new method is inserted in the method list, the system automatically fills in the column mappings in the Source Mapping tab. Automatic creation of the column mappings is only performed if the rule CREATEDET is set to Active for the job, which is the default after installation.
The columns from the view can also be displayed using the command Method List Attributes. The columns here represent the ATTR string that is created when the method is executed. If you have entered a specific procedure or function in Method Name, the IN/OUT parameters of the method are displayed in the Source Mapping tab and under RMB Method List Attributes.
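As a rough illustration of what such an ATTR string looks like, the sketch below packs name/value pairs with the chr(31)/chr(30) separators conventionally used in IFS attribute strings; the column names are hypothetical and the helper is a simplified stand-in for the framework's own attribute handling:

```sql
DECLARE
   attr_ VARCHAR2(32000);
   -- chr(31) separates a name from its value; chr(30) ends each pair
   -- (assumed separators, following the usual IFS attribute layout).
   PROCEDURE add_to_attr_ (name_ IN VARCHAR2, value_ IN VARCHAR2) IS
   BEGIN
      attr_ := attr_ || name_ || chr(31) || value_ || chr(30);
   END;
BEGIN
   add_to_attr_('PART_NO', 'P-1001');          -- hypothetical columns
   add_to_attr_('DESCRIPTION', 'Test part');
   dbms_output.put_line(attr_);
END;
/
```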
For more details on Method List >>
Finally, set the desired Rules. The Source Migration Job is then ready.
Execute a Source Migration Job¶
A Source Migration Job can be started directly from the Execute Job window or scheduled to run in the background.
Before executing the job, check for available input parameters or default values in the Defaults tab. These allow you to reuse the same job to transfer data to, for example, different sites or different companies by running it twice or several times with different default parameters. On jobs with Direction 'Out', you can also enter a Default Where (for example, COMPANY = 'COMP1' to restrict the output to one company; the column name is hypothetical) in order to fetch the correct selection of data for the job.
Now you are ready to execute the job.
Read more on the Execute Job form.
Once the job has been executed, check the log for execution details. If any errors occurred, you will find rows with error messages under the Detail tab of the Execute Job window. The error messages come from the business logic, and together with the content of the attribute string they normally give enough information to identify the problem. Fix the problems and restart the job.
High Volume Data Migration¶
This approach is designed to reduce the time required for migrating large volumes of data into IFS Cloud and increase overall performance. It uses several Oracle techniques that work together to make the process faster and more efficient:
- Parallel Execution: Instead of processing the data one piece at a time, the system runs multiple tasks at the same time, allowing different parts of the data to be migrated simultaneously. This significantly reduces the overall time required to migrate the data.
- Bulk Collect: This technique allows the system to fetch multiple rows of data at once, rather than one row at a time. This reduces the number of times the system needs to access the database, which speeds up the process.
- FORALL: Once the data is collected in bulk, it is inserted into the target tables in large batches. This reduces the number of individual insert actions, making the process much faster (see the sketch after this list).
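To make the last two techniques concrete, here is a minimal self-contained PL/SQL sketch of the BULK COLLECT/FORALL pattern. The staging table src_stage and target table target_tab are hypothetical, and this illustrates the pattern rather than the actual IFS procedure:

```sql
DECLARE
   TYPE row_tab IS TABLE OF src_stage%ROWTYPE;   -- src_stage is hypothetical
   rows_ row_tab;
   CURSOR c IS SELECT * FROM src_stage;
BEGIN
   OPEN c;
   LOOP
      -- BULK COLLECT: fetch up to 5000 rows per round trip (cf. BULKLIMIT)
      FETCH c BULK COLLECT INTO rows_ LIMIT 5000;
      EXIT WHEN rows_.COUNT = 0;
      -- FORALL: hand the whole batch to the SQL engine in one call
      FORALL i IN 1 .. rows_.COUNT
         INSERT INTO target_tab VALUES rows_(i);  -- same row structure assumed
      COMMIT;
   END LOOP;
   CLOSE c;
END;
/
```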
The migration process can be done in two phases:
- Creating staging tables from data files.
- Migrating data from staging tables to IFS Cloud.
Creating Staging Tables from Data Files¶
- Create a File Migration Job to create staging tables from data files.
- Use the procedure CREATE_TABLE_FROM_FILE.
- Choose the OnServer option to load the data file. Create an Oracle directory on the database server and copy the file to the server. Refer to File Path - Oracle Directory for instructions on creating an Oracle Directory Object. Managed cloud customers can request an FTP location (mapped to the Oracle directory) for uploading the data files by raising a support ticket.
- Refer to Create Table From File for more details.
- Enable the EXTTABLE rule to create the external table (the job first creates an External Table and then inserts the data into the IC table); see the sketch after this list.
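For orientation, the underlying Oracle objects look roughly like the sketch below. The directory path, file name, and column list are hypothetical, and in practice the migration job generates this DDL for you:

```sql
-- Directory object pointing at the folder holding the data file
-- (requires the CREATE ANY DIRECTORY privilege; the path is hypothetical).
CREATE OR REPLACE DIRECTORY mig_dir AS '/u01/migration/files';

-- External table reading parts.csv straight from disk. With the EXTTABLE
-- rule enabled, the job creates something like this before loading the
-- IC staging table from it.
CREATE TABLE parts_ext (
   part_no     VARCHAR2(25),
   description VARCHAR2(200)
)
ORGANIZATION EXTERNAL (
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY mig_dir
   ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ','
   )
   LOCATION ('parts.csv')
);
```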
Migrating Data from Staging Tables to IFS Cloud¶
- Create a new Migration Job with the procedure MIGRATE_HIGH_VOLUME_DATA and the relevant view.
- In the DATABASE INFORMATION tab, add the source table name created by the file migration job.
- In the SOURCE MAPPING tab, make sure all the mandatory columns are mapped or assigned default values. Read more on Source Mapping >>
- In the RULES tab, set values for the following rules:
- PARALLELLEVEL - Specifies the number of parallel processes to run simultaneously.
- BULKLIMIT - Defines the number of records processed in each batch within the migration.
- CHUNKSIZE - Controls how many records are processed per chunk.
The ideal values for PARALLELLEVEL, BULKLIMIT, and CHUNKSIZE depend on several factors:
- The size of the data set to be migrated.
- The complexity and type of the target table.
- The available system resources during the migration.
There is no one-size-fits-all configuration for every table or data size. It is recommended to test different configurations to find the optimal settings for your specific migration.
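For intuition about what PARALLELLEVEL and CHUNKSIZE control, the sketch below uses Oracle's DBMS_PARALLEL_EXECUTE package to split a copy into rowid chunks processed by concurrent jobs. It is an analogy for how the rules interact, not the IFS implementation; the task name and tables are hypothetical:

```sql
BEGIN
   DBMS_PARALLEL_EXECUTE.CREATE_TASK('mig_demo');    -- hypothetical task name

   -- CHUNKSIZE analogue: split the staging table into 10000-row chunks
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
      task_name   => 'mig_demo',
      table_owner => USER,
      table_name  => 'SRC_STAGE',                    -- hypothetical table
      by_row      => TRUE,
      chunk_size  => 10000);

   -- PARALLELLEVEL analogue: 15 scheduler jobs process chunks concurrently
   DBMS_PARALLEL_EXECUTE.RUN_TASK(
      task_name      => 'mig_demo',
      sql_stmt       => 'INSERT INTO target_tab
                         SELECT * FROM src_stage
                         WHERE rowid BETWEEN :start_id AND :end_id',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 15);

   DBMS_PARALLEL_EXECUTE.DROP_TASK('mig_demo');
END;
/
```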
Example: the following configurations have been found to deliver optimal performance (shortest execution time) for these target tables.
Target View | No. of records | No. of columns | PARALLELLEVEL | BULKLIMIT | CHUNKSIZE |
---|---|---|---|---|---|
Part Catalog | 1 million | 37 | 15 | 20000 | 20000 |
Inventory Part | 1 million | 100 | 20 | 5000 | 10000 |
- Execute the Migration Job in the background and check the Last Info field to view the output. (Executing the job online is not recommended.)
Limitations compared to MIGRATE_SOURCE_DATA¶
This procedure is designed to minimize migration time and enhance overall performance. To achieve these improvements, it comes with certain limitations compared to the MIGRATE_SOURCE_DATA procedure.
- Record Insertion Only: The procedure only allows inserting records; updates are not supported.
- No Support for Adding Methods: A migration job created with the MIGRATE_HIGH_VOLUME_DATA procedure does not support method list functionality. It only migrates data to the view defined in the migration job header.
- No WHERE, ORDER BY, or GROUP BY Clauses: The procedure does not support WHERE, ORDER BY, or GROUP BY clauses. The source data is migrated as is.
- Restricted Rules: Only the MIGRATE_HIGH_VOLUME_DATA procedure-specific rules (BULKLIMIT, CHUNKSIZE, and PARALLELLEVEL) are supported. Other rules are not applicable.
- Limited Error Message Details: The procedure does not provide detailed (record-by-record) error messages.
- Limited Data Validation: Data validation in this procedure is restricted to certain areas, offering fewer validation checks than the MIGRATE_SOURCE_DATA procedure.
- Note: If the data file contains issues such as incorrect data types or data length violations, the staging tables will not be created.
References¶
The following are some useful references.
- Migration Job form.
- Execute Job form.
- Data to Multiple LUs.
- Connected Jobs.
- Use of DB Links & Joins.
- Conditions on Method Execution.
- Parse Variables Between Methods.
- Method List Procedures.
- Monitor Job.
- Update Tables Directly, Parse ATTR Strings.