Data Archiving Considerations¶
This page collects general considerations for putting Data Archiving into operation.
The following database objects are used:
- Three tables are used to store information about the data archive object. The first table holds general information about the archive object and how it should be archived. The second holds a record for each database table that is involved in the data archive object. The last holds the columns that correspond to the data archive object tables.
- For the execution of the data archive objects there are three tables involved. The first table holds information about the data archive order, such as when it should be executed. The second table holds information about which data archive objects should be executed within each data archive order. The last table holds the parameters used by the data archive objects.
- The data archive process uses the system service Data_Archive_Util_API to handle all of the common methods. The data archiving process writes to the data archive log after each execution.
- For each data archive object defined, a data archive package is generated to handle the transportation of the data. The package must be installed in the database, otherwise the data archive process will fail. If the data archive destination is another database, the destination database must contain one table for each table in the data archive object. This storage information is also generated.
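The metadata layout described above can be sketched as follows. This is a minimal illustration using SQLite from Python; every table and column name is an assumption for the sketch, not an actual IFS Cloud dictionary name.

```python
import sqlite3

# In-memory database standing in for the application database.
# All table and column names below are illustrative assumptions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Definition side: one header table, one table listing the involved
-- database tables, and one table listing their columns.
CREATE TABLE archive_object (
    object_id    TEXT PRIMARY KEY,
    description  TEXT,
    destination  TEXT          -- e.g. 'ORACLE_TABLE' or 'SQL_FILE'
);
CREATE TABLE archive_object_table (
    object_id    TEXT REFERENCES archive_object(object_id),
    table_name   TEXT,
    parent_table TEXT,         -- NULL for the root of the table hierarchy
    PRIMARY KEY (object_id, table_name)
);
CREATE TABLE archive_object_column (
    object_id    TEXT,
    table_name   TEXT,
    column_name  TEXT,
    PRIMARY KEY (object_id, table_name, column_name)
);

-- Execution side: orders, the objects within each order, and parameters.
CREATE TABLE archive_order (
    order_id   INTEGER PRIMARY KEY,
    execute_at TEXT             -- when the order should be executed
);
CREATE TABLE archive_order_object (
    order_id  INTEGER REFERENCES archive_order(order_id),
    object_id TEXT REFERENCES archive_object(object_id)
);
CREATE TABLE archive_order_parameter (
    order_id  INTEGER,
    object_id TEXT,
    name      TEXT,
    value     TEXT
);
""")
conn.commit()
```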
Data Archiving uses no business logic defined in IFS Cloud; it uses only pure SQL to handle its own logic.
It is very important to understand that data archiving can do things that are not possible from the client or server interfaces. For example, data archiving can remove Business object 1 data that Business object 2 data is connected to, and this removal can make it impossible to view or modify Business object 2. Therefore, only persons with a complete understanding of the data model and the business logic should design data archive objects.
Each instance of an archiving object is handled as one transaction. If something goes wrong during the archiving process, the whole instance of the archive object is rolled back. Therefore, the rollback segment must be large enough for the largest instance of an archive object.
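The one-transaction-per-instance behavior can be illustrated with a small sketch, where SQLite from Python stands in for the Oracle database and all table names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("CREATE TABLE destination (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO source VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])
conn.commit()

def archive_instance(conn, instance_id, fail=False):
    """Move one instance of the archive object inside a single transaction.

    If anything goes wrong mid-way, the whole instance is rolled back,
    so the data is never left half-archived.
    """
    try:
        conn.execute(
            "INSERT INTO destination SELECT * FROM source WHERE id = ?",
            (instance_id,))
        if fail:
            raise RuntimeError("simulated failure mid-archive")
        conn.execute("DELETE FROM source WHERE id = ?", (instance_id,))
        conn.commit()
    except Exception:
        conn.rollback()

archive_instance(conn, 1)             # succeeds: row moved to destination
archive_instance(conn, 2, fail=True)  # fails: rolled back, row stays in source

print(sorted(r[0] for r in conn.execute("SELECT id FROM source")))       # [2, 3]
print(sorted(r[0] for r in conn.execute("SELECT id FROM destination")))  # [1]
```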
When data archiving is activated it starts a background job, which is activated at a configured interval. The background job checks whether there are any data archiving orders to be executed. If it finds any archive orders to execute, it takes all instances of the archiving objects and archives them to the data archive destination.
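One activation of the background job can be sketched as follows. This is a hypothetical in-memory model of the scheduling described above; the class and field names are assumptions, and the real process runs as a database job:

```python
from dataclasses import dataclass, field

@dataclass
class ArchiveOrder:
    order_id: int
    due: bool                       # whether the order is due for execution
    instances: list = field(default_factory=list)

def run_background_job(orders, archive):
    """One activation of the background job: find the orders that are due
    and archive every instance of their archive objects."""
    executed = []
    for order in orders:
        if not order.due:
            continue
        for instance in order.instances:
            archive(instance)       # each instance is its own transaction
        order.due = False
        executed.append(order.order_id)
    return executed

archived = []
orders = [
    ArchiveOrder(1, due=True, instances=["inst-a", "inst-b"]),
    ArchiveOrder(2, due=False, instances=["inst-c"]),
]
print(run_background_job(orders, archived.append))  # [1]
print(archived)                                     # ['inst-a', 'inst-b']
```

At the next configured interval the job runs again; orders that were not due are simply skipped until their time comes.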
When upgrading, refer to the Applications release notes for a description of which data archive objects to upgrade.
If any of the tables involved in a data archive object is changed, for example by adding or removing columns, the corresponding tables in the destination database must be upgraded manually. There is no automatic upgrade of destination tables.
If the destination is a file, you must decide whether the existing files need to be upgraded. If you decide to do the upgrade, you must edit the files manually.
You must also recreate all data archive packages so that they support the changes.
- Data Archiving object tables should always contain a primary key.
- Supported data types are String, Date and Number. All other data types are unsupported, meaning that columns of unsupported data types are ignored by data archiving.
- It is not possible to include the same table twice in the data archive object's hierarchy of tables.
- Tree-organized data cannot be archived as one instance of a data archive object. You must archive each parent node in the tree hierarchy as a separate instance.
- When archiving to data archive destination SQL File there is a limit of 25 tables in a data archive object, due to naming conventions.
- When archiving to destination Oracle table using a database link, the database link must include a username and password, because the data archive process runs in Oracle's job queue, which only supports database links with username and password. Anonymous database links will cause the data archive process to fail with an error.
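The tree-data restriction above means that rows forming a tree must be split into one archive-object instance per root node, each archived on its own. A small illustrative sketch, where the row layout `(node_id, parent_id)` is an assumption for the example:

```python
def instances_for_tree(rows):
    """Group tree rows into one archive-object instance per root node.

    rows: iterable of (node_id, parent_id) pairs; parent_id is None
    for root nodes. Returns one list of node ids per root, i.e. one
    archive-object instance per parent node in the tree.
    """
    children = {}
    roots = []
    for node_id, parent_id in rows:
        if parent_id is None:
            roots.append(node_id)
        children.setdefault(parent_id, []).append(node_id)

    def subtree(node_id):
        collected = [node_id]
        for child in children.get(node_id, []):
            collected.extend(subtree(child))
        return collected

    return [subtree(root) for root in roots]

# Two root nodes give two separate instances, archived independently.
rows = [(1, None), (2, 1), (3, 1), (4, None), (5, 4)]
print(instances_for_tree(rows))  # [[1, 2, 3], [4, 5]]
```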
It is important to note that data archiving does not replace normal data backups, because data archiving can cause data inconsistency if not used correctly.
It is also important to back up the data archive destinations.