Data Archiving Considerations

This page collects general considerations for putting Data Archiving into operation.

Data archive objects are intended for transactional business objects, such as customer orders, financial transactions or invoices, that the system can still operate without. They are not intended for system data such as accounts, customers or products. Some business objects, such as product structures, organizational hierarchies or sales parts, fall between transactional business objects and system data. Data archive objects can be defined for these objects, but it is then essential that the objects have some kind of status attribute showing that they are no longer in use.

When defining a data archive object, it is important to keep the definition to one business object. For example, if you want to archive a customer and all of its corresponding customer orders, you must do this as two data archive objects: customer and customer order. Archiving both in the same data archive object is not suitable, because the definition becomes too complex.

Before archiving the data archive objects that you have defined, you must be sure that all areas of IFS Cloud have finished using the object. It is also of great importance that all statistics (updates and reports), transformations to data warehouse objects (IALs and cubes) and so on have been completed.

A data archive object is a definition of one or more database tables in a tree structure. The tree structure starts with a parent table called the master table, which drives the whole data archiving process. You define a where clause for the master table, and the data archiving process fetches each master record that fulfills the where clause. For each master record, the whole tree structure of tables is processed, and for each table you decide what should be done with the data.
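
As a simplified illustration, assuming a hypothetical data archive object whose master table is CUSTOMER_ORDER, the where clause could restrict archiving to closed orders that have been untouched for a number of years. Conceptually, the process then drives on a query like the one below (table and column names are illustrative, not taken from an actual IFS Cloud data model):

    -- Hypothetical where clause defined for the master table:
    --   objstate = 'Closed' AND date_entered < ADD_MONTHS(SYSDATE, -60)
    -- The archiving process conceptually fetches each qualifying master record:
    SELECT *
    FROM   customer_order
    WHERE  objstate = 'Closed'
    AND    date_entered < ADD_MONTHS(SYSDATE, -60);

Each row returned by such a query is one instance of the archive object; for each row, the child tables in the tree structure are processed in turn.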

Database Objects

The following database objects are used:

  • Three tables are used to store information about the data archive object. The first holds general information about the archive object and how it should be archived, the second holds a record for each database table that is involved in the data archive object, and the third holds the columns that correspond to the data archive object tables.
  • For the execution of the data archive objects, three tables are involved. The first holds information about the data archive order, when it should be executed and so on. The second holds information about which data archive objects should be executed within each data archive order. The last holds information about the parameters used on the data archive objects.
  • The data archive process uses the system service Data_Archive_SYS, which in turn uses Data_Archive_Util_API for all of the common methods. The data archiving process writes to the data archive log after each execution.
  • For each data archive object defined, a data archive package is generated to handle the transportation of the data. The package must be installed in the database; otherwise the data archive process will fail. If the data archive destination is another database, one table must exist in the destination database for each table in the data archive object (a minimal sketch of such a table follows this list). This storage information is also generated.
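
As a minimal sketch, assuming the hypothetical source table CUSTOMER_ORDER from the earlier example, the corresponding table in the destination database could look like the following. The names and columns are illustrative; the actual storage information is generated together with the data archive package.

    -- Run in the destination database: one table per table in the
    -- data archive object, mirroring the archived columns of the source.
    CREATE TABLE customer_order (
       order_no      VARCHAR2(12) NOT NULL,
       customer_no   VARCHAR2(20),
       date_entered  DATE,
       objstate      VARCHAR2(20),
       CONSTRAINT customer_order_pk PRIMARY KEY (order_no)
    );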

Business Logic

Data Archiving uses no business logic defined in IFS Cloud; it handles its own logic with pure SQL.
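
As an illustration of what pure SQL means here, an archive step that moves the rows of one master instance to a destination table over a database link essentially boils down to an insert followed by a delete. This is a conceptual sketch with hypothetical names; the generated packages are more elaborate.

    -- Copy the rows of one master instance to the destination,
    -- then remove them from the source. No IFS Cloud business
    -- logic (validations, state machines, events) is invoked.
    INSERT INTO customer_order@archive_db
       SELECT * FROM customer_order WHERE order_no = :order_no;

    DELETE FROM customer_order WHERE order_no = :order_no;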

It is very important to understand that data archiving can do things that are not possible through the client or server interfaces. For example, data archiving can remove Business object 1 data that is connected to Business object 2 data, and this removal can make it impossible to view or modify Business object 2. Therefore it is very important that only persons with a total understanding of the data model and the business logic design data archive objects.

Transactions

Each instance of an archive object is handled as one transaction. If something goes wrong during the archiving process, the whole instance of the archive object is rolled back. Therefore the rollback segment must be large enough for the largest instance of an archive object.
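
A minimal PL/SQL sketch of this per-instance transaction pattern (the procedure name is hypothetical and stands in for the generated archive logic):

    BEGIN
       -- One instance of the archive object = one transaction.
       archive_one_instance(order_no_ => :order_no);  -- hypothetical procedure
       COMMIT;                                        -- instance archived
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;   -- undo the whole instance on any error
          RAISE;
    END;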

Services

When data archiving is activated, it starts a background job that runs at a configured interval. The background job checks whether there are any data archive orders to be executed; if it finds any, it takes all instances of the archive objects and archives them to the data archive destination.
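
As a rough sketch of how such a recurring background job can be set up in Oracle (the real job is created by the data archive process itself; the job name, interval and polling procedure below are purely illustrative):

    BEGIN
       -- Illustrative only: poll for data archive orders every 30 minutes.
       DBMS_SCHEDULER.CREATE_JOB(
          job_name        => 'DATA_ARCHIVE_POLL',               -- hypothetical
          job_type        => 'PLSQL_BLOCK',
          job_action      => 'BEGIN poll_archive_orders; END;', -- hypothetical
          repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',
          enabled         => TRUE);
    END;
    /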

Upgrade

When upgrading, refer to the Applications release notes for a description of which data archive objects to upgrade.

If any of the tables involved in a data archive object is changed, for example by adding or removing columns, the corresponding tables in the destination database must be upgraded manually; there is no automated support for upgrading destination tables.
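
For example, assuming a column NOTE_TEXT was added to the hypothetical source table CUSTOMER_ORDER, the matching manual change in the destination database would be something like:

    -- Run manually in the destination database to keep the
    -- destination table in step with the changed source table.
    ALTER TABLE customer_order ADD (note_text VARCHAR2(2000));

    -- Or, if a column was removed from the source table:
    ALTER TABLE customer_order DROP COLUMN obsolete_flag;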

If the destination is a file, you must decide whether the existing files need to be upgraded. If you decide to upgrade them, you must edit the files manually.

You must also recreate all data archive packages so that they support the changes.

Restrictions

  • Tables in a data archive object should always have a primary key.
  • The supported data types are String, Date and Number. All other data types are unsupported, meaning that columns of unsupported data types are ignored by data archiving.
  • It is not possible to connect the same table twice in the data archive object hierarchy of tables.
  • Tree-organized data cannot be archived as one instance of a data archive object. Each parent node in the tree hierarchy must be archived as its own instance.
  • When archiving to the data archive destination SQL File, there is a limit of 25 tables per data archive object, due to naming conventions.
  • When archiving to the destination Oracle table over a database link, the database link must include a username and password, because the data archive process runs in Oracle's job queue, which only supports database links with a username and password. Anonymous database links will result in an error in the data archive process (see the sketch after this list).
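
A minimal sketch of a database link that satisfies this last restriction, with illustrative names and credentials:

    -- A fixed-user (non-anonymous) database link: the username and
    -- password are part of the link definition, as the job queue requires.
    CREATE DATABASE LINK archive_db
       CONNECT TO archive_user IDENTIFIED BY "secret_password"
       USING 'archive_tns_alias';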

Backup

It is important to note that data archiving does not replace normal backups of data, because data archiving can cause data inconsistencies if used incorrectly.

It is also important to back up the data archive destinations.