
Examples

Typical Structure of a Scheduling Model File

Scheduling schema entities should be defined within the schedulingdata section. Likewise, the modellingdata section is for Advanced Resource Planner entities, and systemdata is for any System Schema entities.

scheduling HumanResourcesARP
component SCH;
layer Core;
description "This provides and integration between ARP and IFS HR";
parameters {
}

schedulingdata {
}

modellingdata {
}

systemdata {
}

Once deployed, this will create a PL/SQL package named Human_Resources_A_R_P_SCH. This package has to be set against your Dataset for it to be called by the framework.

For newly created packages, it will be necessary to refresh the Dictionary Cache before the package becomes available.

If a model contains both scheduling and modelling data, it can be assumed that the scheduling input will need to select 'source data' from the resource planner. This involves sending the following XML with all updates:

<Source_Data>
    <source_data_type_id>RAM</source_data_type_id>
</Source_Data>
<Source_Data_Parameter>
    <source_data_type_id>RAM</source_data_type_id>
    <parameter_name>rota_id</parameter_name>
    <parameter_value>MASTER</parameter_value>
</Source_Data_Parameter>

In addition, the modelling schema should include the following XML by default when sending a LOAD:

<RAM_Rota>
    <id>MASTER</id>
    <ram_rota_type_id>MASTER</ram_rota_type_id>
</RAM_Rota>

Model Activity Entity With A SELECT Statement

Note that the aliases in the following SELECT should match the attributes of the entities in the schema guide.

schedulingdata {
    activity Activity {
        SELECT
            -- mandatory columns
            'IFS' id,
            -- optional columns
            'IFS' activity_class_id,
            'IFS' activity_type_id,
            1 priority,
            1 appointed,
            300 auto_duration,
            1 auto_duration_enabled,
            0.5 base_value,
            'IFS' calendar_id,
            'IFS' colour,
            'IFS' contract_id,
            0.5 cost_of_split,
            'IFS' customer,
            TO_DATE('2017/01/01 12:00:00', 'YYYY/MM/DD HH24:MI:SS') date_time_created,
            TO_DATE('2017/01/01 12:00:00', 'YYYY/MM/DD HH24:MI:SS') date_time_open,
            'IFS' description,
            0.5 do_on_location_incentive,
            300 duration,
            'IFS' end_location_id,
            'IFS' external_ref,
            1 interrupt,
            0.5 interrupt_multiplier,
            1 interrupt_priority,
            'IFS' location_id,
            300 min_split,
            'IFS' modelling_pattern_id,
            1 reactive,
            'IFS' separation_group_id,
            'IFS' service_level,
            1 split_allowed,
            1 split_minimum_priority,
            'IFS' time_zone
        FROM dual;
    }
}

Create An Entity With A SELECT Statement With Static Parameters

If it is necessary to pass fixed parameters to an SQL statement, it can be done as follows.

The parameters first have to be defined in the model.

parameters {
    Company Text(20);
    Site Text(20);
    ArpRota Text(20);
    LunchDeadline Number;
    LocationIdStart Number;
    LocationIdEnd Number;
}

Once the model is deployed, the parameters will be available against the dataset.

You might have to refresh if the parameters are not yet available.

It is then possible to use the parameters in SELECT statements using the parameters.ParameterName syntax.

ramskill RAMSkill {
    SELECT DISTINCT
        'S-' || contract id,
        'Site ' || contract description,
        'S-SITE' ram_skill_type_id,
        0 team_inheritable
    FROM employee_uiv
    WHERE company = parameters.Company
      AND contract IS NOT NULL;
}
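
Numeric parameters are referenced in exactly the same way. The following is a minimal sketch only, not a shipped example: my_activity_source and its columns are hypothetical names, used purely to illustrate the parameters.LocationIdStart and parameters.LocationIdEnd syntax against the Number parameters declared above.

activity Activity {
    SELECT
        activity_no id
    FROM my_activity_source
    WHERE location_id BETWEEN parameters.LocationIdStart
                          AND parameters.LocationIdEnd;
}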

Create An Entity With A SELECT Statement With Fetched Parameters

If the parameters need to be fetched via a function call, you still need to do the above. Then override the Initialize_Parameters function in the .plsql file. The values assigned there will override any static parameter values already set in Dataset Parameters.

@Override
PROCEDURE Initialize_Parameters(
    dataset_id_          IN      VARCHAR2,
    company_             IN OUT  VARCHAR2,
    site_                IN OUT  VARCHAR2,
    arp_rota_            IN OUT  VARCHAR2,
    lunch_deadline_      IN OUT  NUMBER,
    location_id_start_   IN OUT  NUMBER,
    location_id_end_     IN OUT  NUMBER)
IS
BEGIN
    --Add pre-processing code here
    super(dataset_id_, company_, site_, arp_rota_, lunch_deadline_, location_id_start_, location_id_end_);
    company_ := My_API.My_Function();
    --Add post-processing code here
END Initialize_Parameters;

Accessing Dataset Attributes as Parameters

The parameters collection will also include the public attributes of the Scheduling Optimization Dataset. They can be accessed using the syntax parameters.dataset.AttributeName.

skill SkillSite {
    SELECT
        id,
        description,
        skill_type_id
    FROM SCH_SKILL_SITE ss
    WHERE ss.dataset_id = parameters.dataset.DatasetId;
}

Specify The Unit Of Measure For Timespan Types

It is possible to specify the unit of measure for timespan fields. In the following example, duration is a timespan field. By setting timespanuom to Days as shown below, the duration field is passed as days. The unit can be set to Days, Hours, Minutes, or Seconds; the default is Minutes.

ramtimepattern RAMTimePattern {
    timespanuom Days;
    SELECT
        objkey,
        objkey id,
        base_time,
        duration
    FROM (
        SELECT
            -- mandatory columns
            Time_Pers_Diary_Result_API.Get_Objkey(company_id, emp_no, account_date, wage_grp, wage_code, org_code, transaction_type) objkey,
            account_date base_time,
            -- optional columns
            day_value duration
        FROM TIME_PERS_DIARY_RESULT_SUM
        WHERE company_id = parameters.Company
          AND wage_grp_db = 'F'
          AND account_date BETWEEN sysdate AND sysdate + 30
    );
}

Direct change detection when rows can be uniquely identified

When unique records can be identified by including objkey in the SELECT statement, direct change handling can be used.

ramshifttemplateset RAMShiftTemplSet {
    changedetection {
        trigger on Wage_Class_TAB {
            objkey OBJKEY;
        }
    }
    SELECT
        objkey,
        -- mandatory columns
        wage_class id,
        -- optional columns
        wage_class_name description
    FROM wage_class
    WHERE company_id = parameters.Company;
}

Indirect change detection when rows are indirectly monitored

In the following example, we monitor TEST_ACTIVITY_PRIORITY_TAB to identify changed rows in TEST_ACTIVITY_TAB. The mapping statement translates each logged change into the objkeys of the affected rows in the target table; the columns listed under arguments are available to the mapping SELECT as bind variables suffixed with an underscore (here ACTIVITY_CLASS_ID, referenced as activity_class_id_).

activity MyActivity {
    changedetection {
        trigger on TEST_ACTIVITY_TAB {
            objkey ACTIVITY_OBJKEY;
        }

        trigger on TEST_ACTIVITY_PRIORITY_TAB {
            columns {
                PRIORITY
            }
            targettable TEST_ACTIVITY_TAB;
            mapping SELECT objkey FROM test_activity WHERE activity_class_id = activity_class_id_;
            arguments {
                ACTIVITY_CLASS_ID
            }
        }
    }
    SELECT
        activity.activity_id id,
        activity.activity_class_id activity_class_id,
        activity_type.activity_type_id activity_type_id,
        Test_Activity_Priority_API.Get_Priority(activity_class_id) priority,
        activity.objkey activity_objkey,
        activity_type.objkey type_objkey
    FROM test_activity activity, test_activity_type activity_type
    WHERE activity.activity_id = activity_type.activity_id;
}

Full Diff When Above Mechanisms Are Not Appropriate

If it is not possible to identify the changed records straight away, then a full diff can be carried out. Note that this is costly in terms of performance.

ramskill RAMSkill3 {
    changedetection {
        trigger on Maint_Person_Employee_Tab {
            fulldiffonchange;
        }
    }
    SELECT DISTINCT
        'S-' || contract id,
        'Site ' || contract description,
        'S-SITE' ram_skill_type_id,
        0 team_inheritable
    FROM employee_uiv
    WHERE company = parameters.Company
      AND contract IS NOT NULL;
}

Custom change handling when nothing else works

If none of the above mechanisms work, it is possible to write entirely custom change handling. A new method, Shift_Category_Change, will be generated. This can be overridden in the .plsql file to implement a custom mechanism.

ramshiftcategory RAMShiftCategory {
    changedetection {
        trigger on SHIFT_INFO_TAB {
            objkey OBJKEY;
        }
    }
    customchangehandler Shift_Category_Change;

    SELECT
        -- mandatory columns
        objkey,
        shift id,
        -- optional columns
        shift_desc description,
        '' colour
    FROM shift_info
    WHERE company_id = parameters.Company;
}

@Override
FUNCTION Shift_Category_Change(
    last_change_id_ IN NUMBER,
    dataset_id_ IN VARCHAR2) RETURN Scheduling_SYS.RAM_Shift_Category_Arr PIPELINED
IS
    rec_ Scheduling_SYS.RAM_Shift_Category_Rec;
--
    CURSOR R_A_M_Shift_Category_Changes IS
        SELECT
            mapping.*
        FROM (SELECT
                  -- mandatory columns
                  objkey,
                  shift id,
                  -- optional columns
                  shift_desc description,
                  color_info colour
              FROM shift_info
              WHERE company_id = '10') mapping
        WHERE mapping.objkey IN (SELECT change_objkey
                                 FROM scheduling_change_log_tab
                                 WHERE table_name = 'SHIFT_INFO_TAB'
                                   AND id > last_change_id_);

BEGIN
    --Add pre-processing code here
    FOR rec1_ IN R_A_M_Shift_Category_Changes LOOP
        rec_.id := rec1_.id;
        rec_.description := rec1_.description;
        rec_.colour := rec1_.colour;
        PIPE ROW (rec_);
    END LOOP;
    RETURN;
END Shift_Category_Change;