Define Scheduling Dataset

Explanation

This activity is used to define the datasets that will be used within a company to transfer data to the Scheduling Engine. A dataset will contain information on how data should be scheduled, the type of scheduling that should take place and the time horizon for the scheduling.

There are two Scheduling Methods:

It is possible to connect a Modelling Dataset ID to the scheduling dataset. A modelling dataset allows modelling data configured in the Advanced Resource Planner (ARP) to be used together with the scheduling dataset. Modelling datasets are set up in Scheduling Workbench - Administration and the modelling data is set up in Scheduling Workbench - Planning - Data Management. The scheduling dataset must specify a Modelling Dataset ID to extract and utilize the modelling data.

Input Reference

There are 5 supported process types for Automated Scheduling:

The time horizon for which data should be transferred and scheduled is entered in days. For instance, if the number of Scheduling Work Days is set to 7, all work tasks, schedules, resources, breaks and HR bookings for the site(s) associated with the dataset will be sent to and scheduled by the scheduling engine. In addition to the Scheduling Work Days, Appointment Work Days can also be specified. The Appointment Work Days start after the Scheduling Work Days, and scheduling in this period is less accurate but faster and is designed to cover a longer period. For example, entering an Appointment Work Days duration of 10 days and a Scheduling Work Days duration of 5 days will make the Scheduling Engine schedule activities for 15 days from the current system date, with high accuracy in the first 5 days and lower accuracy for the rest.

A Calendar can be defined on the dataset and affects the Scheduling Work Days duration, which is then determined based on the working days defined in the calendar only. For example, with a calendar of 5 working days per week, 7 Scheduling Work Days and 14 Appointment Work Days give a total scheduling window of 27 calendar days ((7 + 2) + (14 + 4) = 27). This calculation may differ when the process type is Distributed, where a separate date calculation is used for the population period.
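The worked example above can be reproduced with a small calculation. The sketch below is illustrative only; it assumes a five-day working week and mirrors the approximation used in the example (the exact window also depends on how the period aligns with the calendar's non-working days):

```python
def window_days(work_days: int, working_days_per_week: int = 5) -> int:
    """Rough estimate: add the non-working days for each full working week covered."""
    non_working_per_week = 7 - working_days_per_week
    return work_days + (work_days // working_days_per_week) * non_working_per_week

scheduling_work_days = 7       # 7 + 2 weekend days  = 9 calendar days
appointment_work_days = 14     # 14 + 4 weekend days = 18 calendar days

total = window_days(scheduling_work_days) + window_days(appointment_work_days)
print(total)                   # 27 calendar days sent to the scheduling engine
```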

The Time Zone is used to control dates and times for all scheduling related data for the dataset.

The Configuration ID controls to which scheduling server the scheduling data should be sent. The configuration is defined in Scheduling Optimization Configuration.

Work Task and Resource

Default values can be entered for the work tasks/requests in the dataset. Of these, the Default Scheduling Activity Type, Maximum Base Value per Hour and Appointment Scheduling SLA Type must be entered when setting up the dataset. If a work task/request is missing a Primary Scheduling SLA Type, Secondary Scheduling SLA Type or an Activity Type, or if it is an appointment work task missing a Scheduling SLA Type, the default values from the dataset will be assigned automatically to the work task/request.

If a Location is missing its Do on Location Incentive value, the default value from the dataset will be applied to the work task.
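The defaulting described in the two paragraphs above behaves like a simple fallback. The sketch below is a simplified illustration; the field names, default values and function are hypothetical and not an IFS Cloud API:

```python
# Hypothetical dataset defaults used only for illustration.
DATASET_DEFAULTS = {
    "scheduling_activity_type": "INSTALL",
    "primary_scheduling_sla_type": "STANDARD",
    "do_on_location_incentive": 10,
}

def apply_dataset_defaults(work_task: dict) -> dict:
    """Fill in any scheduling value the work task is missing from the dataset defaults."""
    merged = dict(work_task)
    for key, default in DATASET_DEFAULTS.items():
        if merged.get(key) is None:
            merged[key] = default
    return merged

task = {"task_no": "WT-1001", "primary_scheduling_sla_type": "PRIORITY"}
print(apply_dataset_defaults(task))
# Keeps the task's own SLA type; the missing activity type and incentive
# are taken from the dataset defaults.
```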

It is possible to define the lowest status from which work tasks in the dataset will be transferred for scheduling. The default Schedule from Work Task Status for all datasets is Released and is set from the parameter Dataset Schedule from Work Task Status defined in Scheduling/Basic Data/Scheduling Configuration. The parameter value can be changed if required and the value can also be controlled per dataset.

Values need to be entered for HR activities, i.e., Lunch and Break. The values will be assigned to all lunches and breaks and will be used as input when scheduling the activities.

Plannable Task Types

The dataset controls which task types are sent for scheduling:

Broadcast Parameters

Minimum Plan Quality controls the initial broadcast after a LOAD: the broadcast is sent from PSO to Cloud when the plan reaches the defined Minimum Plan Quality or when the defined Maximum Wait Minutes has elapsed, whichever occurs first. If no Maximum Wait Minutes is defined, the plan is broadcast only when the quality target is met. The default value is 80%.

Maximum Frequency Minutes controls how often PSO sends a change broadcast when there are changes in the PSO plan: it defines the minimum amount of time that must elapse since the previous broadcast before a new update is sent. Setting the value to 1 minute means that PSO will wait at least 1 minute before sending the next broadcast. The default is 1 minute.
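Taken together, these two parameters amount to a pair of simple conditions. The following sketch is illustrative only, assuming the defaults mentioned above; the function names are not part of PSO:

```python
def should_send_initial_broadcast(plan_quality, minutes_since_load,
                                  minimum_plan_quality=0.80,
                                  maximum_wait_minutes=None):
    """Initial broadcast after a LOAD: quality target reached, or maximum wait exceeded."""
    if plan_quality >= minimum_plan_quality:
        return True
    return maximum_wait_minutes is not None and minutes_since_load >= maximum_wait_minutes

def should_send_change_broadcast(minutes_since_last_broadcast, plan_changed,
                                 maximum_frequency_minutes=1.0):
    """Change broadcast: only if the plan changed and enough time has elapsed."""
    return plan_changed and minutes_since_last_broadcast >= maximum_frequency_minutes

print(should_send_initial_broadcast(0.75, 12, maximum_wait_minutes=10))  # True: wait time exceeded
print(should_send_change_broadcast(0.5, plan_changed=True))              # False: still throttled
```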

Time filters for the main broadcast, excluding planned break details, can be defined using the Time Filter Start and Time Filter End offset parameters, which represent a number of days from the current date. Time Filter Start defaults to 0, which broadcasts data from the current day. Time Filter End defaults to the Scheduling Work Days set on the dataset, but can be adjusted. If Time Filter Start is set to 0 and Time Filter End is set to 3, the broadcast will include data for the current day and 3 days ahead. For example, if these values were set when the current date was 2023-11-01, the broadcast would include data between 2023-11-01 and 2023-11-04.
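The date arithmetic in the example above is straightforward. The sketch below is illustrative; the parameter names mirror the dataset fields, but the function itself is hypothetical:

```python
from datetime import date, timedelta

def broadcast_window(current_date, time_filter_start, time_filter_end):
    """Return the first and last dates included in the main broadcast."""
    return (current_date + timedelta(days=time_filter_start),
            current_date + timedelta(days=time_filter_end))

print(broadcast_window(date(2023, 11, 1), 0, 3))
# (datetime.date(2023, 11, 1), datetime.date(2023, 11, 4))
```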

The use of the Schedule Dispatch Service (DSP) can be controlled from the dataset. The Schedule Dispatch Service will automatically commit/uncommit (assign/unassign) work task activities once they have been allocated in PSO, and the broadcast for this allocation type is used to communicate with the DSP. For the DSP to operate, it requires a rule set. These rules are defined and stored in the Planning Workspace under General Data/Rules in the PSO Workbench.

Output Schedule Exceptions controls the output of schedule exception data. If enabled, schedule exceptions raised by the scheduling engine will be synchronized back to the Schedule Exception screen in Service / Scheduling / Schedule Exception and be visible in the Dispatch Console. It is important to bear in mind that this will affect the time it takes to process messages coming back from scheduling and should only be used if Schedule Exceptions are used outside of the Scheduling Workbench.

Output Plan Travel controls the output of planned travel records. If enabled, planned travel calculations from the scheduling engine will be synchronized back to the Dispatch Console. It is important to bear in mind that this will affect the time it takes to process messages coming back from scheduling and should only be used if visibility of planned travel time calculations is important in the Dispatch Console.

Output Plan Break controls the output of planned implicit breaks. If enabled, details about implicit breaks for the current day, planned by the scheduling engine, will be synchronized back to the Resource Availability and Dispatch Console. It is important to bear in mind that this will affect the time it takes to process messages coming back from scheduling and should only be used if it is important to see real-time break plans in the Dispatch Console. This only affects implicit breaks. Implicit breaks can be enabled by setting the parameter IMPLICIT_BREAKS to Yes on the dataset or by setting the same parameter to Yes on the Scheduling Configuration page.

Additional Settings

You can set up the dataset to be applicable for one or more sites within the company the dataset is associated with through the Site tab. The following data will be transferred for scheduling from the relevant site(s):

Note: In Planning and Scheduling Optimization (PSO), permission groups define which users are allowed to view a set of resources and activities (Work Task Resource Demands). These permission groups are called Object Groups. In the integration, Object Groups correspond to Sites in IFS Cloud. This means that in order for a PSO user to view information such as resources and activities belonging to a particular site, the user must be connected to the corresponding Object Group. The object groups for activities and resources are transferred automatically from IFS Cloud to PSO, but object groups for the PSO users must be set up and granted to the users manually from the Scheduling Workbench. The user must also be granted access to the dataset itself manually in the Scheduling Workbench. It is possible to turn off the transfer of object groups to scheduling by setting the parameter Object Group Filter to None in Service and Maintenance/Scheduling/Basic Data/Scheduling Configuration.

It is possible to override certain global scheduling parameters that have been defined on the Scheduling Configuration page. In the Parameters tab, values flagged as Dataset can be changed for a specific dataset, e.g., you can choose to only use implicit breaks for one dataset.

In the Attributes tab it is possible to define additional attributes that should be sent from IFS Cloud to PSO. Attributes are supported for Resources, Request Activities (i.e., work tasks associated with requests) and Work Task Activities (i.e., work tasks associated with work orders). The attribute name is the name that is referenced in PSO (for example in lists), while the attribute value lists the available columns of the selected entity in IFS Cloud. It is possible to send values not available by default by setting the Attribute Value to #CUSTOM_VALUE# and then specifying an expression or a method in the Custom Value Expression field. The Custom Value Expression field uses Oracle methods and expressions.

Child Dataset

If the Process Type is Distributed, the user can enter data in the Child Dataset tab. One Primary dataset with Process Type Dynamic/Appointment must be defined. It is possible to define one or many static datasets. Each of these static datasets can define an override schedule start time and/or schedule end time; if specified, these will override the values from the parent dataset's input reference. Usually, the dynamic schedule will have a schedule end time override to limit the dynamic scheduling window, but no start time override (since the start should follow the live updates).

Prerequisites

System Effects