This activity is used to define the datasets that will be used within a company to transfer data to the Scheduling Engine. A dataset will contain information on how data should be scheduled, the type of scheduling that should take place and the time horizon for the scheduling.
There are four supported process types:
• Static: Used for long-term, rough scheduling, such as resource capacity for a year.
• Dynamic: Used for short-term, detailed scheduling with a focus on optimizing the utilization of resources.
• Distributed: A special process type that allows large volumes of input data to be split between a smaller dynamic schedule and one or more static schedules. This can, for example, be used to combine dynamic scheduling with capacity planning.
• Appointment: Used when the Appointment Booking Engine is requested to generate appointment slots based on available resources.
The time horizon for which data should be transferred and scheduled is entered in days. For instance, if the number of Scheduling Work Days is set to 7, all work tasks, schedules, resources, breaks, and HR bookings for the site(s) associated with the dataset will be sent to and scheduled by the scheduling engine. In addition to the Scheduling Work Days, Appointment Work Days can also be specified. The Appointment Work Days start after the Scheduling Work Days end; scheduling in this period is less accurate but faster, and is designed to cover a longer period. For example, entering an Appointment Work Days duration of 10 days and a Scheduling Work Days duration of 5 days will make the Scheduling Engine schedule activities for 15 days from the current system date, with high accuracy in the first 5 days and lower accuracy for the remainder.
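The split of the horizon into a detailed period and a less accurate appointment period can be sketched as simple date arithmetic. This is an illustrative sketch only; the function and variable names are hypothetical and not part of IFS Cloud.

```python
from datetime import date, timedelta

def scheduling_window(start, scheduling_days, appointment_days):
    """Split the total horizon into a detailed period and a less
    accurate appointment period. Hypothetical helper for illustration."""
    detailed_end = start + timedelta(days=scheduling_days)
    appointment_end = detailed_end + timedelta(days=appointment_days)
    return detailed_end, appointment_end

# 5 Scheduling Work Days + 10 Appointment Work Days = 15-day horizon
detailed_end, appointment_end = scheduling_window(date(2024, 1, 1), 5, 10)
```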
A Calendar can be defined on the dataset and will affect the Scheduling Work Days duration, which will be determined based only on the working days defined in the calendar. For example, a calendar with 5 working days per week, combined with 7 Scheduling Work Days and 14 Appointment Work Days, results in a total scheduling window of 27 calendar days (7 + 2 = 9 and 14 + 4 = 18, where the added days are the non-working days falling within each period).
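The calendar adjustment above can be expressed as adding the non-working days of each full working week to the entered duration. The sketch below reproduces the 7 + 2 and 14 + 4 arithmetic from the example; it is illustrative only, and the engine's exact date rules may differ.

```python
def calendar_window(work_days, working_days_per_week=5):
    """Approximate calendar span for a number of working days by adding
    the non-working days of each full working week covered.
    Illustrative sketch; not the engine's actual date calculation."""
    days_off_per_week = 7 - working_days_per_week
    return work_days + (work_days // working_days_per_week) * days_off_per_week

# 7 Scheduling Work Days -> 9 calendar days; 14 Appointment Work Days -> 18
total = calendar_window(7) + calendar_window(14)  # 27 calendar days in total
```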
This calculation may differ when the process type Distributed is selected, since a separate date calculation is used for the population period.
You can set up the dataset to apply to one or more sites within the company the dataset is associated with. The following data will be transferred for scheduling from the relevant site(s):
Note: In Planning and Scheduling Optimization (PSO), permission groups define which users are allowed to view a set of resources and activities (Work Task Resource Demands). These permission groups are called Object Groups. In the integration, Object Groups are the equivalent of Sites in IFS Cloud. This means that in order for a PSO user to be able to view information such as resources and activities belonging to a particular site, the user must be connected to the corresponding Object Group. The object groups for activities and resources are transferred automatically from IFS Cloud to PSO, but object groups for PSO users must be set up and granted to the users manually from the Scheduling Workbench. The user must also be granted the dataset itself manually in the Scheduling Workbench. It is possible to turn off the transfer of object groups to scheduling by setting the parameter Object Group Filter to None in Service and Maintenance/Scheduling/Basic Data/Scheduling Configuration.
Default values can be entered for the work tasks in the dataset. Of these, the Default Activity Type, Maximum Base Value per Hour and Appointment Scheduling Type must be entered when setting up the dataset. If a work task is missing a primary scheduling type, secondary scheduling type or an activity type, or if it is an appointment work task missing a scheduling type, the default values from the dataset will be assigned automatically to the work task.
If a Location is missing its Do on Location Incentive value, the default value from the dataset will be applied to the work task.
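The fallback rule described above can be sketched as filling in any missing work-task value from the dataset defaults. The dicts and field names below are illustrative stand-ins, not the actual IFS Cloud entities.

```python
def apply_dataset_defaults(task, defaults):
    """Fill missing work-task values from the dataset defaults.
    Plain dicts and illustrative field names; sketch of the rule only."""
    for key in ("primary_scheduling_type", "secondary_scheduling_type",
                "activity_type", "do_on_location_incentive"):
        if task.get(key) is None:
            task[key] = defaults.get(key)
    return task

task = {"activity_type": None, "primary_scheduling_type": "DYNAMIC"}
apply_dataset_defaults(task, {"activity_type": "INSTALL",
                              "do_on_location_incentive": 0})
```

Values already set on the work task are kept; only the missing ones fall back to the dataset.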
It is possible to define the lowest status from which work tasks in the dataset will be transferred for scheduling. The default Schedule from Work Task Status for all datasets is Released and is set from the parameter Dataset Schedule from Work Task Status defined in Scheduling/Basic Data/Scheduling Configuration. The parameter value can be changed if required and the value can also be controlled per dataset.
Values need to be entered for HR activities, i.e., Lunch and Break. The values will be assigned to all lunches and breaks and will be used as input when scheduling the activities.
It is possible to connect a Modelling Dataset ID to the scheduling dataset. A modelling dataset allows modelling data configured in the Advanced Resource Planner (ARP) to be used with the scheduling dataset. Modelling datasets are set up in Scheduling Workbench - Administration and the modelling data is set up in Scheduling Workbench - Planning - Data Management. The scheduling dataset must specify a Modelling Dataset ID to extract and utilize the modelling data.
The use of the Schedule Dispatch Service (DSP) can be controlled from the dataset. The Schedule Dispatch Service will automatically commit/uncommit (assign/unassign) work task activities once they have been allocated in PSO, and the broadcast for this allocation type will be used to communicate with the DSP. The DSP requires a rule set to operate; these rules are defined and stored in the Planning Workspace under General Data/Rules in the PSO Workbench.
Plannable Task Types
The dataset controls which task types are sent for scheduling:
Child Dataset
If the process type Distributed is selected, the user can enter data in the Child Dataset tab. One primary dataset with process type Dynamic/Appointment must be defined. It is possible to define one or many static datasets. Each of these static datasets can define an override schedule start time and/or schedule end time. If specified, these override the values from the parent dataset input reference. Usually, the dynamic schedule will have a schedule end time override to limit the dynamic scheduling window, but no start time override (since the start should follow the live updates).
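The override resolution for child datasets can be sketched as: take the child's value when one is given, otherwise fall back to the parent dataset. The dicts and field names below are illustrative, not the actual IFS Cloud columns.

```python
def effective_window(parent, child):
    """Resolve a child dataset's schedule window: use the child's
    override when present, otherwise the parent dataset's value.
    Illustrative field names only."""
    return (child.get("schedule_start") or parent["schedule_start"],
            child.get("schedule_end") or parent["schedule_end"])

parent = {"schedule_start": "2024-01-01", "schedule_end": "2024-01-28"}
dynamic_child = {"schedule_end": "2024-01-06"}  # end override, no start override
start, end = effective_window(parent, dynamic_child)
```

Here the dynamic child limits its scheduling window with an end-time override while inheriting the live start time from the parent, matching the typical setup described above.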