Job manager

The aim of this HOWTO document is to describe how best to use the Job Manager for authoring well-designed jobs – batches of actions that are executed as a single unit of work, either as ad-hoc jobs or on a scheduled basis. Jobs are an integral part of nearly all real-time or operational applications of the MIKE Workbench.

Although many of the other HOWTO documents aim at programming with or for the Workbench, this HOWTO focuses on applying existing functionality and is therefore closer to end-user documentation. That said, authoring jobs is in many ways similar to programming – perhaps at a more abstract level, through dragging and dropping functionality components from a palette onto a workflow-oriented structure – but it still amounts to designing a flow of actions and implementing them by writing jobs in a language created specifically for jobs.

This HOWTO document consists of three sections. The first section defines the anatomy of a job: its structure and the most important types of elements. The second part deals with working with jobs and answers a series of frequently occurring questions. Finally, the document ends with a section on how the user interface supports the user in working with jobs.

Job environment

Jobs within the MIKE Workbench are used as a means to batch commands together and have them executed, or scheduled for execution, as a single unit of work. A scenario where jobs are very useful is the automatic, scheduled execution of model simulations. It is often not enough to just execute the actual simulation; it may also require pre-processing of input time series for the simulation and post-processing to extract information from the simulation results, check key indicators and possibly submit alerts and notifications.

The main part of this HOWTO describes jobs from an XML perspective, XML being the source code format for jobs. This does not imply that the user will have to write the XML manually; all job authoring as well as job execution happens from a graphical user interface. The last section in this document describes how to use the user interface, but the job concepts are better explained from a source code oriented perspective.

A job is an XML structure that contains a list of tasks that are executed in a pre-defined order. The tasks typically execute well-defined and atomic actions, for example importing a time series to the Workbench database, running a scenario or performing a GIS zonal statistics analysis. The user defines the execution order of the tasks through targets, where a target is simply a list of tasks. Execution-wise, targets can depend on each other and thus form a semi-hierarchical structure.

Listing 1 below shows a simple job that imports a time series from a DFS0 file, finds the maximum value and finally writes that value to a disk file.

Listing 1 Sample job source
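
A minimal sketch of what such a job may look like, assuming standard MSBuild syntax; the file paths and the MaximumValue output parameter name are illustrative:

    <Project DefaultTargets="Main">
      <PropertyGroup>
        <!-- Job property used by the tasks below -->
        <DestinationGroup>/Timeseries/Imported</DestinationGroup>
      </PropertyGroup>
      <Target Name="Main">
        <!-- Import a time series from a DFS0 file into the database -->
        <ImportTimeseries SourceFile="c:\Data\Example.dfs0"
                          DestinationGroup="$(DestinationGroup)" />
        <!-- Find the maximum value of the imported time series -->
        <MaximumTimeseriesValue Timeseries="$(DestinationGroup)/Example">
          <Output TaskParameter="MaximumValue" PropertyName="MaxValue" />
        </MaximumTimeseriesValue>
        <!-- Convert a task output parameter to a job property -->
        <CalculateTimeseriesQuantile Timeseries="$(DestinationGroup)/Example">
          <Output TaskParameter="Quantile" PropertyName="MyProperty" />
        </CalculateTimeseriesQuantile>
        <!-- Write the result to a disk file -->
        <WriteLinesToFile File="c:\Temp\result.txt" Lines="$(MaxValue)" />
      </Target>
    </Project>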

The main points of interest to note from the listing are:

  • Definition of job properties (point 1), which are key/value pairs that can be used throughout the job

  • Definition of a job target (point 2), which is a container for related tasks

  • Definition of job tasks (points 3, 4 and 5), which are the units of work.

These three element types, which constitute the vast majority of any job's content, are described in more detail in the sections that follow, together with task input and output.

Figure 1 below displays how the same job appears in the MIKE Workbench user interface. Notice how the main window shows the target and tasks in a tree-like structure, while the attributes are displayed as standard MIKE Workbench properties.

Figure 1 Sample job seen from the user interface

What is a target?

A target is nothing more than a grouping mechanism for tasks and is defined by the following attributes:

  • A name

  • An optional semicolon-separated list of other targets that need to be successfully executed before execution of the target can commence

  • An optional condition that determines under which circumstances the target shall be executed.

The last two types of attributes are discussed in more detail in the Working with Jobs section later in this document.

The formal target XML format is shown in Listing 2.

Listing 2 Target XML format
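
A minimal sketch of the format, assuming standard MSBuild syntax:

    <Target Name="TargetName"
            DependsOnTargets="OtherTarget1;OtherTarget2"
            Condition="optional-condition">
      <!-- tasks -->
    </Target>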

The Name attribute is mandatory while the DependsOnTargets attribute is optional. See more about the latter in the section How to have targets depending on each other.

The user defines the above attributes when adding the target to the job through the Job Manager’s user interface.

What is a task?

A task can be compared to a subroutine in a programming language – it has some input parameters, performs a well-defined action, can have output parameters and returns a status value indicating whether the action has been successfully executed.

In itself a task is a piece of binary code, implemented as a class in C# or another Microsoft .NET programming language. The Workbench comes with a long list of tasks for managing time series, scenarios, spreadsheets, disk files etc., and projects implementing solutions can easily add their own custom-made tasks to this list. Implementing a task is typically a very simple programming exercise. It is even possible to create a Workbench embedded script and have it executed as part of a job. In this way, practically all the functionality provided by the Workbench is available when setting up jobs.

The following should be noted from the example in Listing 1:

  • How the ImportTimeseries task at point 1 specifies its input as task properties. In this case the SourceFile and DestinationGroup properties define the time series that will be imported and the path to where it shall be inserted in the database.

  • How the MaximumTimeseriesValue task has one input property – the path to the newly imported time series – and one output property (or output element) which will hold the maximum value of the time series after the task has executed.

  • How the WriteLinesToFile task writes the output value from the previously executed task to a disk file.

The formal XML syntax is shown in Listing 3 below and exemplified in Listing 1.

Listing 3 Task XML format
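
A minimal sketch, assuming standard MSBuild syntax; TaskName and its parameter names are placeholders:

    <TaskName InputParameter1="value1"
              InputParameter2="value2"
              Condition="optional-condition"
              ContinueOnError="true">
      <Output TaskParameter="OutputParameterName" PropertyName="SomeJobProperty" />
    </TaskName>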

Note that a task can:

  • Have zero or more task input parameter attributes, some being mandatory and some optional.

  • Have zero or more output elements which can be transformed into job properties, as shown in the example in Listing 1.

  • Have an optional Condition processing instruction that determines under which circumstances the task will be executed. See the section How to conditionally execute targets for detailed information.

  • Have an optional ContinueOnError processing instruction that determines whether the job will halt or continue should the task fail. See the section How to continue with a job despite failing tasks for detailed instructions.

The last three types of attributes are discussed in more detail in the Working with Jobs section later in this document.

What is a property?

A property defines a value associated with a name; simply put, it is a key/value pair. The example shown in Listing 1 defines a property at point 6a and uses it at point 6b.

Properties are typically defined through property groups, as shown at point 6a in Listing 1, but can also be defined from task output parameters, as shown at point 8 in the same listing. Here the CalculateTimeseriesQuantile task has an output parameter named Quantile that is transformed into a property – MyProperty – through the Output element.

Working with Jobs

This section is aimed at answering some questions that often arise with regard to designing and writing jobs. The explanations take their starting point in the job XML which, as already stated, does not imply that the user will work directly with the XML or needs to understand XML. The user interface completely shields the XML from the user, but in order to design well-structured and well-behaved jobs, a good understanding of the basic concepts, as best explained through the XML, is required.

How to specify the targets to execute

Specification of the target(s) within the job that shall be executed can be done in two ways:

  • From the job execution dialog. (See the section Execute a job further down in the document.)

  • From the Default attribute in the job’s Project element. (See Listing 4).

Listing 4 Specification of default targets
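
A minimal sketch, assuming standard MSBuild syntax:

    <Project DefaultTargets="Target1,Target2">
      <!-- targets -->
    </Project>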

In both cases the target specification can include a comma-separated list and the targets will be executed in order from left to right.

How to sequentially order tasks

Tasks are always executed sequentially within a target on a first defined, first executed basis. It is possible to skip some of the tasks in the sequence and to redefine the order, though not during execution.

How to have targets depending on each other

Targets can be defined to depend on each other, i.e. a target T1 can depend on a target T2, in the sense that the job will automatically ensure that T2 is executed before T1.

Listing 5 below shows an example of this.

Listing 5 Target dependencies
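
A minimal sketch matching the execution order described below, assuming standard MSBuild syntax:

    <Target Name="T3">
      <!-- tasks -->
    </Target>
    <Target Name="T2" DependsOnTargets="T3">
      <!-- tasks -->
    </Target>
    <Target Name="T1" DependsOnTargets="T2">
      <!-- tasks -->
    </Target>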

Specifying target T1 as the target to execute will result in first executing target T3, then target T2 and finally target T1.

How to conditionally execute targets

A task may be included in or omitted from execution depending on a specific condition. The example in Listing 6 demonstrates such a situation: Task1 has an output parameter that governs the execution of the following two tasks in the target. Task2 shall only be executed in case the output parameter is larger than a certain threshold value and Task3 only if the parameter is smaller than or equal to the threshold value.

Listing 6 Task Condition processing instruction
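
A minimal sketch, assuming standard MSBuild syntax; Task1, Task2, Task3 and the Result output parameter name are placeholders:

    <PropertyGroup>
      <Threshold>10</Threshold>                                  <!-- point 1 -->
    </PropertyGroup>
    <Target Name="Target1">
      <Task1>
        <Output TaskParameter="Result" PropertyName="Value" />   <!-- point 2 -->
      </Task1>
      <Task2 Condition="$(Value) &gt; $(Threshold)" />           <!-- point 3 -->
      <Task3 Condition="$(Value) &lt;= $(Threshold)" />          <!-- point 4 -->
    </Target>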

Note from the listing how the threshold value is defined at point 1 and used at points 3 and 4. The Task1 output parameter is converted into a property at point 2. Also note the comparison operators used in the condition strings at points 3 and 4 – '&gt;' and '&lt;=' are XML-escaped versions of '>' and '<='. These escaped versions are necessary to distinguish the operators from XML markup. Users simply use the normal <, >, <= and >= versions when working from the user interface. The supported set of comparison operators is shown in Table 1 below.

  • 'stringA' == 'stringB' – Evaluates to true if stringA equals stringB. For example: Condition="'$(CONFIG)'=='DEBUG'"

  • 'stringA' != 'stringB' – Evaluates to true if stringA is not equal to stringB. For example: Condition="'$(CONFIG)'!='DEBUG'"

  • <, >, <=, >= – Evaluates the numeric values of the operands. Returns true if the relational evaluation is true. Operands must evaluate to a decimal or hexadecimal number; hexadecimal numbers must begin with "0x".

  • Exists('stringA') – Evaluates to true if a file or folder with the name stringA exists. For example: Condition="!Exists('$(builtdir)')"

  • HasTrailingSlash('stringA') – Evaluates to true if the specified string contains either a trailing backward slash (\) or forward slash (/) character. For example: Condition="!HasTrailingSlash('$(OutputPath)')"

  • ! – Evaluates to true if the operand evaluates to false.

  • And – Evaluates to true if both operands evaluate to true.

  • Or – Evaluates to true if at least one of the operands evaluates to true.

  • () – Grouping mechanism that evaluates to true if the expressions contained inside evaluate to true.

For the string operators, single quotes are not required for simple alphanumeric strings or boolean values, but they are required for empty values.

Table 1 Condition operators

The above solution works fine in cases where only a single task or a few tasks within a target need to be conditionally executed. Where a larger number of tasks need to be executed conditionally, it is often more convenient to factor the conditional tasks out into a separate target and have that target executed through a special CallTarget task. This is demonstrated in Listing 7.

Listing 7 Conditional execution using CallTarget
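
A minimal sketch, assuming standard MSBuild syntax; SomeTask and NextTask are placeholders:

    <Target Name="Target1">
      <SomeTask />
      <!-- Branch to Target2 only when the condition holds -->
      <CallTarget Targets="Target2" Condition="some-condition" />
      <!-- Execution continues here after Target2 has finished -->
      <NextTask />
    </Target>
    <Target Name="Target2">
      <!-- conditionally executed tasks -->
    </Target>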

Note how Target1 branches out to Target2 in case “some-condition” evaluates to true. After Target2 has finished, execution continues in Target1 with the task following the CallTarget task.

How to handle task output

Listing 1 includes a task – the CalculateTimeseriesQuantile task – that has an output parameter. In order to make use of such an output parameter, it needs to be converted into a job property. This conversion happens through the Output element, which is also shown in the same listing.

The formal XML syntax for the Output element is shown below in Listing 8.

Listing 8 Output element
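
A minimal sketch of the element, assuming standard MSBuild syntax:

    <Output TaskParameter="OutputParameterName" PropertyName="JobPropertyName" />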

The TaskParameter attribute is used for establishing a reference to the task's output parameter, and the PropertyName attribute defines the name of the job property that will receive the value of the output parameter.

A task can have multiple output parameters. An example of this is shown in Listing 9 below, where the GetSimulationPeriod task has two output parameters: the start and end date of a model simulation.

Listing 9 Task with two output parameters
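
A minimal sketch, assuming standard MSBuild syntax; the input parameters of GetSimulationPeriod are omitted:

    <Target Name="Target1">
      <GetSimulationPeriod>
        <Output TaskParameter="StartDate" PropertyName="Begin" />
        <Output TaskParameter="EndDate" PropertyName="End" />
      </GetSimulationPeriod>
      <Message Text="Simulation runs from $(Begin) to $(End)" />
    </Target>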

Note in the example how the GetSimulationPeriod task has two Output child elements, where the first maps the StartDate output parameter to a Begin property and the second maps the EndDate output parameter to an End property. The two generated properties are then used in the Message[^1] task.

[^1]: The Message task, which prints its Text attribute on the screen, is a useful task for debugging purposes when running the job from a console, but it cannot be used when executing from within the user interface, as the output is not captured for display.

How to pass information from one target to another

Passing information from one task to another is simple – this was demonstrated in Listing 1. Unfortunately, it is not that simple to pass information from one target to another in situations involving CallTarget. Listing 10 below, which would seem an intuitive way of handling this, simply does not work.

Listing 10 Erroneous way to pass information between targets with CallTarget
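
A minimal sketch of the non-working pattern, assuming standard MSBuild syntax:

    <Target Name="Target1">
      <CreateProperty Value="SomeValue">
        <Output TaskParameter="Value" PropertyName="Prop1" />
      </CreateProperty>
      <CallTarget Targets="Target2" />
    </Target>
    <Target Name="Target2">
      <!-- Prop1 is empty here: it only becomes global when Target1 has finished -->
      <Message Text="$(Prop1)" />
    </Target>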

The reason for this is that the properties generated within a target are not made global (i.e. visible to all targets) until the target has finished. This expansion rule implies that when chaining targets through the DependsOnTargets target attribute, or through providing a list of targets for execution, most things work as expected. For example, the job in Listing 11 works as expected when executing Target2: Target2 depends on Target1, so Target1 is executed in its entirety before Target2 starts, and because Target1 has finished when Target2 starts, the Prop1 property is available for use within Target2.

Listing 11 Correct way to pass information between targets with static target dependency
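
A minimal sketch, assuming standard MSBuild syntax:

    <Target Name="Target1">
      <CreateProperty Value="SomeValue">
        <Output TaskParameter="Value" PropertyName="Prop1" />
      </CreateProperty>
    </Target>
    <Target Name="Target2" DependsOnTargets="Target1">
      <!-- Works: Target1 has completed, so Prop1 has become global -->
      <Message Text="$(Prop1)" />
    </Target>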

This is all fine, but there is still the issue with CallTarget from Listing 10. There are existing workarounds for this and one of these involves writing the properties to a file in the calling target and then reading them from the called target. This is demonstrated in Listing 12.

Listing 12 Workaround for passing information between targets with CallTarget
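
A minimal sketch of the workaround, assuming standard MSBuild syntax; the temporary file path is illustrative:

    <Target Name="Target1">
      <CreateProperty Value="SomeValue">
        <Output TaskParameter="Value" PropertyName="Prop1" />
      </CreateProperty>
      <!-- Persist the property so the called target can read it -->
      <WriteLinesToFile File="c:\Temp\props.txt" Lines="$(Prop1)" Overwrite="true" />
      <CallTarget Targets="Target2" />
    </Target>
    <Target Name="Target2">
      <ReadLinesFromFile File="c:\Temp\props.txt">
        <Output TaskParameter="Lines" PropertyName="Prop1FromFile" />
      </ReadLinesFromFile>
      <Message Text="$(Prop1FromFile)" />
    </Target>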

Note how the calling target – Target1 – writes the Prop1 property to a file which is then read by Target2. It is not elegant but it gets the job done.

How to continue with a job despite failing tasks

The standard processing behaviour for a job is to terminate execution completely when a task execution fails. For example, in a job like the one shown in Listing 1, execution would terminate if, for instance, the CalculateTimeseriesQuantile task failed.

A task can fail for a number of different reasons, but typically an error situation occurs during task execution from which the task itself cannot recover. Again referring to Listing 1: if the value of the Timeseries attribute were misspelled, the task would not be able to locate the time series and thus could not execute. The task execution would fail and the job processing would be terminated.

In some cases it might be undesirable to have a job terminate due to a task execution failure. This can be avoided through the task attribute ContinueOnError – if this attribute is given the value true, a task execution error in that specific task will be ignored. Listing 13 below shows an example of this.

Listing 13 Use of the ContinueOnError task processing instruction
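
A minimal sketch, assuming standard MSBuild syntax; paths and parameter names are illustrative:

    <Target Name="Main">
      <ImportTimeseries SourceFile="c:\Data\Example.dfs0"
                        DestinationGroup="/Timeseries/Imported" />
      <!-- A failure in this task is ignored and execution continues -->
      <CalculateTimeseriesQuantile Timeseries="/Timeseries/Imported/Example"
                                   ContinueOnError="true">
        <Output TaskParameter="Quantile" PropertyName="MyProperty" />
      </CalculateTimeseriesQuantile>
      <WriteLinesToFile File="c:\Temp\result.txt" Lines="$(MyProperty)" />
    </Target>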

If the CalculateTimeseriesQuantile task in the above listing fails during execution, the job will not terminate but simply continue with the WriteLinesToFile task, because the ContinueOnError attribute value has been set to true. Should either the ImportTimeseries or the WriteLinesToFile task fail, however, the job execution will terminate, as these tasks have not been decorated with a true value for their ContinueOnError attribute.

How to handle errors

Task execution errors can be handled in three different ways:

  1. Terminate the job – which is the default action

  2. Ignore the error – through the use of the ContinueOnError processing instruction. See the section How to continue with a job despite failing tasks.

  3. Catch the error in an OnError element.

The OnError element, which looks very much like a task, simply transfers execution to another target. In other words, it acts very similarly to the CallTarget task. Listing 14 below shows the XML format of the OnError element.

Listing 14 The OnError element XML format
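
A minimal sketch of the element, assuming standard MSBuild syntax:

    <OnError ExecuteTargets="TargetName" />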

Listing 15 depicts an example of how to use the OnError element.

Listing 15 Use of OnError
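
A minimal sketch, assuming standard MSBuild syntax:

    <Target Name="Target1">
      <Error Text="Something went wrong" />
      <!-- Catches the error above and branches to Target2 -->
      <OnError ExecuteTargets="Target2" />
    </Target>
    <Target Name="Target2">
      <Message Text="Handling the error" />
    </Target>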

In the example above the Error task raises an error which is caught by the OnError element. The OnError element branches execution to the Target2 target. Note the following two aspects of the OnError element:

  1. Although the OnError element appears as a normal task, it is executed in the context of the job and not in the context of the target it appears in. This implies that properties that have been defined within the failing target are globalized and thus available for use within the target(s) that the OnError element branches to.

  2. A target can include multiple OnError elements; in such cases they will be called in the order listed.

The Job Manager job editor treats the OnError element as a task, see the section The Job Editor View.

Hints and best practices

This section provides details on various aspects of job authoring.

Use of job properties

Job properties are an integral part of designing and implementing jobs and have already been discussed at length in previous sections. This section summarises where job properties can be defined and applied.

Definition of job properties can take place:

  • In a PropertyGroup section and will thus have global scope within the job

  • Through a CreateProperty task; the property will have target scope until the target completes execution, after which it becomes globally available

  • Through converting a task output parameter to a job property. This takes place through the Output element enclosed within a task definition

  • Through specification at execution time, either from the user interface's Execute and Schedule dialogs (see the sections Execute a job and Schedule a job) or through the JobRunner.exe command line (see the section Debugging).

Use of job items

Occasionally a job needs to work with lists of information, e.g. lists of input data files or lists of time series. This is not easily achieved with the job constructs discussed so far. For this purpose, jobs provide items, with the following definitions:

  • Items are input to the job processing and typically represent lists of information values. Items are grouped into item types based on their element names; an item type is a named list of items that can be used as a parameter for tasks. The tasks use the item values to perform the steps of the job execution.

  • An ItemGroup is a collection of user-defined Item elements. Every item used in a job must be specified as a child of an ItemGroup element.

This explanation is perhaps a bit abstract, but the example shown in Listing 16 makes it easier to understand. The example shows how to import all the time series files found in a folder to a time series group within the Workbench database.

Listing 16 Loop over files
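
A minimal sketch, assuming standard MSBuild syntax; the folder path and destination group are illustrative:

    <ItemGroup>
      <dfs0 Include="c:\Data\*.dfs0" />                          <!-- point 1 -->
    </ItemGroup>
    <Target Name="Import">
      <!-- Batching: the task is called once per item in the dfs0 list -->
      <ImportTimeseries SourceFile="%(dfs0.Identity)"
                        DestinationGroup="/Timeseries/Imported" /> <!-- point 2 -->
    </Target>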

Note the following in the listing:

  • At point 1 – a list named dfs0 of files is created. The *.dfs0 part of the Include statement matches all files with the .dfs0 file extension. Other wildcard possibilities comprise '?', which matches a single character, and '**', which matches a partial path[^2]. It is also possible to exclude some of the files in the Include list through an Exclude attribute. Adding Exclude="a*.dfs0" to the dfs0 item definition would result in importing all dfs0-files except those starting with an 'a'.

    [^2]: E.g. Include="c:\Data\**\*.dfs0" would result in all dfs0-files in all folders under c:\Data.

  • At point 2 – the same list is used for importing the time series to the Workbench database. The '%' character in %(dfs0.Identity) informs the job executor to call the ImportTimeseries task for each element in the dfs0 list, passing to it the identity of the element – the file name.

Another operator operating on items, besides '%', is '@', which combines the items into a semicolon-separated property value. This is demonstrated in Listing 17.

Listing 17 Batching items
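
A minimal sketch, assuming standard MSBuild syntax:

    <ItemGroup>
      <MyItem Include="A;B;C" />
    </ItemGroup>
    <Target Name="Demo">
      <!-- Executed once per item: prints A, then B, then C -->
      <Message Text="%(MyItem.Identity)" />
      <!-- Executed once: prints A;B;C -->
      <Message Text="@(MyItem)" />
    </Target>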

Running the above job snippet results in the output shown to the right of the job code. Note how the first Message task has been executed three times – once per item – while the second has been executed just once, combining the items into a single list.

In some cases it might be required to convert a property, e.g. one obtained from a task output parameter, to an item in order to execute a task on each of the elements in the item. This can be performed through the CreateItem task as depicted in Listing 18.

Listing 18 Conversion of a property to an item
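
A minimal sketch, assuming standard MSBuild syntax:

    <PropertyGroup>
      <Test>a;b;c</Test>                                     <!-- point 1 -->
    </PropertyGroup>
    <Target Name="Demo">
      <CreateItem Include="$(Test)">                         <!-- point 2 -->
        <Output TaskParameter="Include" ItemName="TestItem" />
      </CreateItem>
      <!-- Executed once per element of the new item -->
      <Message Text="%(TestItem.Identity)" />
    </Target>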

This example defines a property – Test – as a semicolon-separated list of characters and at point 2 transforms it into an item. Note how the use of the CreateItem task requires an embedded Output element in order to capture the task output as an item.

The Job Manager provides a special task, JobHelper, for transforming individual elements of an item into named properties. An example is demonstrated in Listing 19, where the job uses the standard RunScript task for executing a custom script – MyScript. Now assume MyScript returns time series meta-data in the form “min-value;max-value;mean-value;null-value-count” and that the mean-value shall be used as input in a later task.

Listing 19 JobHelper example
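
A sketch of the pattern; the JobHelper parameter names used here (Split, Items, Index, Value) are hypothetical – consult the section JobHelper task in Appendix A for the actual signature:

    <Target Name="Demo">
      <RunScript Script="MyScript">                              <!-- point 1 -->
        <Output TaskParameter="Result" PropertyName="ScriptResult" />
      </RunScript>
      <!-- Hypothetical parameters: split the string into an item collection -->
      <JobHelper Split="$(ScriptResult)">                        <!-- point 2 -->
        <Output TaskParameter="Items" ItemName="Values" />
      </JobHelper>
      <!-- Hypothetical parameters: pick the element at index 2 (the mean value) -->
      <JobHelper Items="@(Values)" Index="2">                    <!-- point 3 -->
        <Output TaskParameter="Value" PropertyName="MeanValue" />
      </JobHelper>
      <Message Text="Mean value: $(MeanValue)" />                <!-- point 4 -->
    </Target>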

Note in this example how:

  • At point 1 the RunScript task has just one output parameter, named Result. The example script (MyScript) returns a text string with a semicolon-separated list of values

  • At point 2 the JobHelper task converts the semicolon-separated output string from the RunScript task into an item collection

  • At point 3 the JobHelper task returns the value of the item at index 2. The embedded Output element converts this to a property named MeanValue

  • At point 4 the mean value is printed.

See the section JobHelper task in Appendix A – Standard tasks for more detailed information on the JobHelper task.

Debugging

Creating jobs can be compared to writing software programs; sometimes jobs function differently from what was anticipated. This can be due to tasks behaving differently than imagined, errors in passing information between tasks or targets, or job properties being misspelled or wrongly initialised. In such cases it can often be beneficial to “debug” the jobs.

Debugging jobs is best performed from a command line outside the MIKE Workbench environment. Jobs, whether scheduled or directly executed, are always run using the DHI.Solutions.JobManager.JobRunner.exe (hereinafter JobRunner) command line tool.

  • In the event that the job is executed directly from within the Job Manager, that component will spawn the JobRunner as a detached and hidden process.

  • If the job is executed through a scheduled process, the Windows Task Scheduler will execute the JobRunner according to the defined schedule.

Executing the JobRunner from a command line requires the user to:

  • Open a command prompt, e.g. by pressing the Windows key+R, typing cmd.exe and clicking OK

  • Ensure the Windows PATH environment variable includes the Workbench installation folder, e.g. by issuing the command: path %PATH%;C:\Program Files (x86)\DHI\2019\MIKE OPERATIONS

Now the JobRunner can be executed in the following way:

Listing 20 JobRunner command line
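
A sketch of an invocation; the connection, workspace, user and job names are illustrative:

    DHI.Solutions.JobManager.JobRunner.exe -c MyConnection -w MyWorkspace -u MyUser -p MyPassword -j dss://Example1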

The command line arguments imply the following:

  • -c shall be followed by the connection name, i.e. the same connection name that is used when logging onto the Workbench

  • -w shall be followed by the workspace name

  • -u shall be followed by the user name

  • -p shall be followed by the password corresponding to the specified user name

  • -j shall be followed by a reference to the job that shall be executed.

    This can be provided in one of two ways:

  • If the job has been exported as an XML file, the reference can be the path to the file, e.g. -j c:\temp\Example.job

  • If the job resides in the database, the reference shall be specified as e.g. -j dss://Example1

Other useful command line arguments include:

  • -a key=value, where key=value shall be specified as “MyKey=MyValue”. The JobRunner converts these key/value pairs into job properties. The command line can take multiple -a arguments, as in the example below.
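
For example, adding two job properties to the invocation sketched above (property names and values are illustrative):

    DHI.Solutions.JobManager.JobRunner.exe -c MyConnection -w MyWorkspace -u MyUser -p MyPassword -j dss://Example1 -a "Threshold=10" -a "Mode=Test"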

When debugging from the command line, it is often a good idea to insert a number of Message tasks with information like the target name and the values of various properties. Listing 21 shows a slightly changed version of Listing 1, where a number of Message tasks have been added and the original WriteLinesToFile task has been substituted by a Message task.

Listing 21 Changed version of Example1
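
A minimal sketch of the changed job, assuming standard MSBuild syntax; paths and parameter names are illustrative:

    <Target Name="Main">
      <Message Text="Starting target Main" />
      <ImportTimeseries SourceFile="c:\Data\Example.dfs0"
                        DestinationGroup="/Timeseries/Imported" />
      <Message Text="Import finished" />
      <MaximumTimeseriesValue Timeseries="/Timeseries/Imported/Example">
        <Output TaskParameter="MaximumValue" PropertyName="MaxValue" />
      </MaximumTimeseriesValue>
      <!-- Replaces the original WriteLinesToFile task -->
      <Message Text="Maximum value: $(MaxValue)" />
    </Target>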

The figure below shows the execution of this job from a command line.

Figure 2 Executing a job from a command line

The red highlighted text in the figure indicates the output from the three added Message tasks. Also note the log information printed for each of the tasks.

Date and time specification

Many of the tasks provided by the MIKE Workbench take a time stamp as an input parameter. In order to ensure that jobs can be shared across computers with potentially different regional settings – including date formats – it is advisable to always use the so-called invariant format for dates. Listing 22 shows how to specify a time stamp according to the invariant format.

Listing 22 Invariant time stamp format.
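
A sketch, assuming the invariant format follows the pattern yyyy-MM-dd HH:mm:ss; the task and attribute names are illustrative:

    <SomeTask StartTime="2019-03-31 12:00:00" />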

The Job Manager also supports the time stamp format defined through the regional settings on the computer but, as already mentioned, it is advisable to use the invariant format.

There are a number of useful standard tasks for handling time stamps. These include:

  • MakeTimeStamp which can generate a time stamp based on an offset from “now”.

  • SetTimeStamp which can write a time stamp to a specified place in the registry or update a file’s last modified time

  • GetTimeStamp which can read a time stamp from a specified place in the registry or from a file’s last modified time.

User interface support

The description of jobs has so far mainly been given at what could be called the source code level – the job XML format. However, this is not how most users will work with jobs. The MIKE Workbench Job Manager provides an extensive user interface that completely shields the user from the intricacies of the XML format.

This section describes how to create, execute and schedule jobs from within the Job Manager User interface.

The Job Explorer

The Job Explorer, which is depicted in Figure 3, is used for managing jobs in terms of opening jobs for editing, renaming and deleting them. Jobs are also executed and scheduled from here.

Figure 3 Job explorer and job context menu

Table 2 below briefly describes the menu item actions.

  • Open – Opens the selected job for editing in the Job Editor view

  • Refresh – Refreshes the job tree

  • Delete – Deletes the selected job and all schedules defined for it

  • Rename – Renames the selected job

  • Create a Schedule – Sets up an execution schedule for the selected job on a specified job host

  • Edit Schedule – Edits existing job schedules on a specified job host

  • Unschedule – Unschedules the selected job on a selected job host

  • Execute – Executes the selected job. Note: a schedule is typically set up for automatic scheduled execution, while the Execute menu item is for direct one-time execution

  • Export – Exports the selected job as an XML file

  • Import – Imports an XML job file into the Workbench database

Table 2 Job menu items

Note: In order to create a new job, select the Create job menu item on the root node (Database) in the Job Explorer.

An executed job, whether scheduled or directly executed, always produces a log of the execution. These logs are available as child nodes of the job. In Figure 3 it can thus be deduced that Job-3 has been executed once and Job-1 seven times. The job log name is the time stamp of the execution.

The Job Editor View

This view, which is depicted in Figure 4, is used for editing jobs. As can be seen, it provides a tree-structured view with three types of root level nodes. These are:

  • A Properties node used for defining global job properties

  • An ItemGroup node used for defining global items

  • Target nodes

Figure 4 Job editor view

Note the following from the figure:

  • PrepareData at point 1 is a target including 15 tasks

  • CreateProperty at point 2 is a task including an Output element

  • Value at point 3 represents an Output element

  • Properties for any selected node can be edited from the Property control shown at point 4

  • The toolstrip shown at point 5 provides functionality for saving the job being edited, adding a new element to the job (the type of element depends on the currently selected node), copying and pasting elements, moving elements up and down and, finally, deleting elements.

Adding elements – properties, items, targets, tasks and output elements – always takes place from the parent node of the element in question. This is done in the following way:

  • In order to add a new target: select the root node (representing the whole job) and click the Add button

  • In order to add a new property: select the Properties node and click the Add button

  • In order to add a new item: select the ItemGroup node and click the Add button

  • In order to add a new task: select the relevant target node and click the Add button

  • In order to add a new output element: select the relevant task and click the Add button.

When clicking the Add button for adding a task, a task selection form appears, as shown in Figure 5.

Figure 5 Task selection form

As can be seen from the figure, tasks are categorized according to functionality area – scenarios, time series etc. The last task category – MSBuild[^3] Tasks – includes a number of general purpose tasks which are not Workbench specific but rather deal with file handling and job building functionality.

[^3]: The name MSBuild originates from the base technology used for the MIKE Workbench job implementation – the Microsoft MSBuild technology.

This includes tasks like CallTarget, OnError, Message, WriteLinesToFile, ReadLinesFromFile etc.

The Job Instance Log

Whenever a job is executed, a job instance log is generated. The log includes the status of each executed task, including its input, output, processing time and memory usage. An example of such a log is displayed in Figure 6.

Figure 6 Job instance log

The log displayed in the figure comes from an execution of the job in Listing 1. Note in the figure how the log displays a green icon indicating that the task executed without errors, and how each task execution is logged in three sections: Information, with an overall status of the execution; Properties, with a listing of all the task input and output parameters and their values; and Log, which includes various task-specific log information.

Note: The log is created and updated in the database as the job progresses through the task execution. At the same time, the Job Explorer is notified about changes to the job log in order to refresh the job user interface with the latest information on the executing jobs. This notification, however, relies on the MIKE Workbench Event Manager being active. Should this not be the case, the job user interface will not be updated while the job is executing.

Execute a job

Job execution takes place from the Job Explorer context menu by selecting the Execute… menu item (see Figure 3), which leads to the Job Execution form shown in Figure 7.

Figure 7 Execute dialog.

The user will have to specify:

  • The name of the computer – the job host – that will host the execution. Job hosts must be defined, as described in the section Defining job hosts, before they can be used for executing jobs.

  • The target or targets from the job file that shall be executed.

  • Optionally, a maximum allowed execution time for the job (after which it will be killed)

The Settings tab, which is shown below in Figure 8, is used for defining job properties. These are key=value type settings for the job, allowing the user to specify a specific value of a job property for this particular execution of the job (overriding any value set in the job itself).

Figure 8 Job property definition

Schedule a job

Instead of directly executing a job as described in the previous section, a job can be scheduled for execution. This happens in much the same way as direct execution, through selection of the Create a schedule… menu item on the job context menu. To change an existing schedule, select the Edit schedule… menu item.

When creating a schedule, the user selects the job host to schedule the job on and then adds the schedule, see Figure 9 Create a Schedule. It is only possible to add schedules on job hosts which do not already have a schedule for the job. To add new schedules for a job already scheduled on a job host, use Edit Schedule.

Figure 9 Create a Schedule

A schedule is a specification of how to run the job on a job host at predefined times. Times are given by one or more triggers, each of which can be recurring.

Figure 10 Add Schedule - General

When adding a schedule, the user is presented with three tab pages – the same two as used for direct execution as well as a Triggers tab page. The latter is depicted in Figure 11 below.

Figure 11 Triggers

Users apply triggers – date and time specifications – to define when execution shall take place. Executions can be defined as single or recurrent, for example every three hours.

Disable a scheduled job

Scheduled executions can be cancelled through the Unschedule execution… menu item in the Job context menu.

Note that in cases where a job has been scheduled for execution on multiple job hosts, the user will be prompted as to which of the schedules should be cancelled.

Defining job hosts

Remote hosts are computers that can run simulations and jobs as background processes, i.e. processes that are separate from the application process the user works with when defining and starting the execution.

Configuration of a remote host requires the following information:

  • The name or IP address of the computer

  • The Job Service port; the default value is 8089

If no job host is defined, the system defaults to “localhost”. This is only feasible in a single-machine setup where MIKE OPERATIONS and the database reside on the same machine.

In systems with multiple servers, named job hosts should be defined. This makes it clear which machine holds a schedule and allows cross-machine scheduling, where a client can schedule any job on any machine.

Figure 12 Remote host dialog, defining 5 remote hosts

When clicking OK, the name and connection to the job hosts are validated.

Accessing Remote host details

The Job Service on a remote job host may serve multiple MIKE OPERATIONS databases. The Job Explorer only displays information about jobs in the current database.

The Remote Host dialog (see Figure 12) enables direct communication with the Job Service and displays details of all the jobs in the Job Service. It also provides the possibility to control the Job Service.

Job details

Selecting a job host and clicking the “Schedule” button will display all job schedules in the Job Service of the remote host, see Figure 13.

Figure 13 Job Service schedules

From this dialog it is possible to inspect the triggers of a job using the “Show Triggers” button (see Figure 14) as well as to “Unschedule” a job (select the whole line in the list) – even if it belongs to a job defined in another database (see Figure 15).

Figure 14 Job triggers

Figure 15 Unschedule in Job Service

Job Service Control

The “Settings…” button in the Remote hosts dialog opens a small dialog from where it is possible to control the Job Service, see Figure 16.

Figure 16 Controlling Job Service

The options are:

  • Active / Pause
    Active is the normal running mode of the Job Service. In this mode it will trigger jobs to be run. Pause will suspend the triggering of jobs. This can be useful if extraordinary maintenance of the database is needed without new jobs starting and connecting to it.

  • Standard / Verbose logging controls how much the Job Service writes to the JobExecution*.log files that it produces. Verbose is useful if extended debugging of behaviour is needed.
    The log files are usually written in C:\Windows\temp\DHIDSS

  • Reload. Clicking this button instructs the Job Service to reload the schedules, thus resetting the schedule details in memory. The schedules are kept in C:\ProgramData\DHI\JobSchedules.xml