Promtutorialv2 PDF
Authors:
February 2008
Contents

1 Introduction
1.1 Common Questions
1.2 Process Mining
1.3 Running Example
1.4 Getting Started
5 Conclusions
Chapter 1
Introduction
This document shows how to use the ProM tool to answer some of the common questions that managers have about processes in organizations. The questions are listed in Section 1.1. To answer them, we use the process mining plug-ins supported in the ProM tool. This tool is open source and can be downloaded at www.processmining.org. For the reader unfamiliar with process mining, Section 1.2 provides a concise introduction. All the questions listed in Section 1.1 are answered based on an event log from the running example described in Section 1.3. Finally, we advise you to have the ProM tool at hand while reading this document, so that you can play with the tool while reading the explanations. Section 1.4 explains how to get started with ProM.
6. Which paths take too much time on average? How many cases follow
these routings? What are the critical sub-paths for these paths?
7. What is the average service time for each task?
8. How much time was spent between any two tasks in the process model?
9. How are the cases actually being executed?
10. What are the business rules in the process model?
11. Are the rules indeed being obeyed?
12. How many people are involved in a case?
13. What is the communication structure and dependencies among people?
14. How many transfers happen from one role to another role?
15. Who are the important people in the communication flow (i.e. the most frequent flow)?
16. Who subcontracts work to whom?
17. Who works on the same tasks?
We show how to use ProM to answer these questions in Chapters 3 and 4.
Figure 1.1: Petri net illustrating the control-flow perspective that can be mined from the event log in Table 1.1.
When the log registers which tasks are executed for every case and in which order, the control-flow perspective can be mined. The log in Table 1.1 has this data (cf. fields "Case ID", "Task Name" and "Timestamp"). So, for this log, mining algorithms could discover the process in Figure 1.1 (the reader unfamiliar with Petri nets is referred to [7, 9, 10]). Basically, the process describes that after a fine is entered in the system, the bill is sent to the driver. If the driver does not pay the bill within one month, a reminder is sent. When the bill is paid, the case is archived.

If the log provides information about the persons/systems that executed the tasks, the organizational perspective can be discovered. The organizational perspective reveals information like the social network in a process, based on transfer of work, or allocation rules linked to organizational entities like roles and units. For instance, the log in Table 1.1 shows that "Anne" transfers work to both "Mary" (case 2) and "John" (cases 3 and 4), and "John" sometimes transfers work to "Mary" (case 4). Moreover, by inspecting the log, a mining algorithm could discover that "Mary" never has to send a reminder more than once, while "John" does not seem to perform as well. The managers could talk to "Mary" and check whether she has a different approach to sending reminders that "John" could benefit from. This can help in making good practices common knowledge in the organization.

When the log contains more details about the tasks, like the values of data fields that the execution of a task modifies, the case perspective (i.e. the perspective linking data to cases) can be discovered. For instance, a forecast for running cases can be made based on already completed cases, exceptional situations can be discovered, etc. In our particular example, logging information about the profiles of drivers (like age, gender, car, etc.) could help in assessing the probability that they pay their fines on time. Moreover, logging information about the places where the fines were applied could help in improving the traffic measures in these places.

From this explanation, the reader may have already noticed that the control-flow perspective relates to the "How?" question, the organizational perspective to the "Who?" question, and the case perspective to the "What?" question. All three perspectives are complementary and relevant for process mining, and all of them can be analyzed with the ProM tool.
The ProM framework [4, 11] is an open-source tool specially tailored to support the development of process mining plug-ins. The tool is currently at version 4.2 and contains a wide variety of plug-ins. Some of them go beyond process mining (doing process verification, converting between different modelling notations, etc.). However, since our focus in this tutorial is to show how to use ProM plug-ins to answer common questions about processes in companies (cf. Section 1.1), we concentrate on the plug-ins that take as input (i) an event log only, or (ii) an event log and a process model.
Figure 1.2: Sources of information for process mining. The discovery plug-ins
use only an event log as input, while the conformance and extension plug-ins
also need a (process) model as input.
Figure 1.2 illustrates how these plug-ins can be categorized. The plug-ins that use only the data in the event log are called discovery plug-ins, because they do not use any existing information about deployed models. The plug-ins that check how well the data in the event log matches the behavior prescribed in a deployed model are called conformance plug-ins. Finally, the plug-ins that need both a model and its log to discover information that will enhance this model are called extension plug-ins. In the context of our common questions, we use (i) discovery plug-ins to answer questions like "How are the cases actually being executed? Are the rules indeed being obeyed?", (ii) conformance plug-ins to answer questions like "How compliant are the cases (i.e. process instances) with the deployed process models? Where are the problems? How frequent is the (non-)compliance?", and (iii) extension plug-ins to answer questions like "What are the business rules in the process model?"
After a defective phone is registered, it is sent to the Problem Detection (PD) department. There it is analyzed and its defect is categorized. In total, there are 10 different categories of defects that the phones fixed by this company can have. Once the problem is identified, the telephone is sent to the Repair department and a letter is sent to the customer to inform him/her about the problem. The Repair (R) department has two teams. One of the teams can fix simple defects and the other team can repair complex defects. However, some of the defect categories can be repaired by both teams. Once a repair employee finishes working on a phone, the device is sent to the Quality Assurance (QA) department. There it is analyzed by an employee to check whether the defect was indeed fixed. If the defect is not repaired, the telephone is sent to the Repair department again. If the telephone is indeed repaired, the case is archived and the telephone is sent to the customer. To save on throughput time, the company only tries to fix a defect a limited number of times. If the defect is not fixed, the case is archived anyway and a brand new device is sent to the customer.
Question | Section
How many cases (or process instances) are in the log? | 2.1
How many tasks (or audit trail entries) are in the log? | 2.1
How many originators are in the log? | 2.1
Are there running cases in the log? | 2.1
Which originators work on which tasks? | 2.1
How can I filter the log so that only completed cases are kept? | 2.2
How can I see the result of my filtering? | 2.2
How can I save the pre-processed log so that I do not have to redo work? | 2.2
The event log for the running example can be downloaded from tabu.tm.tue.nl/wiki/_media/tutorial/repairexample.zip. This log has process instances of the running example described in Section 1.3.
To open this log, do the following:
1. Download the log for the running example and save it on your computer.
2. Start the ProM framework. You should get a screen like the one in Figure 2.1. Note that the ProM menus are context sensitive. For instance, since no log has been opened yet, no mining algorithm is available.
3. Open the log by clicking Open→Open MXML Log file and selecting your saved copy of the log file for the running example. Once your log is opened, you should get a screen like the one in Figure 2.2. Note that more menu options are available now.
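For the curious: an MXML file is plain XML in which a WorkflowLog holds processes, each process holds process instances (cases), and each instance holds audit trail entries (events) with a task name, event type, timestamp and originator. Below is a minimal sketch in Python that parses a hand-written two-event fragment with the standard library; the case data is invented for illustration, and real logs such as repairexample.zip are of course much larger.

import xml.etree.ElementTree as ET

# Hand-written MXML fragment with invented case data. Real files start
# with an XML declaration and contain many process instances.
MXML = """\
<WorkflowLog>
  <Process id="repair">
    <ProcessInstance id="1">
      <AuditTrailEntry>
        <WorkflowModelElement>Register</WorkflowModelElement>
        <EventType>complete</EventType>
        <Timestamp>2008-02-01T10:00:00</Timestamp>
        <Originator>System</Originator>
      </AuditTrailEntry>
      <AuditTrailEntry>
        <WorkflowModelElement>Analyze Defect</WorkflowModelElement>
        <EventType>start</EventType>
        <Timestamp>2008-02-01T10:05:00</Timestamp>
        <Originator>Tester1</Originator>
      </AuditTrailEntry>
    </ProcessInstance>
  </Process>
</WorkflowLog>
"""

root = ET.fromstring(MXML)
for instance in root.iter("ProcessInstance"):
    print("case", instance.get("id"))
    for entry in instance.iter("AuditTrailEntry"):
        task = entry.findtext("WorkflowModelElement")
        kind = entry.findtext("EventType")
        who = entry.findtext("Originator")
        print("  %s (%s) by %s" % (task, kind, who))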
Now that the log is opened, we can proceed with the actual log inspection.
Recall that we want to answer the following questions:
1. How many cases (or process instances) are in the log?
2. How many tasks (or audit trail entries) are in the log?
3. How many originators are in the log?
4. Are there running cases in the log?
5. Which originators work on which tasks?
The first four questions can be answered by clicking on the tab Summary or by calling the analysis plug-in Log Summary. To call this plug-in, choose Analysis→[log name...]→Log Summary. Can you now answer the first four questions of the list above? If so, you have probably noticed that this log has 104 running cases and 1000 completed cases.
Figure 2.1: Screenshot of the main interface of ProM. The menu File allows you to open event logs and to import models into ProM. The menu Mining provides the mining plug-ins, which mainly focus on discovering information about the control-flow perspective of process models or the social network in the log. The menu Analysis gives access to different kinds of analysis plug-ins for opened logs, imported models and/or mined models. The menu Conversion provides the plug-ins that translate between the different notations supported by ProM. The menu Exports has the plug-ins to export the mined results, filtered logs, etc.
Figure 2.2: Screenshot of ProM after the log of the running example (cf. Section 1.3) has been opened.
You can see this in the table "Ending Log Events" of the log summary (cf. Figure 2.3). Note that only 1000 cases end with the task "Archive Repair". The last question of the list above can be answered by the analysis plug-in Originator by Task Matrix. This plug-in can be started by clicking the menu Analysis→[log name...]→Originator by Task Matrix. Based on the result in Figure 2.4, can you identify which originators perform the same tasks in the running example log? If so, you have probably also noticed that there are 3 people in each of the teams in the Repair department (cf. Section 1.3) of the company. The employees with login "SolverC..." deal with the complex defects, while the employees with login "SolverS..." handle the simple defects.
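To make concrete what the Log Summary and the Originator by Task Matrix count, here is a minimal Python sketch on an invented toy log; a real analysis would of course iterate over the traces parsed from the MXML file rather than over hard-coded tuples.

from collections import Counter

# Toy log: each trace is a list of (task, event type, originator) events.
log = [
    [("Register", "complete", "System"),
     ("Analyze Defect", "complete", "Tester1"),
     ("Repair (Simple)", "complete", "SolverS1"),
     ("Archive Repair", "complete", "System")],
    [("Register", "complete", "System"),
     ("Analyze Defect", "complete", "Tester2")],  # a running case
]

print("cases:", len(log))
print("audit trail entries:", sum(len(trace) for trace in log))
print("originators:", len({who for trace in log for _, _, who in trace}))

# Running cases are the ones that do not end with the archiving task.
print("running cases:", sum(trace[-1][0] != "Archive Repair" for trace in log))

# Originator-by-task matrix: how often each person executed each task.
matrix = Counter((who, task) for trace in log for task, _, who in trace)
for (who, task), n in sorted(matrix.items()):
    print("%s did %s %d time(s)" % (who, task, n))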
Take your time to inspect this log with these two analysis plug-ins and find out more information about it. If you like, you can also inspect the individual cases by first clicking the tab Inspector (cf. the third tab on the left in Figure 2.2), then clicking on the top tab Preview, and finally double-clicking on a specific process instance.
Figure 2.3: Excerpt of the result of clicking on the tab “Summary”, which
presents an overview of data in a log.
Figure 2.4: Screenshot with the result of the analysis plug-in Originator by Task Matrix.
event type. The Start Events filter keeps only the traces (or cases) that start with the indicated tasks. The End Events filter works in a similar way, but filters with respect to the final task of each trace. The Event filter is used to set which events to keep in the log.
From the description of our running example, we know that the completed
cases are the ones that start with a task to register the phone and end with
a task to archive the instance. Thus, to filter the completed cases, you need
to execute the following procedure:
1. Include the event types selection as in Figure 2.2, i.e., "keep" all the complete and start event types;
2. Select the task “Register (complete)” as the start event;
3. Select the task “Archive Repair (complete)” as the final event.
If you now inspect the log (cf. Section 2.1), for instance by clicking on the left tab Summary, you will notice that the log contains fewer cases (can you say how many?) and that all the cases indeed start with the task "Register (complete)" and finish with the task "Archive Repair (complete)".
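Conceptually, this start/end filtering is nothing more than the following Python sketch, shown on an invented toy log of (task, event type) pairs:

# Keep only the traces that start with "Register (complete)" and end
# with "Archive Repair (complete)"; the two toy traces are invented.
log = [
    [("Register", "complete"), ("Analyze Defect", "complete"),
     ("Repair (Simple)", "complete"), ("Archive Repair", "complete")],
    [("Register", "complete"), ("Analyze Defect", "complete")],
]

def is_completed(trace):
    return (trace[0] == ("Register", "complete")
            and trace[-1] == ("Archive Repair", "complete"))

completed = [trace for trace in log if is_completed(trace)]
print(len(completed), "of", len(log), "cases are completed")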
Although the log filters we have presented so far are very useful, they have some limitations. For instance, you cannot rename tasks (or events) in a log. For reasons like this, the advanced tab of the panel with the log (cf. Figure 2.5) provides more powerful log filters. Each log filter has its own help, so we will not go into detail about them here. However, we strongly advise you to spend some time trying them out to get a feeling for how they work. Our experience shows that the advanced log filters are especially useful when handling real-life logs. These filters not only allow for projecting data in the log, but also for adding data to the log. For instance, the log filters Add Artificial Start Task and Add Artificial End Task support the addition of tasks at, respectively, the beginning and the end of traces. These two log filters are handy when applying process mining algorithms that assume the target model has a single start/end point.
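Conceptually, these two filters simply wrap every trace with a fresh task at either end, as in this small sketch (task names invented):

# Give every trace a unique start and end task, as some miners assume.
log = [["Register", "Analyze Defect", "Archive Repair"],
       ["Register", "Inform User", "Archive Repair"]]

normalized = [["ArtificialStart"] + trace + ["ArtificialEnd"] for trace in log]
print(normalized[0])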
Once you are done with the filtering, you can save your results in two
ways:
1. Export the filtered log by choosing the export plug-in MXML log file. This saves a copy of the log that contains all the changes made by the application of the log filters.
2. Export the configured log filters themselves by choosing the export plug-in Log Filter (advanced). Exported log filters can be imported into ProM at a later moment and applied to the same (or another) log. You can import a log filter by selecting File→Open Log Filter (advanced).
If you like, you can export the filtered log for our running example. Can you open this exported log in ProM? What do you notice when inspecting it? Your log should now contain only 1000 cases, and they should all start and end with a single task.
Chapter 3
Questions Based on an Event Log Only
Now that you know how to inspect and pre-process an event log (cf. Chapter 2), we proceed by showing how to answer the questions related to the discovery ProM plug-ins (cf. Figure 1.2). Recall that a log is the only input for these kinds of plug-ins.
The questions answered in this chapter are summarized in Table 3.1. Section 3.1 shows how to mine the control-flow perspective of process models. Section 3.2 explains how to mine information regarding certain aspects of cases. Section 3.3 describes how to mine information related to the roles/employees in the event log. Section 3.4 shows how to use temporal logic to verify whether the cases in a log satisfy certain (required) properties.
Question | Section
How are the cases actually being executed? | 3.1
What is the most frequent path for every process model? | 3.2
How is the distribution of all cases over the different paths through the process? | 3.2
How many people are involved in a case? | 3.3
What is the communication structure and dependencies among people? | 3.3
How many transfers happen from one role to another role? | 3.3
Who are the important people in the communication flow? | 3.3
Who subcontracts work to whom? | 3.3
Who works on the same tasks? | 3.3
Are the rules indeed being obeyed? | 3.4
3.1 Mining the Control-Flow Perspective

The control-flow perspective of a process can be mined with the mining plug-in Alpha algorithm plugin. Thus, to mine the log of our running example, you should perform the following steps:
1. Open the filtered log that contains only the completed cases (cf. Section 2.2), or redo the filtering for the original log of the running example.
2. Verify with the analysis plug-in Log Summary whether the log is correctly filtered. If so, this log should contain 1000 process instances, 12 audit trail entries, 1 start event ("Register"), 1 end event ("Archive Repair"), and 13 originators.
3. Run the Alpha algorithm plugin by choosing the menu Mining→[log name...]→Alpha algorithm plugin (cf. Figure 3.1).
4. Click the button start mining. The resulting mined model should look like the one in Figure 3.2. Note that the Alpha algorithm plugin uses Petri nets as its notation to represent process models. (Different mining plug-ins can work with different notations, but the main idea is always the same: portray the dependencies between tasks in a model. The use of different notations does not prevent interoperability between these representations, because the ProM tool offers conversion plug-ins that translate models from one notation to another.) From this mined model, you can observe that:
• All cases start with the task "Register" and finish with the task "Archive Repair". This is not really surprising, since we have filtered the cases in the log.
• After the task "Analyze Defect" completes, some tasks can occur in parallel: (i) the client can be informed about the defect (see task "Inform User"), and (ii) the actual fixing of the defect can be started by executing the task "Repair (Complex)" or "Repair (Simple)".
• The model has a loop construct involving the repair tasks.
Based on these remarks, we can conclude that the cases in our running
example log have indeed been executed as described in Section 1.3.
5. Save the mined model by choosing the menu option Exports→Selected
Petri net→Petri Net Kernel file. We will need this exported model in
Chapter 4.
6. If you prefer to visualize the mined model in another representation, you can convert it by invoking one of the menu options under Conversion. As an example, you can convert the mined Petri net to an EPC by choosing the menu option Conversion→Selected Petri net→Labeled WF net to EPC.
As a final note, although in this section we mined the log with the Alpha algorithm plugin, we strongly recommend that you try other plug-ins as well. The main reason is that the Alpha algorithm plugin is not robust to logs that contain noise (as real-life logs typically do). Thus, we suggest you have a look at the help of the other plug-ins before settling on a specific one. As a hint: we have had good experiences applying the mining plug-ins Multi-phase Macro plugin, Heuristics miner and Genetic algorithm plugin to real-life logs.
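To give a feeling for what the Alpha algorithm extracts from a log, the sketch below computes the ordering relations it builds on: if task a is directly followed by b in some trace but never the other way around, a causal dependency a → b is assumed; if both orders occur, a and b are considered parallel. This is a simplified illustration on two invented traces, not the full algorithm.

from itertools import product

# Two invented traces; "Inform User" and "Repair (Simple)" occur in
# both orders, so they should come out as parallel.
log = [["Register", "Analyze Defect", "Inform User", "Repair (Simple)",
        "Test Repair", "Archive Repair"],
       ["Register", "Analyze Defect", "Repair (Simple)", "Inform User",
        "Test Repair", "Archive Repair"]]

tasks = {task for trace in log for task in trace}

# a > b: a is directly followed by b in at least one trace.
follows = {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}

for a, b in product(sorted(tasks), repeat=2):
    if (a, b) in follows and (b, a) not in follows:
        print(a, "->", b)            # causal dependency
    elif (a, b) in follows and (b, a) in follows and a < b:
        print(a, "||", b)            # parallel tasks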
Figure 3.3: The options are on the left side, the sequence diagrams are in the middle, and the pattern frequencies and throughput times are on the right side.
3.3 Mining Organizational-Related Information

For instance, to check which people are involved in process instance 120 of our example log, you can do the following:
1. Open the filtered log (cf. Section 2.2) for the running example.
2. Click on the left tab Inspector.
3. Click on the top tab Preview.
4. Right-click on the panel Process Instance and click on Find....
5. In the dialog Find, field “Text to find”, type in 120 and click “OK”.
This option highlights the process instance in the list.
6. Double-click the process instance 120.
7. Visualize the log summary for this process instance by choosing the menu option Analysis→Previewed Selection...→Log Summary.
You can see who works on the same tasks by using the analysis plug-in Originator by Task Matrix, or by running the mining plug-in Organizational Miner. For instance, to see which roles work on the same tasks in our example log, you can do the following:
1. Open the filtered log (cf. Section 2.2) for the running example.
2. Select the menu option Mining→Filtered...→Organizational Miner, choose the options "Doing Similar Task" and "Correlation Coefficient", and click on start mining.
3. Select the tab Organizational Model. You should get a screen like the one in Figure 3.4. Take your time to inspect the information provided at the bottom of this screen. Notice that the automatically generated organizational model shows that the people with the role "Tester..." work on the tasks "Analyze Defect" and "Test Repair", and so on. If you like, you can edit this automatically generated organizational model using the functionality provided at the other two tabs, Tasks<->Org Entity and Org Entity<->Resource. Note that organizational models can be exported and used as input for other organization-related mining and analysis plug-ins.
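The idea behind "Doing Similar Task" with the correlation coefficient can be sketched in a few lines: build a task-frequency profile per originator and compare profiles pairwise with Pearson's r. The events below are invented; originators whose profiles correlate strongly would be grouped into the same organizational entity.

from collections import Counter
from statistics import correlation  # Pearson's r, Python 3.10+

# Invented (task, originator) events.
events = [("Analyze Defect", "Tester1"), ("Test Repair", "Tester1"),
          ("Analyze Defect", "Tester2"), ("Test Repair", "Tester2"),
          ("Repair (Simple)", "SolverS1"), ("Repair (Simple)", "SolverS2")]

tasks = sorted({t for t, _ in events})
people = sorted({p for _, p in events})
counts = Counter(events)

# Task-frequency profile per originator, over a fixed task order.
profile = {p: [counts[(t, p)] for t in tasks] for p in people}

for i, a in enumerate(people):
    for b in people[i + 1:]:
        print(a, b, round(correlation(profile[a], profile[b]), 2))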
The other remaining questions of the list above are answered by using the mining plug-in Social Network in combination with the analysis plug-in Analyze Social Network. For instance, in the context of our running example, we would like to check whether there are employees who outperform others. By identifying these employees, one can try to make their good practices (or ways of working) common knowledge in the company, so that peer employees also benefit from them. In the context of our running example, we could find out which employees are better at fixing defects.
Figure 3.4: Screenshot of the mining plug-in Organizational Miner.
From the process description (cf. Section 1.3) and from the mined model in Figure 3.2, we know that telephones which were not repaired are sent to the Repair department again. So, we can have a look at the handover of work for the tasks performed by the people in this department. In other words, we can have a look at the handover of work for the tasks Repair (Simple) and Repair (Complex). One possible way to do so is to perform the following steps:
1. Open the log for the running example.
2. Use the advanced log filter Event Log Filter (cf. Section 2.2) to filter the log so that only the tasks "Repair (Simple) (start)", "Repair (Simple) (complete)", "Repair (Complex) (start)" and "Repair (Complex) (complete)" are kept. (Hint: use the analysis plug-in Log Summary to check whether the log is correctly filtered!)
3. Run the Social Network Miner by choosing the menu option Mining→Filtered...→Social network miner.
4. Select the tab Handover of work, and click the button start mining.
You should get a result like the one in Figure 3.5. We could already analyze this result, but the analysis plug-in Analyze Social Network provides a more intuitive user interface, so we use it in the next step.
5. Run the Analyze Social Network plug-in by choosing the menu option Analysis→SNA→Analyze Social Network. Select the options "Vertex size" and "Vertex degree ratio stretch", and set Mouse Mode to "Picking" (so you can use the mouse to re-arrange the nodes in the graph). The resulting graph (cf. Figure 3.6) shows which employees handed over work to other employees in the process instances of our running example. By looking at this graph, we can see that the employees "SolverS3" and "SolverC3" outperform the other employees, because the telephones these two employees fix always pass the test checks and are therefore not re-sent to the Repair department (no other employee has to work on the cases involving "SolverS3" and "SolverC3"). The oval shape of the nodes visually expresses the relation between the in and out degree of the connections (arrows) between the nodes: a higher proportion of incoming arcs leads to a more vertical oval shape, while a higher proportion of outgoing arcs produces a more horizontal oval shape. From this remark, can you tell which employee has the most problems fixing defects?
Take your time to experiment with the plug-ins explained in the procedure above. Can you now answer the other remaining questions?
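Once a log is available as traces of (task, originator) events, the handover-of-work relation itself is easy to compute: count how often one person's event is directly followed by another person's event within the same case. A minimal sketch on invented traces:

from collections import Counter

# Invented traces of (task, originator) events.
log = [[("Register", "System"), ("Repair (Simple)", "SolverS1"),
        ("Test Repair", "Tester1")],
       [("Register", "System"), ("Repair (Simple)", "SolverS2"),
        ("Test Repair", "Tester1"), ("Repair (Simple)", "SolverS1"),
        ("Test Repair", "Tester2")]]

handover = Counter()
for trace in log:
    for (_, a), (_, b) in zip(trace, trace[1:]):
        if a != b:  # ignore two consecutive events by the same person
            handover[(a, b)] += 1

for (a, b), n in handover.most_common():
    print("%s -> %s: %d time(s)" % (a, b, n))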
As a final remark, we point out that the results produced by the Social Network mining plug-in can be exported to more specialized tools like AGNA and NetMiner, which are especially tailored to analyzing social networks and provide more powerful user interfaces.
3.4 Verifying Properties

For Windows users: the manual of the LTL Checker is available via Start→Programs→Documentation→All Documentation→LTLChecker-Manual.pdf.

Figure 3.7: Screenshot of the analysis plug-in Default LTL Checker Plugin.
Chapter 4
Questions Based on a Process Model Plus an Event Log
In this chapter we explain the ProM analysis plug-ins that are used to answer the questions in Table 4.1. These plug-ins differ from the ones in Chapter 3 because they require both a log and a (process) model as input (cf. Figure 1.2). Section 4.1 explains a conformance ProM plug-in that detects discrepancies between the flows prescribed in a model and the actual process instances (flows) in a log. Sections 4.2 and 4.3 describe extension ProM plug-ins that extend models with, respectively, performance characteristics and business rules.
Question | Section
How compliant are the cases (i.e. process instances) with the deployed process models? Where are the problems? How frequent is the (non-)compliance? | 4.1
What are the routing probabilities for each split/join task? | 4.2
What is the average/minimum/maximum throughput time of cases? | 4.2
Which paths take too much time on average? How many cases follow these routings? What are the critical sub-paths for these routes? | 4.2
What is the average service time for each task? | 4.2
How much time was spent between any two tasks in the process model? | 4.2
What are the business rules in the process model? | 4.3
used during the log replay). The log perspective indicates the points of non-compliant behavior for every case in the log.
Take your time to have a look at the results. Can you tell how many traces are not compliant with the model? What are the problems? Have all the devices been tested after the repair took place? Is the client always informed?
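As a rough intuition for what the log replay checks (only an intuition: the Conformance Checker replays the log against the Petri net itself), the sketch below approximates a model by its set of allowed directly-follows pairs and flags any trace that steps outside that set. All task names and pairs are invented and much simpler than the real model.

# Allowed directly-follows pairs, standing in for a (much richer) model.
allowed = {("Register", "Analyze Defect"),
           ("Analyze Defect", "Repair (Simple)"),
           ("Repair (Simple)", "Test Repair"),
           ("Test Repair", "Archive Repair")}

log = [["Register", "Analyze Defect", "Repair (Simple)",
        "Test Repair", "Archive Repair"],
       ["Register", "Repair (Simple)", "Archive Repair"]]  # skips steps

for i, trace in enumerate(log):
    bad = [(a, b) for a, b in zip(trace, trace[1:]) if (a, b) not in allowed]
    print("case", i, "compliant" if not bad else "violations: %s" % bad)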
Figure 4.1: Screenshot of the analysis plug-in Conformance Checker: Model view.

Figure 4.2: Screenshot of the analysis plug-in Conformance Checker: Log view.

4.2 Performance Analysis
shows (i) the bottlenecks (notice the different colors for the places) and (ii) the routing probabilities for each split/join task (for instance, note that in only 27% of the cases the defect could not be fixed on the first attempt). The bottom panel shows information about the waiting times in the places. You can also select one or two tasks to check, respectively, the average service times and the time spent between any two tasks in the process model. If you like, you can also change the settings for the waiting time (the small window at the bottom with High, Medium and Low).
The previous procedure showed how to answer all the questions listed at the beginning of this section, except for one: Which paths take too much time on average? How many cases follow these routings? What are the critical sub-paths for these routes? To answer this last question, we have to use the analysis plug-in Performance Sequence Diagram Analysis (cf. Section 3.2) in combination with the Performance Analysis with Petri net plug-in. In the context of our example, since the results in Figure 4.3 indicate that the cases take on average 1.11 hours to be completed, it would be interesting to analyze what happens in the cases that take longer than that. The procedure to do so has the following steps:
1. If the screen with the results of the Performance Analysis with Petri net plug-in is still open, just choose the menu option Analysis→Whole Log→Performance Sequence Diagram Analysis. Otherwise, open the filtered log for the running example and run Step 2 of the procedure in Section 3.2.
2. In the resulting screen, select the tab Pattern Diagram, set Time sort
to hours, and click on the button Show diagram.
3. Now, click on the button Filter Options to filter the log so that only cases with a throughput time above 1.1 hours are kept.
4. Select the option Sequences with throughput time, choose "above" and type "1.1" in the field "hours". Afterwards, click first on the button Update and then on the button Use Selected Instances. Take your time to analyze the results. You can also use the Log Summary analysis plug-in to inspect the Log Selection. Can you tell how many cases have a throughput time above 1.1 hours? Note that the Performance Sequence Diagram Analysis plug-in shows how often each pattern happened in the log. Try playing with the provided options; for instance, what happens if you now set the Component Type to "Originator"? Can you see how well the employees are doing? Once you have the log selection with the cases whose throughput time is above 1.1 hours, you can check for critical sub-paths by doing the remaining steps in this procedure.
Figure 4.3: Screenshot of the analysis plug-in Performance Analysis with Petri net.
5. Open the exported PNML model that you created while executing the mining procedure in Section 3.1, but this time link it to the selected cases by choosing the menu option File→Open PNML file→With:Log Selection. If necessary, change the automatically suggested mappings and click on the button OK. Now the imported PNML model is linked to the process instances with throughput times above 1.1 hours.
6. Run the analysis plug-in Performance Analysis with Petri net to discover the critical sub-paths for these cases. Take your time to analyze the results. For instance, can you see that now 43% of the defects could not be fixed on the first attempt?
Finally, we suggest you spend some time reading the Help documentation of this plug-in, because it provides additional information beyond what we have explained in this section. Note that the results of this plug-in can also be exported.
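The throughput-time arithmetic used in this section is easy to reproduce. The sketch below derives per-case throughput times from invented first/last timestamps, averages them, and keeps the cases above the 1.1-hour threshold, mirroring the Filter Options step above.

from datetime import datetime

# Invented data: first and last timestamp per case.
cases = {"1": ("2008-02-01 10:00", "2008-02-01 10:54"),
         "2": ("2008-02-01 11:00", "2008-02-01 12:30"),
         "3": ("2008-02-01 09:00", "2008-02-01 10:20")}

def hours(span):
    start, end = (datetime.strptime(s, "%Y-%m-%d %H:%M") for s in span)
    return (end - start).total_seconds() / 3600.0

throughput = {cid: hours(span) for cid, span in cases.items()}
print("average: %.2f hours" % (sum(throughput.values()) / len(throughput)))
print("above 1.1 hours:", sorted(c for c, h in throughput.items() if h > 1.1))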
4.3 Decision Point Analysis

Recall from the problem description (cf. Section 1.3) that there are 10 types of defects.

Figure 4.4: Screenshot of the analysis plug-in Decision Point Analysis: Result tab.

Figure 4.5: Screenshot of the analysis plug-in Decision Point Analysis: Decision Tree/Rules tab.
Chapter 5
Conclusions
This tutorial showed how to use different ProM plug-ins to answer common questions about processes (cf. Section 1.1). Since our focus was on this set of questions, we have not covered many of the other plug-ins that ProM offers. We hope that the subset shown in this tutorial will help you find your way in ProM. If you are interested, you could have a further look at plug-ins that verify (process) models and detect potential problems (the analysis plug-ins Check correctness of EPC, Woflan Analysis and Petri net Analysis), quantify (from 0% to 100%) how much behavior two process models have in common with respect to a given event log (the analysis plug-in Behavioral Precision/Recall), or create simulation models covering the different mined perspectives (the export plug-in CPN Tools). The ProM tool can be downloaded at www.processmining.org.
Bibliography