Conformance checking
Business process conformance checking (conformance checking for short) is a family of process mining techniques for comparing a process model with an event log of the same process.[1] It is used to check whether the actual execution of a business process, as recorded in the event log, conforms to the model and vice versa.
For instance, there may be a process model indicating that purchase orders of more than one million euros require two checks. Analysis of the event log will show whether this rule is followed or not.
Another example is the checking of the so-called “four-eyes” principle stating that particular activities should not be executed by one and the same person. By scanning the event log using a model specifying these requirements, one can discover potential cases of fraud. Hence, conformance checking may be used to detect, locate and explain deviations, and to measure the severity of these deviations.[2]
Overview
Conformance checking techniques take as input a process model and an event log and return a set of differences between the behavior captured in the process model and the behavior recorded in the event log. These differences may be represented visually (e.g., overlaid on top of the process model) or textually as lists of natural-language statements (e.g., "activity x is executed multiple times in the log, but this is not allowed according to the model"). Some techniques also produce a normalized measure (between 0 and 1) indicating to what extent the process model and the event log match each other.
The interpretation of non-conformance depends on the purpose of the model:
- If the model is intended to be descriptive, discrepancies between model and log indicate that the model needs to be improved to capture reality better.
- If the model is normative, then such discrepancies may be interpreted in two ways: they may expose undesirable deviations (i.e., conformance checking signals the need for better control of the process), or they may reveal desirable deviations (i.e., workers may deviate to serve customers better or to handle circumstances not foreseen by the process model).
Techniques
The purpose of conformance checking is to identify two types of discrepancies:
- Unfitting log behavior: behavior observed in the log that is not allowed by the model.
- Additional model behavior: behavior allowed in the model but never observed in the log.
There are broadly three families of techniques for detecting unfitting log behavior: replay, trace alignment and behavioral alignment.
In replay techniques,[3] each trace is replayed against the process model one event at a time. When a replay error is detected, it is reported and a local correction is made so that the replay procedure can resume. The local correction may be, for example, to skip/ignore a task in the process model or to skip/ignore an event in the log.
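The following is a minimal sketch of replay with local error recovery, assuming (hypothetically) that the model is encoded as a deterministic automaton whose transitions are given as a dictionary; when an event does not fit, it is skipped and reported, which is one of the local corrections mentioned above:

```python
# Replay a trace against a model encoded as a deterministic automaton:
# transitions maps (state, activity) -> next state (hypothetical encoding).

def replay(trace, transitions, initial_state):
    """Replay a trace; on a replay error, skip the offending event (local correction)."""
    state, errors = initial_state, []
    for position, activity in enumerate(trace):
        if (state, activity) in transitions:
            state = transitions[(state, activity)]   # the event fits the model
        else:
            errors.append((position, activity))      # replay error: skip the event
    return errors

# A model that only allows the sequence A, B, C.
model = {(0, "A"): 1, (1, "B"): 2, (2, "C"): 3}
print(replay(["A", "X", "B", "C"], model, 0))  # [(1, 'X')]: one replay error
```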
A general limitation of replay methods is that error recovery is performed locally each time that an error is encountered. Hence, these methods might not identify the minimum number of errors that can explain the unfitting log behavior. This limitation is addressed by trace alignment techniques.[4] These latter techniques identify, for each trace in the log, the closest corresponding trace that can be parsed by the model. Trace alignment techniques also compute an alignment showing the points of divergence between these two traces. The output is a set of pairs of aligned traces. Each pair shows a trace in the log that does not match exactly a trace in the model, together with the corresponding closest trace(s) produced by the model.
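The core of an alignment, comparing a log trace with one candidate model trace, can be sketched with dynamic programming over edit operations; the ">>" symbol marks a move that occurs only in the log or only in the model. Real trace alignment additionally searches over all traces the model can produce; here the model trace is assumed to be given:

```python
# Align a log trace with a model trace using minimal-cost edit operations.

def align(log_trace, model_trace):
    n, m = len(log_trace), len(model_trace)
    # cost[i][j] = minimal number of moves to align prefixes of length i and j
    cost = [[i + j for j in range(m + 1)] for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0 if log_trace[i - 1] == model_trace[j - 1] else float("inf")
            cost[i][j] = min(cost[i - 1][j - 1] + match,  # synchronous move
                             cost[i - 1][j] + 1,          # move only in the log
                             cost[i][j - 1] + 1)          # move only in the model
    # Backtrace to recover one optimal alignment.
    moves, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and log_trace[i - 1] == model_trace[j - 1]
                and cost[i][j] == cost[i - 1][j - 1]):
            moves.append((log_trace[i - 1], model_trace[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            moves.append((log_trace[i - 1], ">>")); i -= 1   # in log, not in model
        else:
            moves.append((">>", model_trace[j - 1])); j -= 1 # in model, not in log
    return list(reversed(moves))

print(align(["A", "C", "D"], ["A", "B", "C"]))
# [('A', 'A'), ('>>', 'B'), ('C', 'C'), ('D', '>>')]
```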
Trace alignment techniques do not explicitly handle concurrent tasks or cyclic behavior (repetition of tasks). For example, if four tasks can occur only in a fixed order in the process model (e.g., [A, B, C, D]) but occur concurrently in the log (i.e., in any order), this difference cannot be directly detected by trace alignment, because it cannot be observed at the level of individual traces.
Methods for identifying additional model behavior are based on negative events.[5] These methods start by enhancing the traces in the log with fake (negative) events inserted into all or some traces. A negative event is inserted after a given prefix of a trace if that event is never observed preceded by that prefix anywhere in the log.
For example, if event C is never observed after prefix AB, then C can be inserted as a negative event after AB. Thereafter, the log enhanced with negative events is replayed against the process model. If the process model can replay the negative events, it means that there is behavior captured in the process model that is not captured in the log (since the negative events correspond to behavior that is never observed in the log).
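A minimal sketch of this insertion rule, assuming traces are given as lists of activity labels and the set of activities is known; for simplicity, every prefix of every trace is considered, whereas the techniques in [5] additionally weight and filter candidate negative events:

```python
# Insert negative events: an activity is added as a negative event after a
# prefix of a trace if it is never observed right after that prefix anywhere
# in the log (the rule described above).

def negative_events(log, alphabet):
    observed = set()  # (prefix, next-activity) pairs actually seen in the log
    for trace in log:
        for i, activity in enumerate(trace):
            observed.add((tuple(trace[:i]), activity))
    negatives = set()
    for trace in log:
        for i in range(1, len(trace) + 1):
            prefix = tuple(trace[:i])
            for activity in alphabet:
                if (prefix, activity) not in observed:
                    negatives.add((prefix, activity))
    return sorted(negatives)

log = [["A", "B"], ["A", "C", "D"]]
for prefix, negative in negative_events(log, ["A", "B", "C", "D"]):
    print(f"negative event {negative} after prefix {prefix}")
```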
Notable algorithms
Comparing footprint matrices
Footprint matrices capture the causal dependencies between pairs of activities in an event log, e.g., the fact that activity a is followed by activity b in all traces, but activity b is never followed by activity a.[6] To express this kind of dependency, a list of ordering relations is declared:
Let L be an event log associated with the list A of all activities. Let a, b be two activities in A.
- a >L b if and only if there is a trace σ in L in which the pattern (a, b) occurs, i.e., a is directly followed by b.
- a →L b if and only if a >L b and not b >L a.
- a #L b if and only if not a >L b and not b >L a.
- a ||L b if and only if a >L b and b >L a.
For a process model, such a matrix can be derived from its execution sequences using the play-out technique. Based on the footprint matrices, one can reason that if an event log conforms to a given process model, the two footprint matrices representing the log and the model are identical, i.e., the causal dependencies recorded in the model appear at least once in the event log and vice versa.
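As an illustration, the footprint matrix of an event log can be computed from its directly-follows pairs; the sketch below uses the relation symbols defined above and assumes traces are given as lists of activity labels:

```python
# Compute the footprint matrix of an event log from its directly-follows pairs.

def footprint(log, alphabet):
    directly_follows = {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}
    matrix = {}
    for a in alphabet:
        for b in alphabet:
            if (a, b) in directly_follows and (b, a) in directly_follows:
                matrix[(a, b)] = "||"   # a and b can occur in either order
            elif (a, b) in directly_follows:
                matrix[(a, b)] = "->"   # a is followed by b, never the reverse
            elif (b, a) in directly_follows:
                matrix[(a, b)] = "<-"   # b is followed by a, never the reverse
            else:
                matrix[(a, b)] = "#"    # a and b never directly follow each other
    return matrix

L = [["a", "b"], ["a", "c", "d"]]
fp = footprint(L, ["a", "b", "c", "d"])
print(fp[("a", "b")], fp[("a", "d")])  # -> #
```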
Example: Let L be: {<a, b>, <a, c, d>} and M a model of L. Assume the two footprint matrices are as follows:

Footprint matrix of the event log L:

|   | a | b | c | d |
|---|---|---|---|---|
| a | # | → | → | # |
| b | ← | # | # | # |
| c | ← | # | # | → |
| d | # | # | ← | # |

Footprint matrix of the model M:

|   | a | b | c | d |
|---|---|---|---|---|
| a | # | → | → | → |
| b | ← | # | # | # |
| c | ← | # | # | → |
| d | ← | # | ← | # |
In the footprint matrix of model M, the pattern (a, d) is allowed to occur, which deviates from the event log. The fitness between the event log and the model is computed as the fraction of matching cells:

$\text{fitness} = 1 - \frac{\text{number of deviating cells}}{\text{total number of cells}}$

In this example, the two matrices deviate in the cells (a, d) and (d, a), so the fitness is $1 - \frac{2}{16} = 0.875$.
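The comparison step can be sketched as follows; the dictionary encoding of the matrices and the helper expand are illustrative choices only:

```python
# Count deviating cells between two footprint matrices and derive the fitness.

def expand(follows, alphabet):
    """Build a full footprint matrix from the ->-pairs; mirrored cells get <-, the rest #."""
    return {(a, b): "->" if (a, b) in follows else "<-" if (b, a) in follows else "#"
            for a in alphabet for b in alphabet}

abc = ["a", "b", "c", "d"]
fp_log = expand({("a", "b"), ("a", "c"), ("c", "d")}, abc)
fp_model = expand({("a", "b"), ("a", "c"), ("c", "d"), ("a", "d")}, abc)

deviating = sum(1 for cell in fp_log if fp_log[cell] != fp_model[cell])
print(1 - deviating / len(fp_log))  # 0.875, matching the example above
```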
Token-based replay
Token-based replay is a technique that uses four counters (produced tokens, consumed tokens, missing tokens and remaining tokens) to compute the fitness of an observed trace with respect to a process model in Petri net notation.[7] These four counters record the status of tokens while a trace is replayed on the Petri net. When a transition produces a token, the produced-token counter is increased by 1. When a token is consumed to fire a transition, the consumed-token counter is increased by 1. When a token needed to fire a transition is missing, the missing-token counter is increased by 1. The remaining-token counter records the total number of tokens left on the net after the trace is complete. A trace conforms to the process model if and only if no tokens are missing during the replay and no tokens remain at the end.
The fitness between an event log and a process model is computed as follows:

$\text{fitness} = \frac{1}{2}\left(1 - \frac{m}{c}\right) + \frac{1}{2}\left(1 - \frac{r}{p}\right)$

where m is the number of missing tokens, c is the number of consumed tokens, r is the number of remaining tokens, and p is the number of produced tokens.
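A sketch of token-based replay on a small Petri net, assuming a simplified encoding in which each transition lists its input and output places; following the usual convention, the token in the initial marking counts as produced and the token in the final place is consumed at the end:

```python
# Token-based replay: count produced (p), consumed (c), missing (m) and
# remaining (r) tokens while replaying a trace on a Petri net.

def token_replay(trace, transitions, initial_marking, final_places):
    marking = dict(initial_marking)        # place -> number of tokens
    p, c, m = sum(initial_marking.values()), 0, 0
    for activity in trace:
        inputs, outputs = transitions[activity]
        for place in inputs:               # consume one token per input place
            if marking.get(place, 0) > 0:
                marking[place] -= 1
            else:
                m += 1                     # token is missing: created artificially
            c += 1
        for place in outputs:              # produce one token per output place
            marking[place] = marking.get(place, 0) + 1
            p += 1
    for place in final_places:             # consume the final marking
        if marking.get(place, 0) > 0:
            marking[place] -= 1
        else:
            m += 1
        c += 1
    r = sum(marking.values())              # leftover (remaining) tokens
    fitness = 0.5 * (1 - m / c) + 0.5 * (1 - r / p)
    return p, c, m, r, fitness

# Sequential net: start --[A]--> q --[B]--> end
net = {"A": (["start"], ["q"]), "B": (["q"], ["end"])}
print(token_replay(["A", "B"], net, {"start": 1}, ["end"]))  # (3, 3, 0, 0, 1.0)
```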
Alignments
Although token-based replay is efficient and easy to understand, it is designed for the Petri net notation and does not determine a suitable model path for unfitting cases. Alignments were introduced to address these limitations; they are considered a highly accurate conformance checking technique and can be applied to any process modeling notation.[8] The idea is that the algorithm performs an exhaustive search to find the optimal alignment between the observed trace and the process model. Hence, it is guaranteed to find the model run that is most similar to the trace.
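A naive sketch of this search: enumerate candidate model runs and keep the one at minimal edit distance from the observed trace. The explicit list of model runs is a hypothetical stand-in for the model's behavior; practical implementations search the state space with heuristics such as A* rather than enumerating runs:

```python
# Find the model run closest to an observed trace by exhaustive comparison.

def edit_distance(s, t):
    """Unit-cost edit distance with a rolling one-dimensional table."""
    d = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, b in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a != b))
    return d[-1]

def closest_model_run(trace, model_runs):
    return min(model_runs, key=lambda run: edit_distance(trace, run))

runs = [["A", "B", "C"], ["A", "C", "B"]]        # hypothetical runs of the model
print(closest_model_run(["A", "C", "D"], runs))  # ['A', 'C', 'B'] (distance 1)
```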
References
- ^ van der Aalst, Wil (2013). Process Mining: Discovery, Conformance and Enhancement of Business Processes. Springer.
- ^ Carmona, Josep; Dongen, Boudewijn van; Solti, Andreas; Weidlich, Matthias (2018-11-11). Conformance Checking: Relating Processes and Models. Springer. ISBN 978-3-319-99414-7.
- ^ Rozinat, Anne; van der Aalst, Wil (2008). "Conformance Checking of Processes Based on Monitoring Real Behavior". Information Systems. 33 (1): 64–95. doi:10.1016/j.is.2007.07.001.
- ^ Adriansyah, Arya (2014). Aligning Observed and Modeled Behavior (PhD thesis). Eindhoven University of Technology.
- ^ vanden Broucke, Seppe K. L. M.; De Weerdt, Jochen; Vanthienen, Jan; Baesens, Bart (2014). "Determining Process Model Precision and Generalization with Weighted Artificial Negative Events". IEEE Transactions on Knowledge and Data Engineering. 26 (8): 1877–1889. doi:10.1109/TKDE.2013.130. S2CID 14365893.
- ^ van der Aalst, Wil (2016), van der Aalst, Wil (ed.), "Data Science in Action", Process Mining: Data Science in Action, Berlin, Heidelberg: Springer, pp. 3–23, doi:10.1007/978-3-662-49851-4_1, ISBN 978-3-662-49851-4, retrieved 2021-08-12
- ^ Rozinat, A.; van der Aalst, W.M.P. (March 2008). "Conformance checking of processes based on monitoring real behavior". Information Systems. 33 (1): 64–95. doi:10.1016/j.is.2007.07.001. ISSN 0306-4379.
- ^ van der Aalst, Wil; Adriansyah, Arya; van Dongen, Boudewijn (2012-01-30). "Replaying history on process models for conformance checking and performance analysis". WIREs Data Mining and Knowledge Discovery. 2 (2): 182–192. doi:10.1002/widm.1045. ISSN 1942-4787. S2CID 11294562.