
Fundamenta Informaticae XX (2015) 1–21, DOI 10.3233/FI-2012-0000

IOS Press

Constructing Minimal Coverability Sets

Artturi Piipponen, Antti Valmari

Department of Mathematics, Tampere University of Technology, P.O. Box 553, FI–33101 Tampere, FINLAND

artturi.piipponen@tut.fi, antti.valmari@tut.fi

Abstract. This publication addresses two bottlenecks in the construction of minimal coverability sets of Petri nets: the detection of situations where the marking of a place can be converted to ω, and the manipulation of the set A of maximal ω-markings that have been found so far. For the former, a technique is presented that consumes very little time in addition to what maintaining A consumes. It is based on Tarjan’s algorithm for detecting maximal strongly connected components of a directed graph. For the latter, a data structure is introduced that resembles BDDs and Covering Sharing Trees, but has additional heuristics designed for the present use. Results from a few experiments are shown. They demonstrate significant savings in running time and varying savings in memory consumption compared to an earlier state-of-the-art technique.

Keywords: coverability set, Tarjan’s algorithm, antichain data structure

1. Introduction

A place/transition net is a Petri net whose places may contain any finite number of tokens, and whose tokens are indistinguishable from each other. When people talk about Petri nets without specifying the precise net class, they often mean place/transition nets. The “state” or “configuration” of a place/transition net is called a marking. It tells the number of tokens in each place. Markings are changed by firing transitions. A marking M is reachable if and only if some finite sequence of firings of transitions transforms the initial marking of the net to M. All these notions will be made precise in Section 2.

A place/transition net is finite if and only if its sets of places and transitions are finite. The set of reachable markings of a finite place/transition net is not necessarily finite. This is because with some nets, the number of tokens in one or more places grows without limit. However, Karp and Miller proved that if this growth is approximated from above using a certain kind of extended markings, then a finite set may always be obtained [8]. This set is called a coverability set. The extended markings may use the symbol ω instead of a natural number to denote the number of tokens in a place. They do not have an established name, but it is natural to call them ω-markings. Intuitively, ω denotes “unbounded”. An ω-marking M′ covers an ω-marking M if and only if, for every place p, M′ has at least the same number of tokens in p as M has, where ω is considered bigger than any natural number. For every reachable marking M, a coverability set contains an ω-marking that covers M. Every ω-marking in the set is either a reachable marking or a limit of a growing infinite sequence of reachable markings.

The coverability set, as defined by Karp and Miller, is not necessarily unique. Finkel proved that there always is a unique minimal coverability set [6]. It is always finite and has no other coverability set as a subset. It is obtained from a coverability set by removing each ω-marking that is not maximal. An ω-marking is maximal if and only if it is not strictly covered by another ω-marking in the set.
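As a minimal sketch (not from the paper) of this maximality filtering, the following Python code keeps only the maximal elements of a finite set of ω-markings; math.inf stands in for the ω symbol, and markings are tuples indexed by place:

```python
import math

OMEGA = math.inf  # omega is bigger than any natural number

def covers(m1, m2):
    """m1 covers m2: m1(p) >= m2(p) for every place p."""
    return all(a >= b for a, b in zip(m1, m2))

def maximal(markings):
    """Keep the markings not strictly covered by another marking in the set."""
    return [m for m in markings
            if not any(covers(other, m) and other != m for other in markings)]

# (2, 0) is strictly covered by (OMEGA, 0); (0, 1) is incomparable to both.
print(maximal([(2, 0), (OMEGA, 0), (0, 1)]))  # [(inf, 0), (0, 1)]
```

The quadratic scan above is only for illustration; Section 6 of the paper introduces a dedicated data structure for maintaining such an antichain efficiently.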

A basic step in the construction of coverability sets consists of firing a transition at an ω-marking and then changing the markings of zero or more places from a finite value to ω. The details of adding the ω-symbols vary between different coverability set construction algorithms. The construction starts at the initial ω-marking and proceeds by constructing new ω-markings with the basic step. New ω-markings that are covered by already found ω-markings are rejected sooner or later, depending on the algorithm. Eventually new non-covered ω-markings are no longer found, and the construction terminates.

The construction of coverability sets is computationally challenging. First, the result may be huge. This problem may be alleviated by constructing the minimal coverability set, but not eliminated altogether. Also the minimal coverability set may be huge. Even when it is not, an algorithm that constructs it may construct a huge number of ω-markings that are later removed because they become strictly covered by more recently constructed ω-markings.

Second, all known minimal coverability set algorithms maintain in one form or another a set A of maximal ω-markings found so far. Here “A” stands for “antichain”. Nobody knows how to implement this set so that all its operations are always fast.

Third, all known coverability set algorithms check a so-called pumping condition every now and then. The finding predecessor of an ω-marking M is the ω-marking from which M was constructed for the first time. The initial ω-marking does not have a finding predecessor. The finding history of M consists of its finding predecessor, the finding predecessor of the finding predecessor, and so on up to the initial ω-marking. The full history is defined otherwise similarly, but every instance of constructing the ω-marking, not just the first, is taken into account. The basic form of the pumping condition consists of checking whether the most recently found ω-marking M covers any ω-marking in the finding history of M. Alternatively, the full history can be used. Intuitively, with the full history, valid pumping conditions would be found more often and thus termination would be reached with fewer firings of transitions. Checking whether M covers another ω-marking is easy, but the finding history and especially the full history may be big.

Somewhat complicated ideas for speeding up the construction of minimal coverability sets have been suggested [6, 7, 10]. They mostly aim at reducing the number of ω-markings that are constructed but rejected later. They address the organization of the work at a higher level than the implementation of the pumping condition and the set A. A main idea in them is known as “pruning”.

The publications [12, 13] gave theoretical, heuristic, and experimental evidence that at the higher level, a straightforward approach is very competitive, if ω-markings are constructed in depth-first or so-called most tokens first order. They demonstrate that the pruning sometimes works against its purpose, weakening the performance. They prove two theorems saying that if the full histories are used for detecting pumping conditions, then the benefit that the pruning aims at comes automatically, without explicitly implementing the pruning and thus without suffering from the above-mentioned weakness.

The theorems in [12, 13] do not prove that the straightforward approach with full histories is always at least as good as pruning. This is because it was observed in [12, 13] that the running time of their algorithm sometimes depends dramatically on the order in which the transitions are listed in the input. The same seems likely also for other algorithms. That is, a detail that has nothing to do with the algorithms themselves may dominate performance measurements. Sorting the transitions according to a natural heuristic did not solve the problem, because the sorted order was good with some Petri nets and bad with others. So in this field, one must not put too much trust in measurements. This makes it difficult to compare the relative merits of the algorithms. Even so, the available evidence suggests strongly that the algorithm in [12, 13] significantly outperforms the earlier algorithms.

At a low level, the most time-consuming operations in the algorithm in [12, 13] are the two operations mentioned above, that is, the manipulation of A and the detection of pumping conditions. This publication presents a significant improvement to both. Again, the obtained reduction in running time is so big that, despite the problem with measurements mentioned above, it seems safe to conclude that the algorithm in this publication tends to be significantly faster than all earlier algorithms. Memory-wise the new algorithm is roughly as good as the one in [12, 13].

In a sense, this publication reverses the order in which the pumping condition is checked. In most if not all earlier publications, the finding history, the full history, or some other subset of the ω-markings M′ that have a path to the current ω-marking M are found, and it is tested whether M covers any of them. In this publication, the ω-markings M′ in A that are strictly covered by M are found, and it is tested whether any of them has a path to M. In both approaches, to maintain A, it seems necessary to find the strictly covered ω-markings M′. The new algorithm exploits this work that has to be done anyway. It implements the pumping condition with only a constant amount of extra work per each strictly covered M′. This should be contrasted to the earlier approaches, where the finding of the relevant ω-markings is much more additional work on top of the updating of A.

The new algorithm detects the pumping condition for the intersection of the full history with A. Fortunately, Theorem 5.3 in this publication tells that this yields precisely the same ω-markings as the full history would. Therefore, the two theorems in [12, 13] stay valid.

Section 2 defines the necessary concepts. The overall structure of our new algorithm is presented in Section 3. In Section 4 a sufficient condition is discussed that guarantees that the algorithm has already covered (instead of just will cover) the “future” of an ω-marking. Section 5 describes how Tarjan’s algorithm for detecting maximal strongly connected components of a directed graph [11, 1, 5] can be harnessed to check in constant time whether there is a path from M′ to M. A data structure that improves the efficiency of maintaining A is introduced in Section 6. It uses ideas from BDDs [3] and covering sharing trees [4], and has heuristics designed for minimal coverability sets. An additional optimisation is discussed in Section 7. Section 8 presents a few performance measurements without and with using the new ideas.

This publication is an extended version of [9]. The main differences are that more measurements are reported and significant effort has been put into clarifying the proofs of Corollary 4.6 and Theorem 5.3.

A difficulty with both intuitive understanding and formal proofs of coverability set algorithms is that, unlike with Petri nets, if M2 is obtained from M1 by firing transition t, similarly with M′2 and M′1, and M′1 > M1, then we cannot conclude that M′2 > M2. Much worse, we cannot even conclude that M′2 ≥ M2. This is because the construction of M2 may involve addition of ω-symbols that are not added when constructing M′2 (and that were not in M1 to start with). Corollary 4.6 presents a sufficient condition that guarantees M′2 ≥ M2. Originally, Corollary 4.6 was used as a lemma in the proof of Theorem 5.3. The new proof of the theorem does not need it. However, because Corollary 4.6 presents an intuitively interesting fact that may be useful when developing new methods for answering verification questions during the construction of the set, we chose to keep it in the present publication. An example of its use is given in Section 4.

2. Concepts and Notation

The set of natural numbers is N = {0, 1, 2, . . .}. If X is a set, then the set of finite sequences of elements of X is denoted with X∗. The empty sequence is denoted with ε.

A finite place/transition net is a tuple (P, T, W, M̂), where P and T are finite sets, P ∩ T = ∅, W is a function from (P × T) ∪ (T × P) to N, and M̂ is a function from P to N. The elements of P, T, and W are called places, transitions, and weights, respectively. A marking M is a function from P to N. It can also be thought of as a vector of |P| natural numbers. Thus M̂ is a marking. It is called the initial marking.

A transition t is enabled at M, denoted with M[t⟩, if and only if M(p) ≥ W(p, t) for every p ∈ P. Then t may occur, yielding the marking M′ such that M′(p) = M(p) − W(p, t) + W(t, p) for every p ∈ P. This is denoted with M[t⟩M′. It is also said that t is fired at M yielding M′. The notation is extended to finite sequences of transitions in the natural way: M[t1t2· · ·tn⟩M′ if and only if there are M0, M1, . . . , Mn such that M = M0, Mi−1[ti⟩Mi for 1 ≤ i ≤ n, and Mn = M′.

The notation ∆t1···tn(p) is defined by ∆t1···tn(p) = Σi=1..n (W(ti, p) − W(p, ti)). Clearly ∆t1···tn(p) is finite. The equation M′ = M + ∆t1···tn denotes the claim that for every p ∈ P we have M′(p) = M(p) + ∆t1···tn(p). If M[t1t2· · ·tn⟩M′, then M′ = M + ∆t1···tn holds.
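As an illustrative sketch, the displacement ∆ can be computed directly from the weight function; the dictionary-based representation of W below is an assumption of this illustration, not of the paper:

```python
# Displacement Delta_{t1...tn}(p): the net token change that firing the
# sequence causes in place p, independent of the order of the transitions.
def delta(W, seq, p):
    """Sum of W(t, p) - W(p, t) over the transitions t in seq."""
    return sum(W[(t, p)] - W[(p, t)] for t in seq)

# Hypothetical net: transition 't' puts 2 tokens into 'p' and takes 1 out.
W = {('t', 'p'): 2, ('p', 't'): 1}
print(delta(W, ['t', 't', 't'], 'p'))  # 3
```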

A marking M′ is reachable from M if and only if there is σ ∈ T∗ such that M[σ⟩M′. The set of reachable markings is {M | ∃σ ∈ T∗ : M̂[σ⟩M}, that is, the set of markings that are reachable from the initial marking.

An ω-marking is a vector of |P| elements of the set N ∪ {ω}. The enabledness and occurrence rules of transitions are extended to ω-markings with the conventions that for every i ∈ N, ω > i and ω + i = ω − i = ω. An ω-marking is a marking if and only if it contains no ω-symbols. From now on, M, M′, M1, and so on are ω-markings, unless otherwise stated. If M[t1t2· · ·tn⟩M′, then M′(p) = M(p) + ∆t1···tn(p) holds even if M(p) = ω, and then M′(p) = ω.
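Since math.inf satisfies exactly the stated conventions for finite i (inf > i, and inf + i = inf − i = inf), the extended enabledness and occurrence rules can be sketched as follows; the net and all names are hypothetical:

```python
import math

OMEGA = math.inf  # plays the role of the omega symbol

def enabled(M, t, places, W):
    """M[t> : M(p) >= W(p, t) for every place p (OMEGA exceeds any weight)."""
    return all(M[p] >= W[(p, t)] for p in places)

def fire(M, t, places, W):
    """Occurrence rule M'(p) = M(p) - W(p, t) + W(t, p); OMEGA is absorbing."""
    assert enabled(M, t, places, W)
    return {p: M[p] - W[(p, t)] + W[(t, p)] for p in places}

# Hypothetical net: t moves one token from p1 to p2.
places = ['p1', 'p2']
W = {('p1', 't'): 1, ('t', 'p1'): 0, ('p2', 't'): 0, ('t', 'p2'): 1}
print(fire({'p1': 2, 'p2': 0}, 't', places, W))      # {'p1': 1, 'p2': 1}
print(fire({'p1': OMEGA, 'p2': 0}, 't', places, W))  # {'p1': inf, 'p2': 1}
```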

We say that M′ covers M and write M ≤ M′ if and only if M(p) ≤ M′(p) for every p ∈ P. We say that M′ strictly covers M and write M < M′ if and only if M ≤ M′ and M ≠ M′. We say that M is a limit of a set ℳ of ω-markings if and only if for every i ∈ N, ℳ contains an Mi such that M0 ≤ M1 ≤ . . . and for every p ∈ P, either M(p) = ω and Mi(p) grows without limit as i grows, or there is i such that M(p) = Mi(p) = Mi+1(p) = . . .. This definition may be a bit unexpected, because it allows the use of the same ω-marking many times in M0 ≤ M1 ≤ . . .. (So our limits are not precisely the same thing as “limit points” in topology.) As a consequence, every element of ℳ is a limit of ℳ, because obviously M ≤ M ≤ . . .. On the other hand, a set may also have limits that are not in the set.

A coverability set is any set ℳ that satisfies the following conditions:

1. Every reachable marking M is covered by some M′ ∈ ℳ.

2. Every M′ ∈ ℳ is a limit of reachable markings.


A coverability set is not necessarily finite. Indeed, the set of reachable markings is always a coverability set, and it may be infinite. Fortunately, there is a unique minimal coverability set that is always finite [6]. It has no other coverability set as a subset, and no ω-marking in it is covered by another ω-marking in it. It consists of the maximal elements of the set of the limits of the set of reachable markings. Proofs of these facts are not trivial. Among others, [12, 13] contain an attempt to make the proofs accessible.

The construction of coverability sets resembles the construction of the set of reachable markings but has additional features. The basic step of the construction starts with an already found ω-marking M. A transition t is fired at M, yielding M′. A central idea is that if there are M0 and σ such that M0 < M′ and M0[σ⟩M, then the sequence σt can occur repeatedly without limit, making the markings of those p that have M′(p) > M0(p) grow without limit, while the remaining p have M′(p) = M0(p). The limit of the resulting markings is M′′, where M′′(p) = ω if M′(p) > M0(p) and M′′(p) = M0(p) otherwise.

Roughly speaking, instead of storing M′ and remembering that M[t⟩M′, most if not all algorithms store M′′ and remember that M −t→ω M′′, where M −t→ω M′′ denotes that M′′ was obtained by firing t at M and then possibly adding ω-symbols to the result as was discussed above. However, although this rough description of the basic step of the algorithms serves well as a starting point, it is too imprecise for deeper discussion. There are four issues.

First, the algorithms need not remember that M −t→ω M′′. It suffices to remember that there is t such that M −t→ω M′′.

Second, instead of M0[σ⟩M in the above, the algorithms use M0 −σ→ω M, because they only have access to the latter. Here −σ→ω is the natural extension of −t→ω : M −t1t2· · ·tn→ω M′ if and only if there are M0, M1, . . . , Mn such that M = M0, Mi−1 −ti→ω Mi for 1 ≤ i ≤ n, and Mn = M′.

Third, after firing t at M, an algorithm may use more than one M0 and σ that have M0 −σ→ω M to add ω-symbols. For instance, if (2,0) [t1⟩ (0,1) [t⟩ (1,1), then (0,1) and ε justify converting (1,1) to (ω,1), after which (2,0) and t1 justify further conversion to (ω, ω). To discuss this, let us first introduce some terminology.

Assume that M′ is obtained by firing t at M and then performing zero or more “pumping operations” that are defined soon. We say that the pumping condition semi-holds with M0 if and only if M0 < M′ and there is σ ∈ T∗ such that M0 −σ→ω M. The pumping condition holds if and only if it semi-holds and there is p ∈ P such that M0(p) < M′(p) < ω. The pumping operation consists of assigning ω to those M′(p) that have M0(p) < M′(p) < ω.
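Leaving aside the path condition M0 −σ→ω M (checking it efficiently is the subject of Sections 5 and 6), the pumping condition and the pumping operation on a single pair of ω-markings can be sketched like this, with math.inf standing in for ω:

```python
import math

OMEGA = math.inf  # stands in for the omega symbol

def pumping_condition_holds(M0, M):
    """M0 < M and some place p has M0(p) < M(p) < OMEGA.
    (The path condition M0 -sigma->omega ... is omitted from this sketch.)"""
    strictly_covered = all(a <= b for a, b in zip(M0, M)) and M0 != M
    return strictly_covered and any(a < b < OMEGA for a, b in zip(M0, M))

def pump(M0, M):
    """The pumping operation: assign OMEGA to those M(p) with M0(p) < M(p) < OMEGA."""
    return tuple(OMEGA if a < b < OMEGA else b for a, b in zip(M0, M))

# The example from the text: (0,1) justifies converting (1,1) to (omega,1).
print(pump((0, 1), (1, 1)))  # (inf, 1)
```

After the operation the condition semi-holds but no longer holds, matching the text: pumping_condition_holds((0, 1), (OMEGA, 1)) is False.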

The algorithms perform the pumping operation only when the pumping condition holds. After it, the pumping condition semi-holds but does not hold. Some algorithms in [12, 13] and all new algorithms in this publication never fail to perform the pumping operation when the pumping condition holds, but this is not necessarily true of all algorithms.

As a consequence of the pumping operation, the pumping condition may start to hold with another M0 with which it did not originally hold. This may trigger a new pumping operation.

Fourth, after firing M[t⟩M′ and doing zero or more pumping operations, an algorithm may reject the resulting M′, if it is covered by some already stored ω-marking M′′. The intuition is that whatever M′ could contribute to the minimal coverability set is also contributed by M′′. So M′ need not be investigated.

Whether M −t→ω M′ holds depends on not just M, t, and M′, but also on what the algorithm has done before trying t at M. So the precise meaning of the notation M −t→ω M′ depends on the particular algorithm and may even depend on the order in which the places and transitions are listed in the input to the algorithm. We emphasize that this phenomenon is not specific to our algorithm but affects coverability set construction algorithms in general. Most publications leave this issue implicit, trusting that the readers get sufficient information from the description of the algorithm. The precise meaning of M −t→ω M′ used in this publication will be made explicit in Definition 3.1. The definition is technical, but intuitively it is the same concept as in essentially all publications on the construction of coverability sets: M′ is what the algorithm produces by first firing t at M and then possibly adding ω-symbols.

3. Overall Algorithm

Figure 1 shows the new minimal coverability set construction algorithm of this publication in its basic form. Variants of it will be discussed in Sections 7 and 8.

Lines 1, 3–6, 13–16, 18, 20–24, and 27 implement most of the minimal coverability set construction algorithm of [12, 13], specialized to depth-first search. Let us discuss them in this section. The remaining lines are grey to indicate that they may be ignored until they are discussed in later sections.

(The pseudocode in [12, 13] used a generic search discipline. Breadth-first search, depth-first search, and so-called most tokens first search were studied in detail. Some details related to the implementation of the pumping operation have been replaced in this publication by another mechanism.)

The set A contains the maximal ω-markings that have been found so far. Upon termination it contains the result of the algorithm. Its implementation will be discussed in Section 6. In addition to what is explicit in Figure 1, the call on line 22 may remove elements from A in favour of a new element M′ that strictly covers them. We will discuss this in detail later.

The set F is a hash table. ω-markings are added to it at the same time as to A, but they are never removed from it. So always A ⊆ F. The attributes of an ω-marking such as M.tr (discussed soon) are stored in F and not in A. That is, F contains records, each of which contains an ω-marking and some additional information. One reason is that, as we will see later, some information on an ω-marking remains necessary even after the ω-marking has been removed from A. Another reason is that, as will become obvious in Section 6, the data structure that is discussed there cannot associate attributes to the ω-markings that it stores. Like in [12, 13], F is also used to implement an optimisation that will be discussed together with lines 18 and 20. Hash tables are very efficient, so F does not cause significant extra cost.

For efficiency, instead of the common recursive implementation, depth-first search is implemented with the aid of a stack which is called W (for work-set). (So the symbol W is used both for the weight function of Petri nets and for the work-set of the algorithm, but these uses are so remote from each other that confusion seems unlikely.) The elements of W are pointers to records in F. Each ω-marking M has an attribute tr that points to the next transition that should be tried at M.
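The record stored in F can be sketched, for instance, as follows; the field names follow the attributes mentioned in the text and in Figure 1 (tr, ready, index, lowlink), while the representation details are assumptions of this sketch:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Record:
    marking: Tuple[float, ...]   # the omega-marking itself (math.inf for omega)
    tr: Optional[int] = 0        # next transition to try; None plays the role of nil
    ready: bool = False          # set when Tarjan's algorithm closes the component
    index: int = 0               # depth-first number (the value of nf at discovery)
    lowlink: int = 0             # Tarjan's lowlink value

F = {}                           # hash table from omega-marking to its record
r = Record(marking=(1, 0), index=1, lowlink=1)
F[r.marking] = r
print(F[(1, 0)].index)  # 1
```

Storing the attributes in the record rather than in A matches the text: the record survives even after its ω-marking is removed from A.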

The algorithm starts on lines 1 and 2 by putting the initial marking into all data structures. Roughly speaking, lines 3 to 27 try each transition t at each encountered ω-marking M in depth-first order. (This is not strictly true, because heuristics that are discussed later may prematurely terminate the processing of M and may cause the skipping of some transitions at M.) If M has untried transitions, line 4 picks the next one, otherwise lines 6–13, which implement backtracking, are executed. Lines 7–12 will be discussed later.

If the picked transition t is disabled at the current ω-marking M, then it is rejected on line 15. Otherwise t is fired at M on line 16. Lines 17 and 19 will be discussed later. If M′ has already been encountered, it is rejected on lines 18 and 20. This quick rejection of M′ is useful, because reaching the


 1  F := {M̂}; A := {M̂}; W.push(M̂); M̂.tr := first transition
 2  S.push(M̂); M̂.ready := false; nf := 1; M̂.index := 1; M̂.lowlink := 1
 3  while W ≠ ∅ do
 4    M := W.top; t := M.tr; if t ≠ nil then M.tr := next transition endif
 5    if t = nil then
 6      W.pop
 7      activate transitions as discussed in Section 7
 8      if M.lowlink = M.index then
 9        while S.top ≠ W.top do S.top.ready := true; S.pop endwhile
10      else
11        W.top.lowlink := min{W.top.lowlink, M.lowlink}
12      endif
13      go to line 3
14    endif
15    if ¬M[t⟩ then go to line 3 endif
16    M′ := the ω-marking such that M[t⟩M′
17    if M′ ≤ M then passivate t; go to line 3 endif
18    if M′ ∈ F then
19      if ¬M′.ready then M.lowlink := min{M.lowlink, M′.lowlink} endif
20      go to line 3
21    endif
22    Cover-check(M′, A)  // only keep maximal; may update A and M′
23    if M′ is covered by an element of A then go to line 3 endif
24    F := F ∪ {M′}; A := A ∪ {M′}; W.push(M′); M′.tr := first transition
25    S.push(M′); M′.ready := false
26    nf := nf + 1; M′.index := nf; M′.lowlink := nf
27  endwhile

Figure 1. A minimal coverability set algorithm that uses Tarjan’s algorithm and some heuristics

same ω-marking again is expected to be very common, because M[t1t2⟩M12 and M[t2t1⟩M21 imply that M12 = M21. Without lines 18 and 20, M′ would be rejected on line 23, but after consuming more time. Line 18 is also needed because of line 19.
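This commutation is easy to check on a toy example; the net below is hypothetical, and enabledness of both firing orders is assumed:

```python
# A self-contained sketch: when both orders are enabled, firing t1 then t2
# yields the same marking as firing t2 then t1, because each transition
# changes each place by a fixed amount.
W = {('p', 't1'): 1, ('t1', 'p'): 0,   # t1 removes one token from p
     ('p', 't2'): 0, ('t2', 'p'): 2}   # t2 adds two tokens to p

def fire(M, t):
    """Occurrence rule restricted to this small net."""
    return {p: M[p] - W[(p, t)] + W[(t, p)] for p in M}

M = {'p': 3}
print(fire(fire(M, 't1'), 't2'), fire(fire(M, 't2'), 't1'))  # {'p': 4} {'p': 4}
```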

The call Cover-check(M′, A) first checks whether M′ is covered by some ω-marking in A. The details of this are discussed in Section 6. If M′ is covered, then M′ is rejected on line 23.

In the opposite case, Cover-check checks whether the pumping condition holds with any M0 ∈ A. (In [12, 13], the pumping condition was detected for M0 ∈ F. Theorem 5.3 will tell why it suffices to use A instead.) If yes, it changes M′(p) to ω for the appropriate places p. Cover-check also removes from A those ω-markings that the updated M′ covers strictly. When an M is removed, M.tr is set to nil, so that even if the algorithm backtracks to M in the future, no more transitions will be fired at it. The addition of ω-symbols makes M′ grow in the “≤” ordering and may thus make the pumping condition hold with some other M0. Cover-check continues until there is no M0 ∈ A with which the pumping condition holds. How Cover-check finds those M0 ∈ A that have M0 < M′ is discussed in Section 6.


The checking of the existence of a path from M0 to M is explained in Section 5.

If M′ was not covered, its updated version is added to the data structures on lines 24–26. This implements entering M′ in the depth-first search.

It is time to make the notation M −t→ω M′ precise.

Definition 3.1. M −t→ω M′ denotes that t was fired at M on line 16 resulting in some M′′ such that M′′ ≰ M, and either M′′ ∈ F held on line 18 (in which case M′ = M′′), or M′′ was transformed to M′ on line 22 and then added to F on line 24.

Thus M −t→ω M′ is either always false, or becomes true during the execution of the algorithm and never becomes false again. When it holds, then M ∈ F and M′ ∈ F. The algorithm will consider the edge M −t→ω M′ when detecting pumping conditions, as will be discussed in Section 5. If M′ is rejected on line 17 or 23, then M −t→ω M′ does not become true, M′ ∈ F may fail to hold, and the algorithm forgets the result of firing t at M. The correctness of this forgetting on line 17 will be discussed in Section 7 (and is intuitively obvious). The case of line 23 is much trickier. It follows from the correctness of the algorithm as a whole, which was proven in [12, 13].

Even if M ∈ F and M[t⟩, it may be that there never is any M′ such that M −t→ω M′. This is the case if M is removed from A and M.tr is set to nil before t is tried at M, or if the result of trying t at M is rejected on line 17 or 23. With the following Petri net, the latter happens although M ∈ A when the algorithm has terminated: [figure: a small net with a transition t and an arc of weight 2].

Ignoring the lines added in this publication (and without restricting to depth-first search discipline), the correctness of the algorithm has been proven in detail in [12, 13]. One detail of the proof will be used in later sections of the present publication, so we present it as a lemma. It holds always during and after the execution of the algorithm, except in the middle of the updating operations mentioned in its proof.

Lemma 3.2. For every M0 ∈ F, there is M′0 ∈ A such that M0 ≤ M′0.

Proof:
Each time when an ω-marking M0 is inserted into F, it is also inserted into A. Each time when an M0 is removed from A, an M′0 such that M0 < M′0 is inserted into A. Therefore, the algorithm maintains the claimed property as an invariant. ⊓⊔

4. Ripe ω-markings

Lemma 3.2 guarantees that all ω-markings that can be reached by firing transition sequences starting at a found ω-marking M will eventually be covered by the ω-markings found by the algorithm. If M ∉ A and the firing of some transitions has not yet been tried at M, the lemma justifies leaving them untried forever. The goal of this section is to present and prove correct a condition that guarantees that all ω-markings that can be reached by firing transition sequences starting at M have already now been covered by the ω-markings found by the algorithm.

To liberate the symbol M for other uses, from now on we denote the M of the algorithm with Mc and call it the current ω-marking. The same ω-marking may be current several times, because the search may continue to other ω-markings and later backtrack to it.

The following lemma follows trivially from the transition firing rule and the fact that after firing a transition, the marking of a place may be converted from a finite value to ω but not vice versa.

Lemma 4.1. If M −σ→ω M′ and M(p) = ω, then M′(p) = ω.

In depth-first search, an ω-marking M is white if it has not been found (that is, M ∉ F); grey if it has been found but not backtracked from (that is, M ∈ W); and black if it has been backtracked from. The grey ω-markings Mᵍᵢ and the edges Mᵍᵢ₋₁ −ti→ω Mᵍᵢ via which they were first found constitute a path from the initial ω-marking Mᵍ₀ = M̂ to Mc.

Definition 4.2. An ω-marking M is fresh-ripe if and only if M ∈ A, and either the algorithm has terminated, or currently for some place pr we have Mc(pr) < ω = M(pr). It is ripe if and only if it is or has been fresh-ripe.

A fresh-ripe ω-marking may cease to be fresh-ripe by being removed from A or by the current ω-marking changing such that Mc(pr) < ω no longer holds. A ripe ω-marking obviously remains ripe until the termination of the algorithm. The following lemmas tell more properties of (fresh-)ripe ω-markings.

Lemma 4.3. If M is ripe, then the algorithm has tried every transition at M.

Proof:
It suffices to prove the claim for an arbitrary fresh-ripe M, because the promised property obviously remains valid afterwards. It is obvious from Figure 1 that after an ω-marking has been removed from A, it is never again put into A. Because M ∈ A, the investigation of transitions at M has not been prematurely terminated by setting M.tr to nil. Therefore, if the algorithm has not tried every transition at M, then M is grey. So the algorithm has not terminated and there is a path from M to Mc. Lemma 4.1 yields a contradiction with Mc(pr) < ω = M(pr). ⊓⊔

Lemma 4.4. If M′0 is fresh-ripe, M′0 ≥ M0, and M0[t1⟩M1[t2⟩ · · · [tn⟩Mn, then for 1 ≤ i ≤ n there are fresh-ripe M′i such that Mi ≤ M′i and M′i ≥ M′i−1 + ∆ti ≥ M′0 + ∆t1···ti.

Proof:
We use induction on 0 ≤ i ≤ n. The base case i = 0 only says that M′0 ≥ M0. It obviously holds.

Assume that the claim holds with i − 1. By Lemma 4.3, the algorithm has tried ti at M′i−1. That yielded a result M′′′i such that M′′′i ≥ Mi, because M′i−1 ≥ Mi−1 and Mi−1[ti⟩Mi. For any p ∈ P, if M′′′i(p) was converted to ω, then M′′′i(p) = ω ≥ M′i−1(p) + ∆ti(p), otherwise M′′′i(p) = M′i−1(p) + ∆ti(p). If M′′′i was added to A, we choose M′′i = M′′′i, otherwise M′′′i was rejected because it was covered by an ω-marking in A which we call M′′i. In both cases M′′′i ≤ M′′i. Since then, M′′i may have been removed from A, but by Lemma 3.2, there is M′i ∈ A such that Mi ≤ M′′′i ≤ M′′i ≤ M′i.

We have M′i ≥ M′′′i ≥ M′i−1 + ∆ti ≥ M′0 + ∆t1···ti−1 + ∆ti = M′0 + ∆t1···ti. If the algorithm has terminated, then M′i is fresh-ripe. Otherwise there is pr ∈ P such that Mc(pr) < ω = M′0(pr). By the above inequality M′i(pr) ≥ ω + ∆t1···ti(pr), implying M′i(pr) = ω. So M′i is fresh-ripe. In conclusion, M′i has the required properties. ⊓⊔

Lemma 4.5. If M′0 is fresh-ripe, M′0 ≥ M0, and M0 −t1→ω M1 −t2→ω · · · −tn→ω Mn, then for 1 ≤ i ≤ n there are fresh-ripe M′i such that Mi ≤ M′i and M′i ≥ M′i−1 + ∆ti ≥ M′0 + ∆t1···ti.


Proof:

Like in the proof of Lemma 4.4, we use induction on 0 ≤ i ≤ n. The base case i = 0 obviously holds.

Assume that M′i−1 is fresh-ripe and Mi−1 ≤ M′i−1. We have to prove that there is a fresh-ripe M′i such that Mi ≤ M′i and M′i ≥ M′i−1 + ∆ti. The situation is different from Lemma 4.4, because with Mi−1 −ti→ω Mi, there may be p1, . . . , pk such that Mi−1(pj) < ω = Mi(pj). Therefore, we apply induction on 0 ≤ j ≤ k. We may assume that p1, . . . , pk are listed in the order in which the algorithm made Mi(pj) = ω hold. Let Mi,j be the ω-marking just after the algorithm made Mi(pj) = ω. So Mi−1[ti⟩Mi,0 and Mi,k = Mi. For 0 ≤ j ≤ k, we will show the existence of a fresh-ripe M′i,j such that Mi,j ≤ M′i,j. Actually, these M′i,j will prove equal, but the subscript j helps to follow the proof.

Because M′i−1 is fresh-ripe, Mi−1 ≤ M′i−1, and Mi−1[ti⟩Mi,0, Lemma 4.4 yields a fresh-ripe M′i,0 such that Mi,0 ≤ M′i,0 and M′i,0 ≥ M′i−1 + ∆ti. This is the base case j = 0.

For the induction step on j, assume that Mi,j−1 ≤ M′i,j−1 and M′i,j−1 is fresh-ripe. Let σj be the pumping sequence that justified converting Mi,j−1 to Mi,j. For any p ∈ P, if Mi,j−1(p) < ω, then ∆σj(p) ≥ 0. Let σj^h be σj repeated h times, where h ∈ N. There is Mi,j,h such that Mi,j−1[σj^h⟩Mi,j,h ≥ Mi,j−1. Lemma 4.4 yields a fresh-ripe M′i,j,h such that Mi,j,h ≤ M′i,j,h and M′i,j,h ≥ M′i,j−1 + ∆σj^h. For any p ∈ P, if Mi,j−1(p) < ω, then M′i,j,h(p) ≥ M′i,j−1(p), and otherwise M′i,j−1(p) = ω = M′i,j,h(p). So M′i,j,h ≥ M′i,j−1.

Because M′i,j−1 and M′i,j,h are fresh-ripe, they all belong to A. Because A is the set of maximal ω-markings found so far, we have M′i,j−1 ≮ M′i,j,h. So M′i,j−1 = M′i,j,h. This holds for every h ∈ N. If Mi,j(p) = ω > Mi,j−1(p), then Mi,j,h(p) grows without limit as h grows, implying M′i,j,h(p) = ω. With the remaining p, M′i,j,h(p) = M′i,j−1(p) ≥ Mi,j−1(p) = Mi,j(p). These yield Mi,j ≤ M′i,j,h. So M′i,j,h = M′i,j−1 qualifies as the M′i,j. The induction proof on j is ready.

Choosing M′i = M′i,k completes the proof of induction step i. Because M′i,k = M′i,k−1 = . . . = M′i,0, we have M′i ≥ M′i−1 + ∆ti. ⊓⊔

A fresh-ripe ω-marking may cease from being fresh-ripe. This restricts the applicability of Lemmas 4.4 and 4.5. To alleviate this problem, the main message of the lemmas is formulated below anew in such a form that after the ω-marking has become ripe, the claim applies continuously up to the termination of the algorithm. The [ti⟩-claim tells that whatever the net could do from M0 has been covered by the algorithm, and the −ti→ω claim tells that whatever the algorithm has done from M0 has been covered by the algorithm. The latter is not necessarily a subset of the former, because the net cannot, but the algorithm can, change the marking of a place from a finite value to ω. In both cases, the algorithm has completed the investigation of the part of the coverability set that is reachable from M0 or M′0.

Corollary 4.6. If M′0 is ripe, M′0 ≥ M0, and M0 [t1⟩ M1 [t2⟩ · · · [tn⟩ Mn or M0 −t1→ω · · · −tn→ω Mn, then for 1 ≤ i ≤ n there are ripe M′i ∈ F such that Mi ≤ M′i and M′i ≥ M′i−1 + ∆ti ≥ M′0 + ∆t1···ti. The algorithm will never again try transitions at any M′′i that has M′′i ≤ M′i.

Proof:

Lemmas 4.4 and 4.5 promise the existence of M′i at the time when M′0 becomes fresh-ripe, and the promised properties of M′i remain valid from then on. It is an elementary property of the algorithm that it does not try any transitions at ω-markings that are not in A, and no M′′i such that M′′i < M′i can enter A. By Lemma 4.3, the algorithm will not try any transitions at M′i while it stays in A. ⊓⊔


As an example of an application of the corollary, let transitions enter and leave model a pedestrian entering and leaving a zebra crossing, and let truck model a truck driving through the zebra crossing. We want to verify that after each occurrence of enter, truck does not occur before leave has occurred. As demonstrated with an example in Section 3, the algorithm does not necessarily keep track of all firings of transitions. So the verification should be based on firing transitions anew at found ω-markings. To find errors as soon as possible, we do not want to wait until the termination of the construction of the minimal coverability set. Instead, we want to do the check every now and then during the construction. Definition 4.2 and Corollary 4.6 tell when it is reasonable to do such a check.

5. Constant-time Reachability Testing

A maximal strongly connected component or strong component of a directed graph (V, E) is a maximal set of vertices V′ ⊆ V such that for any two vertices u and v in V′, there is a path from u to v. The strong components constitute a partition of V. Tarjan's algorithm [11, 1] detects strong components in time O(|V| + |E|). It is based on depth-first search of the graph, and is slower than plain depth-first search only by a small constant factor. A particularly good description of an optimised version of it is in [5].

In our case, V consists of all ω-markings that are encountered and not rejected during the construction of the minimal coverability set, that is, those that are (eventually) stored in F. The edges are defined by (M, M′) ∈ E if and only if there is t ∈ T such that (eventually) M −t→ω M′. This notion, and thus also V and E, depends on the order in which transitions are picked on lines 1, 4, and 24 in Figure 1. Fortunately, this does not confuse Tarjan's algorithm, because an edge is introduced either when the algorithm is ready to investigate it or not at all.

In Figure 1, Tarjan's algorithm is represented via lines 2, 8–12, 19, and 25–26. In addition to W, it uses another stack, which we call S. Its elements, too, are pointers to records in F.

Tarjan's algorithm also uses two attributes on each ω-marking, called index and lowlink. The index is a running number that the ω-marking gets when it is encountered for the first time. It never changes afterwards. The lowlink is the smallest index of any ω-marking that is known to belong to the same strong component as the current ω-marking. When backtracking, and when encountering an ω-marking that has already been visited and is in the same strong component with the current ω-marking, the lowlink value is backward-propagated and the smallest value is kept. The lowlink value is not backward-propagated from ω-markings that belong to already completed strong components.

Each ω-marking is pushed to S when it is found. It is popped from S when its strong component is ready, that is, when the algorithm knows precisely which ω-markings belong to the strong component. After that, the ω-marking never returns to S. Presence in S is tested quickly via an attribute ready that is updated when S is manipulated.
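As a concrete illustration, the index, lowlink, S, and ready bookkeeping described above can be sketched as follows. This is a minimal stand-alone version of Tarjan's algorithm over an explicit successor-list graph; the names graph, visit, and tarjan_sccs are illustrative and not part of Figure 1, which interleaves the same steps with the construction of the ω-markings.

```python
def tarjan_sccs(graph):
    """Compute the strong components of `graph`, a dict mapping each
    vertex to the list of its successors, with Tarjan's algorithm."""
    index = {}    # running number given when a vertex is first encountered
    lowlink = {}  # smallest index known to be in the same strong component
    ready = {}    # ready[v] is True once v's strong component is complete
    S = []        # Tarjan's stack: vertices whose component is not yet ready
    sccs = []
    counter = 0

    def visit(v):
        nonlocal counter
        index[v] = lowlink[v] = counter
        counter += 1
        S.append(v)
        ready[v] = False
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])  # backward-propagate
            elif not ready[w]:
                # w is in S, hence in the same, still open, strong component
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of a strong component: pop the component off S
            component = []
            while True:
                w = S.pop()
                ready[w] = True
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs
```

While the search runs, ready[v] == False holds exactly for the vertices from which the current vertex is reachable; this is the constant-time test that Cover-check exploits below.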

It is easy to check from Figure 1 that an ω-marking is put at the same time to F, A, W, and S, removed from them at different times or not at all, and never again put to any of them. Nothing is removed from F. Always A ⊆ F and W ⊆ S ⊆ F. When the algorithm terminates, W = S = ∅. If W is empty on line 8, then M = M̂, and the algorithm terminates without going via line 11. This is why the reference to W.top.lowlink on line 11 is always well-defined. On line 9, the test S.top ≠ W.top yields true if W is empty but S is not, and false if both are empty.

The following is the central invariant property of Tarjan’s algorithm.


Lemma 5.1. Let M0 ∈ F. There is a path from M0 to Mc (that is, the M of Figure 1) if and only if M0 ∈ S. If M0 ∉ S, then every ω-marking to which there is a path from M0 is neither in S nor in W.

Cover-check(M, A) has to find each M0 such that M0 ∈ A and M0 < M, because they have to be removed from A. When it has found such an M0, it checks whether M0.ready = false, that is, whether M0 ∈ S. This is a constant-time test that reveals whether there is a path from M0 to M. In this way Cover-check detects each holding pumping condition where M0 ∈ A with a constant amount of additional effort per removed element of A.

If ω-symbols are added to M, then the checking is started again from the beginning, because the updated M may strictly cover elements of A that the original M did not strictly cover. They too have to be removed and checked against the pumping condition. When Cover-check terminates, there are no holding instances of the pumping condition where M0 ∈ A, and A no longer contains ω-markings that are strictly covered by M.
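The restart behaviour of Cover-check described above can be sketched as follows, with ω-markings simplified to tuples and ω represented by math.inf. The names strictly_covers and cover_check, and the use of a plain dict for the ready attribute, are stand-ins for the structures of Figure 1, not the actual implementation.

```python
import math

OMEGA = math.inf  # represents the ω symbol: larger than every finite count

def strictly_covers(M, M0):
    """M > M0: componentwise at least as large, and different somewhere."""
    return all(m >= m0 for m, m0 in zip(M, M0)) and tuple(M) != tuple(M0)

def cover_check(M, A, ready):
    """Remove from A every old ω-marking strictly covered by the new M.
    If a removed M0 is still on Tarjan's stack (ready[M0] == False), there
    is a path from M0 to the current ω-marking, so the pumping condition
    holds and the growing places of M are converted to ω.  Whenever
    ω-symbols are added, the scan of A is restarted from the beginning."""
    M = list(M)
    restart = True
    while restart:
        restart = False
        for M0 in list(A):
            if strictly_covers(M, M0):
                A.remove(M0)
                if not ready.get(M0, True):   # constant-time reachability test
                    for p in range(len(M)):
                        if M0[p] < M[p] < OMEGA:
                            M[p] = OMEGA      # pump place p
                            restart = True
                if restart:
                    break                     # updated M may cover more of A
    return tuple(M)
```

For example, with A = {(1, 0), (0, 2)} and (1, 0) still on S, the new marking (2, 0) removes (1, 0), is pumped to (ω, 0), and the restarted scan then leaves (0, 2) in A because it is not strictly covered.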

This method only detects the cases where M0 ∈ A, while [12, 13] use M0 ∈ F. Fortunately, the next theorem tells that this does not make a difference. We first prove a lemma.

Lemma 5.2. Assume that, unlike in the algorithm, the pumping condition is checked against each ω-marking in F (instead of A). Assume further that M0 −t1→ω · · · −tn→ω Mn and Mn = Mc (where Mc is the current ω-marking). Assume also that i and M′i satisfy 0 ≤ i ≤ n, Mi ≤ M′i, and M′i ∈ F. Then the following hold:

(a) There is a path from Mi to M′i.

(b) For every p ∈ P, either M′i(p) = ω or M′i(p) = Mi(p).

(c) If Mc ∈ A, then there is a path from M′i to Mc.

Proof:

(a) If M′i = Mi, then the claim holds trivially, so let Mi < M′i. Because the algorithm did not reject Mi on line 23, it found M′i after Mi. When Mi was found, Mi was put into S. Because Mi −ti+1···tn→ω Mc, by Lemma 5.1 Mi is still in S. So Mi was in S when M′i was found. At that moment M′i became the current ω-marking, so by Lemma 5.1 there was a path from Mi to M′i. Paths do not disappear, so this path still exists.

(b) If M′i = Mi, then the claim holds trivially, so let Mi < M′i. By this, (a), and the assumption that the pumping condition is checked against F, the pumping condition semi-held with Mi when M′i was found. So, for every p ∈ P, either M′i(p) = Mi(p), M′i(p) was converted to ω, or Mi(p) was ω to start with.

(c) If the claim does not hold, there is a maximal j, and for that j a maximal M′j, that satisfy the assumptions of the lemma but such that there is no path from M′j to Mc. To derive a contradiction, assume that such j and M′j exist. Because there is a path from Mj to Mc, we get M′j ≠ Mj, so Mj < M′j. Because Mj < M′j ∈ F, Mn = Mc ∈ A, and A consists of the maximal ω-markings in F, we have j < n. So Mj+1 and the edge Mj −tj+1→ω Mj+1 exist. There are two possibilities: tj+1 has or has not been tried at M′j. We derive a contradiction from each one in turn.

(c1) Assume that tj+1 has been tried at M′j. Let the result be called M′′j+1.


(c1.1) If M′′j+1 was kept by the algorithm, there is the edge M′j −tj+1→ω M′′j+1. Consider any pumping condition that held with some M$ when firing Mj −tj+1→ω Mj+1. Because Mj < M′j, and because by (a) there is a path from Mj to M′j, the condition semi-held with M$ when firing tj+1 at M′j. So M′′j+1 has ω-symbols in at least the same places as Mj+1 has, and we conclude Mj+1 ≤ M′′j+1. A path from M′′j+1 to Mc would yield a path from M′j to Mc, so there is no path from M′′j+1 to Mc. But then M′′j+1 ∈ F, Mj+1 ≤ M′′j+1, and there is no path from M′′j+1 to Mc, contradicting the assumption that j is the biggest possible.

(c1.2) Assume that M′′j+1 was rejected by the algorithm. That is, M′j [tj+1⟩ M′′j+1 and, when M′′j+1 was constructed, there was an M′j+1 ∈ A ⊆ F such that M′′j+1 < M′j+1. We show that Mj+1 ≤ M′j+1. It clearly holds if Mj+1 = M′j+1, so assume that Mj+1 ≠ M′j+1. Because M′j+1 did not cause the rejection of the result of firing tj+1 at Mj, M′j+1 has been found after this firing and thus after Mj+1. Because Mj+1 is still in S, it was in S when M′j+1 was found. So by Lemma 5.1 there is a path from Mj+1 to M′j+1. By Lemma 4.1, if Mj+1(p) = ω, then M′j+1(p) = ω. If Mj+1(p) < ω, then M′j+1(p) ≥ M′′j+1(p) = M′j(p) + ∆tj+1(p) ≥ Mj(p) + ∆tj+1(p) = Mj+1(p). So Mj+1 ≤ M′j+1.

Thus M′j+1 satisfies the assumptions of the lemma. So there is a path from M′j+1 to Mc, because otherwise j would not be the biggest possible. So M′j+1 is in S now, was in S when M′′j+1 was constructed from M′j, and there is a path from M′j+1 to M′j. By Lemma 4.1, for each p ∈ P, if M′j+1(p) = ω then M′′j+1(p) = M′j(p) = ω. By (b), each p ∈ P has either M′j+1(p) = Mj+1(p) or M′j+1(p) = ω. Therefore, if M′j+1(p) < ω then M′j+1(p) = Mj+1(p) ≤ M′′j+1(p). These imply that M′′j+1 ≥ M′j+1, which is in contradiction with M′′j+1 < M′j+1.

(c2) Assume that tj+1 has not been tried at M′j. Because there is no path from M′j to Mc, the algorithm has backtracked from M′j. Therefore, after M′j was found but before tj+1 was tried at M′j, an M′′j was found and put to F such that M′j < M′′j, and consequently M′j.tr was set to nil. By the assumption of the maximality of M′j, there is a path from M′′j to Mc. So M′′j ∈ S now and was in S when the algorithm backtracked from M′j. So there is a path from M′′j to M′j. By Lemma 4.1, for each p ∈ P, if M′′j(p) = ω then M′j(p) = ω. By (b), for each p ∈ P, either M′′j(p) = M′j(p) = ω or M′′j(p) = Mj(p) ≤ M′j(p). So M′′j ≤ M′j, which is in contradiction with M′j < M′′j. ⊓⊔

We are ready to prove the main theorem of this section.

Theorem 5.3. The algorithm in Figure 1 constructs the same ω-markings as it would if F were used instead of A in the pumping conditions.

Proof:

In this case the pumping condition is M > M0 −σ→ω Mc [t⟩ M#, where M0(p) < M(p) < ω holds for at least one place p, σ = t1···tn ∈ T*, and M has been made from M# by replacing the contents of zero or more places by ω in earlier pumping operations. By Lemma 5.1, from each ω-marking in S and from no ω-marking in F \ S there is a path to Mc. The pumping condition triggers the updating of M to M′′ such that for every p ∈ P, either M0(p) = M(p) = M′′(p) or M0(p) < M(p) ≤ M′′(p) = ω.

We prove the claim by induction. We show that in every pumping operation, A causes (at least) the same updates as F, the induction assumption being that also in the previous times A caused the same updates as F. At least the same updates implies precisely the same updates, because A ⊆ F. The induction assumption implies that the data structures have the same contents as they would have if F had been used instead of A. In particular, it justifies the use of Lemma 5.2.

Let the pumping condition hold such that M0 ∈ F. Lemma 3.2 yields M′0 ∈ A such that M0 ≤ M′0. We have Mc ∈ A because the algorithm is currently trying t at Mc. By Lemma 5.2 (c), there is a path from M′0 to Mc. By Lemma 5.2 (b), for every p ∈ P, either M′0(p) = ω or M′0(p) = M0(p). If M′0(p) = ω, then also M(p) = ω by Lemma 4.1. So the pumping condition holds with M′0 and causes precisely the same result as M0 causes. ⊓⊔

6. A Data Structure for Maximal ω-Markings

This section presents a data structure for maintaining A. It has been inspired by Binary Decision Diagrams [3] and Covering Sharing Trees [4]. However, ω-markings are only added one at a time. So we are not presenting a symbolic approach. The purpose of using a BDD-like data structure is to facilitate fast detection of situations where an ω-marking covers another. The details of the data structure have been designed accordingly. We will soon see that they make certain heuristics fast.

We call the M on line 22 of Figure 1 the new ω-marking, while those stored in A are old. Cover-check first uses the data structure to detect whether M is covered by any old ω-marking. If yes, then nothing more needs to be done. In the opposite case, Cover-check then searches for old ω-markings that are covered by M. They are strictly covered, because otherwise the first search would have detected them. The second search cannot be terminated when one is found, because Cover-check has to remove all strictly covered ω-markings from A and use them in the pumping test. Therefore, quickly finding the first old ω-marking that is covered by M is less important than quickly finding the first old ω-marking that covers M. As a consequence, the data structure has been primarily optimised to detect whether any old ω-marking covers the new one, and secondarily for detecting covering in the opposite order.

Let M̄(p) = M(p) if M(p) < ω and M̄(p) = 0 otherwise, and let M̃(p) = 1 if M(p) = ω and M̃(p) = 0 otherwise. That is, M̄ counts the tokens in the finite places of M, and M̃ indicates the places that contain ω. In this section we assume without loss of generality that P = {1, 2, . . . , |P|}.

The data structure consists of |P| + 1 layers. The topmost layer is an array of pointers that is indexed with the total number of ω-symbols in an ω-marking, that is, the number of places p such that M(p) = ω. This number can only be in the range from 0 to |P|, so a small array suffices. An array is more efficient than the linked lists used at the lower layers. The pointer at index w leads to a representation of the set of ω-markings in A that have w ω-symbols each. The example in Figure 2 has |P| = 4. It contains ω-markings with one or two ω-symbols and does not contain ω-markings with zero, three, or four ω-symbols.

Layer |P| consists of |P| + 1 linked lists, one for each total number of ω-symbols. Each node v in the linked list number w contains a value v.m, a pointer to the next node in the list, and a pointer to a representation of those ω-markings in A that have w ω-symbols and whose finite token counts sum up to v.m. The list is ordered in decreasing order of the m values, so that the ω-markings that have the best chance of covering M come first. Each ω-marking in the example of Figure 2 has either one ω-symbol and a total of six tokens, or two ω-symbols and a total of eight, seven, or four tokens.
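The two topmost layers can be sketched as follows. This simplified stand-in keys the old ω-markings first by their number of ω-symbols and then by their finite token sums in decreasing order, and uses those keys to prune the search for a covering old marking. The suffix sharing of the lower layers of Figure 2 is omitted, and the class and method names are illustrative.

```python
import math

OMEGA = math.inf  # ω: covers every finite token count

def covers(Ma, Mb):
    """Ma ≥ Mb componentwise; ω covers everything."""
    return all(a >= b for a, b in zip(Ma, Mb))

class MaxMarkingSet:
    """Simplified two-layer index for the antichain A of maximal ω-markings.
    Top layer: array indexed by the number of ω-symbols (0..|P|).
    Next layer: buckets keyed by the finite token sum, scanned in
    decreasing order so that likely covering markings come first."""

    def __init__(self, nplaces):
        self.layers = [dict() for _ in range(nplaces + 1)]  # w -> {m: [markings]}

    @staticmethod
    def _keys(M):
        w = sum(1 for x in M if x == OMEGA)   # number of ω-symbols
        m = sum(x for x in M if x < OMEGA)    # finite token sum
        return w, m

    def covered_by_old(self, M):
        """Is M covered by some stored marking?  A covering marking must
        have at least as many ω-symbols as M, and with an equal number
        of ω-symbols it must have at least as many finite tokens."""
        w, m = self._keys(M)
        for w2 in range(len(self.layers) - 1, w - 1, -1):
            for m2 in sorted(self.layers[w2], reverse=True):
                if w2 == w and m2 < m:
                    break                     # too few tokens to cover M
                if any(covers(old, M) for old in self.layers[w2][m2]):
                    return True
        return False

    def add(self, M):
        """Insert M, removing stored markings strictly covered by it.
        Returns the removed markings (Cover-check pumps against these)."""
        removed = [old for layer in self.layers for bucket in layer.values()
                   for old in bucket if covers(M, old) and old != M]
        for old in removed:
            w2, m2 = self._keys(old)
            self.layers[w2][m2].remove(old)
            if not self.layers[w2][m2]:
                del self.layers[w2][m2]
        w, m = self._keys(M)
        self.layers[w].setdefault(m, []).append(M)
        return removed
```

The point of the ordering is the early break: with an equal number of ω-symbols, a smaller finite token sum makes covering impossible, so the rest of that list can be skipped.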

Let 1 ≤ ℓ < |P|. Each node v on layer ℓ contains two values v.w and v.m, a link to the next node on the same layer, and a link to a node on layer ℓ − 1. Of course, this last link is nil if ℓ = 1. If precisely one path leads to v, then v represents the set of those ω-markings in A whose places 1, . . . , ℓ contain v.w ω-symbols and v.m tokens in total, and whose places greater than ℓ have the unique contents determined by the path, as will be discussed below. If more than one path leads to v, then v represents more than one subset of
