Lines Matching defs:the

22 The {\em parameter domain} of $S$ is the set
49 The {\em parameter domain} of $R$ is the set
55 The {\em domain} of $R$ is the polyhedral set
65 The {\em range} of $R$ is the polyhedral set
77 then the composition of
93 The difference set ($\Delta \, R$) of $R$ is the set
94 of differences between image elements and the corresponding
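(As a quick illustration of this definition, with an example of our own: the relation $R = \{\, x \to x + 2 \mid 0 \le x \le n \,\}$ has difference set $\Delta \, R = \{\, 2 \,\}$, since every image element exceeds the corresponding domain element by exactly $2$.)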
109 For rational sets, the obvious choice would be to compute the
110 (rational) convex hull. For integer sets, the obvious choice
111 would be the integer hull.
117 Usually, it is not required to compute the exact integer hull,
120 and it is defined as the (inclusion-wise) smallest basic set
122 the constraints in the input set.
123 This means that the simple hull is relatively cheap to compute
124 and that the number of constraints in the simple hull is no
125 larger than the number of constraints in the input.
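(A small invented example: for the input set $\{\, x \mid 0 \le x \le 2 \,\} \cup \{\, x \mid 3 \le x \le 5 \,\}$, the available constraints are $x \ge 0$, $2 - x \ge 0$, $x - 3 \ge 0$ and $5 - x \ge 0$; the simple hull keeps $x \ge 0$ and $5 - x \ge 0$, yielding the basic set $\{\, x \mid 0 \le x \le 5 \,\}$ with no more constraints than the input.)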
136 is the set
146 with $\vec K_i$ the (component-wise) smallest non-negative integer vectors
151 If any LP problem is unbounded, then the corresponding constraint
159 is used to solve many problems within the context of the polyhedral model.
164 \shortcite{BouletRe98,Verdoolaege2005experiences} and lies at the core
165 of the internal representation of {\tt isl}.
171 generating functions \cite{Woods2003short} and was inspired by the
175 In the following sections, we briefly recall the dual simplex
179 there are two tableaus, one for the main problem and one for
180 the constraints on the parameters, known as the context tableau.
181 The handling of the context tableau is described in \autoref{s:context}.
186 In {\tt isl}, the dual simplex method uses the same representation
187 as that used by its incremental LP solver based on the \emph{primal}
190 was derived from the work of \shortciteN{Nelson1980phd}.
191 In the original \shortcite{Nelson1980phd}, the tableau was implemented
192 as a sparse matrix, but neither {\tt Simplify} nor the current
195 Given some affine constraints on the variables,
196 $A \vec x + \vec b \ge \vec 0$, the tableau represents the relationship
197 between the variables $\vec x$ and non-negative variables
198 $\vec y = A \vec x + \vec b$ corresponding to the constraints.
201 \end{pmatrix}$ and expresses the constraints $\vec y$ in the rows in terms
202 of the variables $\vec x$ in the columns. The main operation defined
205 As in the \texttt{PipLib} implementation,
208 zero and each row variable is assigned the constant term of the row.
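(A hypothetical concrete instance of this layout: the constraints $y_1 = 1 + x_1 - x_2 \ge 0$ and $y_2 = 3 - x_1 \ge 0$ would be stored as
\[
\begin{pmatrix}
1 & 1 & -1 \\
3 & -1 & 0
\end{pmatrix} ,
\]
with the constant terms in the first column. In the initial tableau, $x_1$ and $x_2$ are column variables with sample value zero, so the row variables $y_1$ and $y_2$ take their constant terms $1$ and $3$ as sample values. The actual representation inside {\tt isl} is more elaborate, e.g., it uses exact integer arithmetic.)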
210 is assigned a non-negative value, i.e., if the constant terms of
221 the new row variable will have a positive sample value $n$.
222 If no such column can be found, then the problem is infeasible.
223 By always choosing the column that leads to the (lexicographically)
224 smallest increment in the variables $\vec x$,
225 the first solution found is guaranteed to be the (lexicographically)
227 In order to be able to determine the smallest increment, the tableau
228 is (implicitly) extended with extra rows defining the original
229 variables in terms of the column variables.
231 that the zero vector is no greater than the minimal solution and
232 then the initial extended tableau looks as follows.
247 and will remain so because of the column choice explained above.
248 It is then clear that the value of $\vec x$ will increase in each step.
249 Note that there is no need to store the extra rows explicitly.
250 If a given $x_i$ is a column variable, then the corresponding row
251 is the unit vector $e_i$. If, on the other hand, it is a row variable,
252 then the row already appears somewhere else in the tableau.
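To make the pivoting operation concrete, here is a minimal sketch in C of a dense tableau pivot that exchanges a row variable with a column variable. It is only an illustration under simplifying assumptions: it uses plain {\tt double}s and a fixed-size tableau, whereas the representation inside {\tt isl} is exact and considerably more elaborate.
\begin{verbatim}
/* Sketch of a tableau pivot: exchange row variable r with column
 * variable c.  Column 0 holds the constant terms. */
#include <stdio.h>

#define ROWS 2
#define COLS 3

static double T[ROWS][COLS] = {
    { 1.0, -1.0,  2.0 },   /* y0 = 1 - x0 + 2 x1 */
    { 3.0,  4.0, -1.0 },   /* y1 = 3 + 4 x0 - x1 */
};

static void pivot(int r, int c)
{
    double p = T[r][c];
    int i, j;

    /* Row r now defines the old column variable. */
    for (j = 0; j < COLS; ++j)
        T[r][j] = j == c ? 1.0 / p : -T[r][j] / p;
    /* Substitute that definition in the other rows. */
    for (i = 0; i < ROWS; ++i) {
        double a = T[i][c];
        if (i == r)
            continue;
        for (j = 0; j < COLS; ++j)
            T[i][j] = j == c ? a / p : T[i][j] + a * T[r][j];
    }
}

int main(void)
{
    int i, j;

    pivot(0, 1);   /* exchange y0 with x0 */
    for (i = 0; i < ROWS; ++i) {
        for (j = 0; j < COLS; ++j)
            printf("%7.2f", T[i][j]);
        printf("\n");
    }
    return 0;
}
\end{verbatim}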
254 In case of parametric problems, the sign of the constant term
255 may depend on the parameters. Each time the constant term of a constraint row
256 changes, we therefore need to check whether the new term can attain
257 negative and/or positive values over the current set of possible
258 parameter values, i.e., the context.
259 If all these terms can only attain non-negative values, the current
260 state of the tableau represents a solution. If one of the terms
262 the corresponding row can be pivoted.
263 Otherwise, we pick one of the terms that can attain both positive
264 and negative values and split the context into a part where
270 The solution found by the dual simplex method may have
272 (including the current sample value), can be cut off by
274 Let $r = b(\vec p) + \sp {\vec a} {\vec c}$ be the row
275 corresponding to the first non-integral coordinate of $\vec x$,
276 with $b(\vec p)$ the constant term, an affine expression in the
279 non-integral values as the sample value of the column variables is zero.
280 Consider the expression
282 with $\ceil\cdot$ the ceiling function and $\fract\cdot$ the
283 fractional part. This expression is negative at the sample value
285 $\ceil{b(\vec p)} > b(\vec p)$. On the other hand, for each integral
286 value of $r$ and $\vec c \ge 0$, the expression is non-negative
289 invalidate any integral solutions, while it does cut away the current
292 to the context. This integral variable is uniquely defined by the constraints
293 $0 \le -d \, b(\vec p) - d \, q \le d - 1$, with $d$ the common
294 denominator of $\vec f$ and $g$. In practice, the variable
296 and the coefficients of the new constraint are adjusted accordingly.
297 The sign of the constant term of this new constraint need not be determined
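As a worked example of such a cut (ours; the expression itself is elided from this listing, but $b(\vec p) - \ceil{b(\vec p)} + \sp {\fract{\vec a}} {\vec c}$ has exactly the quoted properties): suppose the row reads $r = \frac{1}{2} + \frac{3}{2} c_1 + c_2$. The expression then evaluates to $-\frac{1}{2} + \frac{1}{2} c_1$, which is negative at the sample value $\vec c = \vec 0$, while requiring it to be non-negative yields the cut $c_1 \ge 1$. Indeed, $r$ can only be integral for integer $c_1, c_2 \ge 0$ when $c_1$ is odd, so the cut does not remove any integral solution.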
305 There are two places in the above algorithm where the unknowns $\vec x$
306 are assumed to be non-negative: the initial tableau starts from
308 during the construction of Gomory cuts.
311 an arbitrarily large positive number. Instead of looking for the
313 for the lexicographically minimal value of $\vec x' = \vec M + \vec x$.
314 The sample value $\vec x' = \vec 0$ of the initial tableau then
316 any potential solution. The sign of the constant term of a row
317 is determined lexicographically, with the coefficient of $M$ considered
318 first. That is, if the coefficient of $M$ is not zero, then its sign
319 is the sign of the entire term. Otherwise, the sign is determined
320 by the remaining affine expression in the parameters.
321 If the original problem has a bounded optimum, then the final sample
322 value will be of the form $\vec M + \vec v$ and the optimal value
323 of the original problem is then $\vec v$.
325 the minimum of $\vec M - \vec x$.
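A tiny invented illustration of this substitution: to minimize $x$ subject to $x \ge -5$ with no sign constraint on $x$, one minimizes $x' = M + x \ge 0$ subject to $x' \ge M - 5$. The final sample value $x' = M - 5$ is of the form $M + v$ with $v = -5$, so the optimum of the original problem is $x = -5$.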
327 When the optimum is unbounded, the optimal value computed for
328 the original problem will involve the big parameter.
329 In the original implementation of {\tt PipLib}, the big parameter could
330 even appear in some of the extra variables $\vec q$ created during
331 the application of a Gomory cut. The final result could then contain
332 implicit conditions on the big parameter through conditions on such
338 indicates an incorrect formulation of the problem.
340 The original version of {\tt PipLib} required the user to ``manually''
341 add a big parameter, perform the reformulation and interpret the result
342 \shortcite{Feautrier02}. Recent versions allow the user to simply
343 specify that the unknowns may be negative or that the maximum should
347 where it is useful to have explicit control over the big parameter,
348 negative unknowns and maximization are by far the most common applications
349 of the big parameter and we believe that the user should not be bothered
352 provide any interface for specifying big parameters. Instead, the user
354 are made on the sign of the unknowns. Instead, the sign of the unknowns
356 needed. For compatibility with {\tt PipLib}, the {\tt isl\_pip} tool
357 does explicitly add non-negativity constraints on the unknowns unless
358 the \verb+Urs_unknowns+ option is specified.
360 parameter in the output. Even though
361 {\tt isl} makes the same divisibility assumption on the big parameter
363 produce an error if the problem turns out to be unbounded.
368 or can be applied in advance to reduce the running time
369 of the actual dual simplex method with Gomory cuts.
373 Experience with the original {\tt PipLib} has shown that Gomory cuts
377 the original problem considered as a non-parametric problem
378 over the combined space of unknowns and parameters.
379 In fact, we do not simply check the feasibility, but we also
380 check for implicit equalities among the integer points by computing
381 the integer affine hull. The algorithm used is the same as that
383 Computing the affine hull is fairly expensive, but it can
384 bring huge benefits if any equalities can be found or if the problem
389 If the coefficients of the unknowns and parameters in a constraint
391 rounding down the constant term. For example, the constraint
395 on the input.
399 If there are any (explicit) equalities in the input description,
402 \shortcite{Feautrier02}, but it is even better to \emph{exploit} the
403 equalities to reduce the dimensionality of the problem.
405 the row corresponding to the equality with the column corresponding
406 to the last unknown with non-zero coefficient. The new column variable
408 thereby reducing the dimensionality of the problem by one.
409 The last unknown is chosen to ensure that the columns of the initial
411 the equality is of the form $b + \sum_{i \le j} a_i \, x_i = 0$ with
412 $a_j \ne 0$, then the (implicit) top rows of the initial tableau
444 in a similar way, provided at least one of the coefficients is equal to one.
446 would obviate the need for removing parametric equalities.
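(A tiny invented example of such an elimination: the equality $3 + x_1 - x_2 = 0$ defines $x_2 = 3 + x_1$; pivoting the equality row with the column of $x_2$ and substituting this definition everywhere removes $x_2$ from the problem, reducing its dimensionality by one.)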
455 of the context and this context may have to be split for each
456 of the constraints. In the worst case, the basic algorithm may
457 have to consider all possible orderings of the constant terms.
459 replaces the collection of constraints by the single
462 Any solution to the new system is also a solution
463 to the original system since
465 Conversely, $m = \min_i b_i(\vec p)$ satisfies the constraints
466 on $u$ and therefore extends a solution to the new system.
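For instance (our example), the constraints $n - x \ge 0$ and $m - x \ge 0$, which differ only in their parametric constant terms, can be replaced by the single constraint $u - x \ge 0$, with $u$ a new parameter subject to $u \le n$ and $u \le m$: any solution of the new system is a solution of the original one, and $u = \min(n, m)$ extends any original solution to the new system.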
471 that are explicitly available in the input.
472 See \autoref{s:online} for the detection
473 and exploitation of symmetries that appear during the course of
474 the dual simplex method.
478 It may in some cases be apparent from the equalities in the problem
480 of the parameters. In such cases ``parameter compression''
482 the parameters by alternative ``dense'' parameters.
483 For example, if there is a constraint $2x = n$, then the system
485 by $2n'$. Similarly, the parameters $n$ and $m$ in a system with
486 the constraint $2n = 3m$ can be replaced by a single parameter $n'$
488 It is also possible to perform a similar compression on the unknowns,
489 but it would be more complicated as the compression would have to
490 preserve the lexicographical order. Moreover, due to our handling
499 Each internal node in this tree corresponds to a split of the context
500 based on a parametric constant term in the main tableau with indeterminate
501 sign. Each of these nodes may introduce extra variables in the context
502 corresponding to integer divisions. Each leaf of the tree prescribes
503 the solution in that part of the context that satisfies all the conditions
504 on the path leading to the leaf.
505 Such a quast is a very economical way of representing the solution, but
506 it would not be suitable as the (only) internal representation of
508 the constraints of a set or relation in disjunctive normal form.
510 converted to this internal representation. Unfortunately, the conversion
511 to disjunctive normal form can lead to an explosion of the size
512 of the representation.
518 representations and on allowing the output of {\tt isl\_pip} to optionally
522 for expressions such as $\min_i b_i(\vec p)$ from the offline
525 If the expression
526 does not appear in the affine expression describing the solution,
527 but only in the constraints, and if moreover, the expression
530 can simply be duplicated $n$ times, once for each of the bounds.
537 by a factor of $n$, not detecting the symmetry could lead to
544 on the sign of an affine expression over the elements of the context.
546 problems, one with a constraint added to the context that enforces
547 the expression to be non-negative and one where the expression is
549 any integer linear feasibility solver could be used, but the {\tt PipLib}
550 implementation uses a recursive call to the dual simplex with Gomory
551 cuts algorithm to determine the feasibility of a context.
552 In {\tt isl}, two ways of handling the context have been implemented,
553 one that performs the recursive call and one, used by default, that
555 We start with some optimizations that are shared between the two
561 they will not only say whether a set is empty or not, but if the set
563 i.e., a point that belongs to the set. By maintaining a list of such
564 witnesses, we can avoid many feasibility tests during the determination
565 of the signs of affine expressions. In particular, if the expression
568 If all the evaluations are non-negative, we only need to check for the
570 non-positive evaluations. Finally, in the rare case that all points
571 evaluate to zero or at the start, when no points have been collected yet,
572 one or two feasibility tests need to be performed depending on the result
573 of the first test.
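The following C sketch (ours; {\tt is\_feasible\_with()} is a mere stub standing in for a real integer feasibility test, e.g., a dual simplex run on the context) illustrates how cached witness points reduce the number of feasibility tests when classifying the sign of an affine expression.
\begin{verbatim}
/* Classify the sign of an affine expression c[0] + c[1] x0 + ...
 * over the context, using a list of cached witness points before
 * falling back to (stubbed) feasibility tests. */
#include <stdio.h>

#define DIM  2
#define NPTS 3

static int eval(const int c[DIM + 1], const int pt[DIM])
{
    int i, v = c[0];

    for (i = 0; i < DIM; ++i)
        v += c[i + 1] * pt[i];
    return v;
}

/* Stub: can the context contain a point where the expression is
 * negative (neg = 1) or positive (neg = 0)?  A real implementation
 * would run an integer feasibility test on the context. */
static int is_feasible_with(const int c[DIM + 1], int neg)
{
    (void) c;
    (void) neg;
    return 1;
}

/* Return 1 for non-negative, -1 for non-positive, 0 for both. */
static int sign_over_context(const int c[DIM + 1],
                             int pts[NPTS][DIM])
{
    int k, pos = 0, neg = 0;

    for (k = 0; k < NPTS; ++k) {
        int v = eval(c, pts[k]);
        pos |= v > 0;
        neg |= v < 0;
    }
    if (pos && neg)        /* the witnesses already decide */
        return 0;
    if (pos)               /* only test for negative values */
        return is_feasible_with(c, 1) ? 0 : 1;
    if (neg)               /* only test for positive values */
        return is_feasible_with(c, 0) ? 0 : -1;
    /* all witnesses evaluate to zero (or none cached yet):
     * one or two tests, depending on the first result */
    if (!is_feasible_with(c, 1))
        return 1;
    return is_feasible_with(c, 0) ? 0 : -1;
}

int main(void)
{
    int pts[NPTS][DIM] = { { 0, 0 }, { 1, 2 }, { 3, 1 } };
    int c[DIM + 1] = { 1, 1, 0 };  /* the expression 1 + x0 */

    printf("sign: %d\n", sign_over_context(c, pts));
    return 0;
}
\end{verbatim}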
575 When a new constraint is added to the context, the points that
576 violate the constraint are temporarily removed. They are reconsidered
577 when we backtrack over the addition of the constraint, as they will
578 satisfy the negation of the constraint. It is only when we backtrack
579 over the addition of the points that they are finally removed completely.
580 When an extra integer division is added to the context,
581 the new coordinates of the
582 witnesses can easily be computed by evaluating the integer division.
588 but there are rows with an indeterminate sign, then the context
589 needs to be split along the constant term of one of these rows.
591 to split on first. {\tt PipLib} uses a heuristic based on the (absolute)
592 sizes of the coefficients. In particular, it takes the largest coefficient
593 of each row and then selects the row whose largest coefficient is smaller
594 than those of the other rows.
597 term implies non-negativity of as many of the constant terms of the other
598 rows as possible. The intuition behind this heuristic is that on the
600 while on the negative side, we need to perform a pivot, which may
601 affect any number of rows, meaning that the effect on the signs
603 expensive to evaluate than the heuristic used by {\tt PipLib}.
604 More extensive tests are needed to evaluate whether the heuristic is worthwhile.
608 When a new constraint is added to the context, the first steps
609 of the dual simplex method applied to this new context will be the same
610 or at least very similar to those taken on the original context, i.e.,
611 before the constraint was added. In {\tt isl}, we therefore apply
612 the dual simplex method incrementally on the context and backtrack
615 keep the Gomory cuts, but the current implementation backtracks
616 to before the point where Gomory cuts are added before adding
617 an extra constraint to the context.
618 Keeping the Gomory cuts has the advantage that the sample value
620 the new constraint. However, due to the technique of maintaining
623 the previously added cuts may be redundant, possibly resulting
626 If the parameters may be negative, then the same big parameter trick
627 used in the main tableau is applied to the context. This big parameter
628 is of course unrelated to the big parameter from the main tableau.
631 In {\tt PipLib}, the extra parameter is not ``big'', but this may be because
632 the big parameter of the main tableau also appears
633 in the context tableau.
639 in the context tableau. Based on this report,
640 the initial {\tt isl} implementation was adapted accordingly.
646 This algorithm is also used in the {\tt barvinok} implementation.
648 We therefore try to avoid calling the algorithm in easy cases.
650 the entire unit hypercube positioned at that point lies in the context.
651 This set is described by translates of the constraints of the context
653 in the set can be rounded up to yield an integer point in the context.
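Spelling this out (our reconstruction of the translation): if the context is described by constraints $\sp {\vec a} {\vec x} + c \ge 0$, then the set of points at which the whole unit hypercube fits inside the context is described by the translated constraints
\[
\sp {\vec a} {\vec x} + c + \sum_{a_i < 0} a_i \ge 0 ,
\]
since $\min_{\vec u \in [0,1]^d} \sp {\vec a} {\vec u} = \sum_{a_i < 0} a_i$. Any rational point $\vec x$ in this set can then be rounded up component-wise to $\ceil{\vec x} = \vec x + \vec u$ with $\vec u \in [0,1)^d$, which is an integer point satisfying the original constraints.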
655 A restriction of the algorithm is that it only works on bounded sets.
656 The affine hull of the recession cone therefore needs to be projected
657 out first. As soon as the algorithm is invoked, we then also
659 found by one call of the algorithm is also reused as initial basis
660 for the next call.
662 Some problems lead to the
665 if the expressions are not identical, or they may be equal to some
667 To detect such cases, we compute the affine hull of the context
670 while the points used in this algorithm are obtained by performing
671 integer feasibility checks on that part of the context outside
672 the current approximation of the affine hull.
674 of the hull, while any extra points found during the construction
675 of the hull are added to this list.
678 propagated to the main tableau, where it is used to eliminate that
683 \autoref{t:comparison} compares the execution times of {\tt isl}
687 Easier problems such as the
690 and not the running time of the actual algorithm.
691 We compare the following versions:
697 The first test case is the following dependence analysis problem
698 originating from the Phideo project \shortcite{Verhaegh1995PhD}
703 This problem was the main inspiration
704 for some of the optimizations in \autoref{s:GBR}.
708 were used to drive the first, Gomory cuts based, implementation
711 \shortciteN{Bygde2010licentiate} and inspired the offline symmetry detection
712 of \autoref{s:offline}. Without symmetry detection, the running times
715 to disjunctive normal form. Without this conversion, the final two
718 sometimes report the problem to be too complex (TC), while on some other
723 minutes on the first problem. The gbr version introduces some
724 overhead on some of the easier problems, but is overall the clear winner.
757 Manual experiments on small instances of the problems of
758 \shortciteN{Bygde2010licentiate} and an analysis of the results
759 by the approximate MPA method developed by \shortciteN{Bygde2010licentiate}
761 than can be detected using the offline method of \autoref{s:offline}.
767 symmetry detection. At some point, one of the
769 say the $j$th constraint, appears as a column
770 variable, say $c_1$, while the other constraints are represented
771 as rows of the form $b_i(\vec p) - b_j(\vec p) + c$.
772 The context is then split according to the relative order of
773 $b_j(\vec p)$ and one of the remaining $b_i(\vec p)$.
775 by a single newly introduced parameter that represents the minimum
777 In the online method the split is similarly avoided by the introduction
784 of the tableau such that the sign of $b(\vec p)$ is indeterminate
785 and such that exactly one of the elements of $\vec a$ is a $1$,
790 the column variable $c_j$ by $c' + t$. The row $r$ is now equal
798 Given a valid solution for the original problem, we need to find
799 a non-negative value of $c'$ satisfying the constraints.
807 Conversely, given a solution to the new problem, we need to find
812 \autoref{s:post}, but, as in the case of offline symmetry detection,
814 expressions in the internal representation of sets and relations
844 then the transitive closure $R^+$ of $R$ is the union
851 Alternatively, the transitive closure may be defined
859 Since the transitive closure of a polyhedral relation
861 we can, in the general case, only compute an approximation
862 of the transitive closure.
867 to be as close as possible to the actual transitive closure
868 $R^+$ and we want to detect the cases where the approximation is
871 For computing an approximation of the transitive closure of $R$,
872 we follow the same general strategy as \shortciteN{Beletska2009}
874 out the parameter $k$ from the resulting relation.
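For reference, the standard definitions alluded to here are
\[
R^+ = \bigcup_{k \ge 1} R^k ,
\qquad
R^1 = R , \quad R^k = R \circ R^{k-1} \text{ for } k \ge 2 ,
\]
so the strategy amounts to computing a parametric (over)approximation of the power $R^k$ and then projecting out $k$.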
877 As a trivial example, consider the relation
898 There are some special cases where the computation of $R^k$ is very easy.
907 As a first approximation, we will consider each of the basic
909 to arrive at an image element and ignore the fact that some of these
910 offsets may only be applied to some of the domain elements.
911 That is, we will only consider the difference set $\Delta\,R$ of the relation.
932 and with $\Delta_i$ the basic sets that compose
933 the difference set $\Delta\,R$.
934 Note that the number of basic sets $\Delta_i$ need not be
935 the same as the number of basic relations in $R$.
937 matter in which order we add the offsets and so we are allowed
940 If all the $\Delta_i$s are singleton sets
952 and then the approximation computed in \eqref{eq:transitive:approx}
953 is essentially the same as that of \shortciteN{Beletska2009}.
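A minimal invented instance of this singleton case: for $R = \{\, x \to x + 2 \mid 0 \le x \le n - 2 \,\}$ we have $\Delta \, R = \{\, 2 \,\}$, a path of length $k$ simply adds the offset $k$ times, and the resulting power
\[
R^k = \{\, x \to y \mid y = x + 2 k \wedge 0 \le x \wedge y \le n \,\}
\]
is exact.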
954 If some of the $\Delta_i$s are not singleton sets or if
958 To ease both the exposition and the implementation, we will for
959 the remainder of this section work with extended offsets
963 offsets have the length encoded as the difference of their
987 We therefore need to take the union with the identity relation
988 when composing the $P_i'$s to allow for paths that do not contain
991 by imposing the constraint $y_{d+1} - x_{d+1} > 0$.
992 Taking the union with the identity relation means that
993 the relations we compose in \eqref{eq:transitive:decompose}
995 disjuncts in the input relation, then a direct application
996 of the composition operation may therefore result in a relation
1008 We will return to existentially quantified variables at the end
1011 the constraints of $\Delta_i'$ as follows
1045 We will use the following approximation $Q_i$ for $P_i'$:
1070 $(\vec f, k)$ satisfies the constraints in \eqref{eq:transitive:Q}.
1072 the constraints in \eqref{eq:transitive:parametric}.
1075 Each of these elements satisfies the constraints in
1091 The sum of these elements therefore satisfies the same set of inequalities,
1093 Finally, the constraints in \eqref{eq:transitive:mixed} are such
1094 that for any $\vec s$ in the parameter domain of $\Delta$,
1098 Note that if there are no mixed constraints and if the
1101 has integer vertices, then the approximation is exact, i.e.,
1102 $Q_i = P_i'$. In this case, the vertices of $\Delta'_i(\vec s)$
1103 generate the rational cone
1112 if there \emph{are} any mixed constraints, then the above procedure may
1113 not compute the most accurate affine approximation of
1115 In particular, we only consider the negative mixed constraints that
1116 happen to appear in the description of $\Delta_i(\vec s)$, while we
1127 We then have $k \, \spv a x < \spv a x$ for $k > 1$ and so the constraint
1132 constraint of $\Delta_i(\vec s)$. That is, $(\vec b, c)$ are the opposites
1133 of the coefficients of a valid constraint of $\Delta_i(\vec s)$.
1135 using three applications of Farkas' lemma. The first obtains the coefficients
1137 the coefficients of constraints valid for the projection of $\Delta_i(\vec s)$
1138 onto the parameters. The opposite of the second set is then computed
1139 and intersected with the first set. The result is the set of coefficients
1141 of Farkas' lemma is needed to obtain the approximation of
1145 Consider the relation
1155 Using our approach, we would only consider the mixed constraint
1156 $y - 1 + n \ge 0$, leading to the following approximation of the
1204 Finally, the computed approximation for $R^+$,
1220 determined by the parameters, variables that are independent
1221 of the parameters and others. The first set can be treated
1222 as parameters and the second as variables. Constraints involving
1223 the other existentially quantified variables are removed.
1226 Consider the relation
1239 of the parameters and variables as
1251 a purely parametric constraint, while the other two constraints are
1269 Projecting out the final coordinates encoding the length of the paths,
1270 results in the exact transitive closure
1280 by not considering the parameters in a special way.
1281 That is, instead of considering the set
1289 we consider the set
1300 to projecting out the parameters in $\Delta$.
1303 all be used in the construction of $Q_i$.
1306 Consider the relation
1316 and so, by treating the parameters in a special way, we obtain
1317 the following approximation for $R^+$:
1330 and we obtain the approximation
1340 Note, however, that this is not the most accurate affine approximation that
1350 The approximation $T$ for the transitive closure $R^+$ can be obtained
1351 by projecting out the parameter $k$ from the approximation $K$
1352 \eqref{eq:transitive:approx} of the power $R^k$.
1355 To check whether the results are exact, we need to consider two
1359 If $R$ is acyclic, then the inductive definition of
1376 If, on the other hand, $R$ is cyclic, then we have to resort
1377 to checking whether the approximation $K$ of the power is exact.
1378 Note that $T$ may be exact even if $K$ is not exact, so the check
1380 To check exactness of the power, we simply need to check
1395 All that remains is to explain how to check the cyclicity of $R$.
1396 Note that the exactness check on the power is always sound, even
1397 in the acyclic case, so we only need to be careful that we find
1404 In the implementation we currently perform this test on $P'$ instead of $K'$.
1405 Note that if $R^+$ is acyclic and $T$ is not, then the approximation
1406 is clearly not exact and the approximation of the power $K$
1411 If the input relation $R$ is a union of several basic relations
1413 then the accuracy of the approximation may be improved by computing
1425 Note, however, that the condition $R_1 \circ R_2 = \emptyset$
1428 then we can reorder the segments
1433 of more than two basic relations by constructing the
1434 strongly connected components in the graph with as vertices
1435 the basic relations and an edge between two basic relations
1445 The components can be obtained from the graph by applying
1448 In practice, we compute the (extended) powers $K_i'$ of each component
1450 Note, however, that in this case the order in which we apply them is
1451 important and should correspond to a topological ordering of the
1454 The graph on which Tarjan's algorithm is applied is constructed on-the-fly.
1455 That is, whenever the algorithm checks if there is an edge between
1458 If the approximation turns out to be inexact for any of the components,
1459 then the entire result is marked inexact and the exactness check
1460 is skipped on the components that still need to be handled.
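A compact sketch of Tarjan's algorithm on a toy graph of basic relations (ours; the actual implementation builds the graph on the fly, as described above, and differs in many details). A convenient property is that Tarjan's algorithm emits the components in reverse topological order, which is exactly what the composition step needs.
\begin{verbatim}
/* Sketch of Tarjan's strongly connected components algorithm on a
 * toy adjacency matrix; vertex i stands for basic relation R_i and
 * edge[i][j] for "composing R_i with R_j may be non-empty". */
#include <stdio.h>

#define N 4

static int edge[N][N];
static int order[N], low[N], on_stack[N], stack[N];
static int count = 0, top = 0, ncomp = 0, comp[N];

static int min(int a, int b) { return a < b ? a : b; }

static void strongconnect(int v)
{
    int w;

    order[v] = low[v] = ++count;
    stack[top++] = v;
    on_stack[v] = 1;
    for (w = 0; w < N; ++w) {
        if (!edge[v][w])
            continue;
        if (!order[w]) {
            strongconnect(w);
            low[v] = min(low[v], low[w]);
        } else if (on_stack[w]) {
            low[v] = min(low[v], order[w]);
        }
    }
    if (low[v] == order[v]) {   /* v is the root of a component */
        do {
            w = stack[--top];
            on_stack[w] = 0;
            comp[w] = ncomp;
        } while (w != v);
        ncomp++;
    }
}

int main(void)
{
    int v;

    edge[0][1] = edge[1][0] = 1;   /* R_0, R_1 form a cycle */
    edge[1][2] = 1;                /* R_1 feeds into R_2 */
    edge[2][3] = edge[3][2] = 1;   /* R_2, R_3 form a cycle */
    for (v = 0; v < N; ++v)
        if (!order[v])
            strongconnect(v);
    for (v = 0; v < N; ++v)
        printf("R_%d in component %d\n", v, comp[v]);
    return 0;
}
\end{verbatim}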
1464 If overapproximations are computed in the right hand side, then the result will
1465 still be an overapproximation of the left hand side, but this result
1467 on the condition $R_i \circ R_j = \emptyset$, then there is no problem,
1468 as this condition will still hold on the computed approximations
1469 of the transitive closures. If, however, we have exploited
1470 \eqref{eq:transitive:edge} during the decomposition and if the
1472 the result is transitively closed. If not, we recompute
1473 the transitive closure, skipping the decomposition.
1474 Note that testing for transitive closedness on the result may
1498 Consider the relation in example {\tt closure4} that comes with
1499 the Omega calculator~\shortcite{Omega_calc}, $R = R_1 \cup R_2$,
1578 Consider the relation on the right of \shortciteN[Figure~2]{Beletska2009},
1609 which the reader can verify using the {\tt iscc} calculator:
1621 For the other two basic relations, we have both
1626 By computing the power of $R_3$ and $R_1 \cup R_2$ separately
1627 and composing the results, the power of $R$ can be computed exactly
1629 As explained by \shortciteN{Beletska2009}, applying the same formula
1631 an overapproximation of the power.
1634 \subsection{Partitioning the domains and ranges of $R$}
1636 The algorithm of \autoref{s:power} assumes that the input relation $R$
1639 abstract domain to the same domain.
1647 be applied to the dependence graph, as advocated by
1648 \shortciteN{Kelly1996closure}, with the transitive closure operation
1662 $R_{pq}$ contains all indirect paths from $p$ to $q$ in the input graph}
1682 Let the input relation $R$ be a union of $m$ basic relations $R_i$.
1683 Let $D_{2i}$ be the domain of $R_i$ and $D_{2i+1}$ the range of $R_i$.
1685 obtained. If the resulting partition consists of a single part,
1686 then we continue with the algorithm of \autoref{s:power}.
1687 Otherwise, we apply Floyd-Warshall on the graph with as vertices
1688 the parts of the partition and as edges the $R_i$ attached to
1689 the appropriate pairs of vertices.
1690 In particular, let there be $n$ parts $P_k$ in the partition.
1697 apply \autoref{a:Floyd} and return the union of all resulting
1698 $R_{pq}$ as the transitive closure of $R$.
1699 Each iteration of the $r$-loop in \autoref{a:Floyd} updates
1704 accordingly in previous iterations of the outer loop.
1705 In principle, it would be sufficient to use the $R_{pr}$
1706 and $R_{rq}$ computed in the previous iteration of the
1710 in the same iteration of the $r$-loop.
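As a boolean stand-in for this scheme (ours): {\tt reach[p][q]} plays the role of $R_{pq}$, with boolean operations in place of relation union, composition and transitive closure (in particular, the updates involving $R_{rr}^+$ collapse to plain reachability).
\begin{verbatim}
/* Boolean stand-in for the Floyd-Warshall scheme: reach[p][q]
 * plays the role of the relation R_pq between parts of the
 * partition.  The real algorithm manipulates polyhedral relations
 * instead of booleans. */
#include <stdio.h>

#define N 3

int main(void)
{
    int reach[N][N] = {       /* toy initial edges R_pq */
        { 0, 1, 0 },
        { 0, 0, 1 },
        { 1, 0, 0 },
    };
    int p, q, r;

    for (r = 0; r < N; ++r)             /* intermediate part */
        for (p = 0; p < N; ++p)
            for (q = 0; q < N; ++q)
                /* boolean analogue of updating R_pq with the
                 * paths from p to q that pass through r */
                if (reach[p][r] && reach[r][q])
                    reach[p][q] = 1;

    for (p = 0; p < N; ++p)
        for (q = 0; q < N; ++q)
            printf("R_%d%d %s\n", p, q,
                   reach[p][q] ? "non-empty" : "empty");
    return 0;
}
\end{verbatim}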
1712 be removed by coalescing (\autoref{s:coalescing}) the result of the union
1716 includes the partitioning step, but the resulting partition will
1718 The result of the recursive call will either be exact or an
1739 \caption{The relation (solid arrows) on the right of Figure~1 of
1744 Consider the relation on the right of Figure~1 of
1755 Note that the domain of the upward relation overlaps with the range
1756 of the rightward relation and vice versa, but that the domain
1757 of neither relation overlaps with its own range or the domain of
1758 the other relation.
1760 $P_0$ and $P_1$, shown as the white and black dots in \autoref{f:COCOA:1},
1778 In the first iteration, $R_{00}$ remains the same ($\emptyset^+ = \emptyset$).
1781 the dashed arrow in the figure.
1783 changed in the second iteration and it does not have an effect
1785 include $R_{10} \circ R_{01}$, i.e., the dotted arrow in the figure.
1786 The transitive closure of the original relation is then equal to
1793 In some cases it is possible and useful to compute the transitive closure
1800 then we can pick some $R_i$ and compute the transitive closure of $R$ as
1811 of the disjuncts in the argument of the second transitive
1815 the number of disjuncts in the argument of the transitive closure
1818 to relax the constraints of $R_i^+$ to include part of the identity relation,
1819 say on domain $D$. We will use the notation
1822 \shortciteN{Kelly1996closure} use the notation $R_i^?$.
1824 the value $0$ in \eqref{eq:transitive:Q} and by using
1831 and range of the transitive closure are part of ${\cal C}(R_i,D)$,
1832 i.e., the part that results from the paths of positive length ($k \ge 1$),
1833 are equal to the domain and range of $R_i$.
1834 If not, then the incremental approach cannot be applied for
1835 the given choice of $R_i$ and $D$.
1847 they are using the convex hull of $\domain R \cup \range R$
1849 We use the simple hull (\autoref{s:simple hull}) of $\domain R \cup \range R$.
1948 It should be noted that if we want the result of the incremental
1950 if all of the transitive closure operations involved are exact.
1951 If, say, the second transitive closure in \eqref{eq:transitive:incremental}
1952 contains extra elements, then the result does not necessarily contain
1953 the composition of these extra elements with powers of $R_i$.
1957 While the main algorithm of \shortciteN{Kelly1996closure} is
1958 designed to compute an underapproximation of the transitive closure,
1959 the authors mention that they could also compute overapproximations.
1962 Note that the {\tt Omega} library computes underapproximations
1975 where $p$ ranges over the dimensions and $\vec L$, $\vec U$ and
1980 lower and upper bounds on the difference set $\Delta \, R$.
1993 intersected with those of the input relation.
1994 This is a special case of the algorithm in \autoref{s:power}.
1996 In their algorithm for computing lower bounds, the authors
1997 use the above algorithm as a substep on the disjuncts in the relation.
1998 At the end, they say
2003 Presumably, the authors mean that a ``d-form'' approximation
2004 of the whole input relation should be used.
2005 However, the accuracy can be improved by also trying to
2006 apply the incremental technique from the same paper,
2009 allowing the value zero for $k$ in \eqref{eq:omega},
2020 In our implementation we take as $D$ the simple hull
2023 we check the following conditions, as proposed by
2026 the condition