Abstract
Using the recently proposed maximal quadratic-free sets and the well-known monoidal strengthening procedure, we show how to improve intersection cuts for quadratically constrained optimization problems by exploiting integrality requirements. We provide an explicit construction that allows an efficient implementation of the strengthened cuts, along with computational results showing their improvements over the standard intersection cuts. We also show that, in our setting, there is a unique lifting, which implies that our strengthening procedure generates the best possible cut coefficients for the integer variables.





Notes
This assumption discards the application of the monoidal strengthening developed in this paper for quadratic constraints coming from the epigraph reformulation of the objective function.
With a slight abuse of notation, we refer to a non-convex set \(C - M\) as S-free whenever the convex set \(C - m\) is S-free for every \(m \in M\).
A function \(\psi \) is subadditive if \(\psi (x+y) \le \psi (x) + \psi (y)\).
A cut generating function is called minimal if it is not point-wise dominated by another cut generating function.
The function \(t \in \mathbb {R} \mapsto \phi (td)\) is a convex function that is non-positive and 0 at 0. The only such function is 0: indeed, convexity gives \(0 = \phi (0) \le \tfrac{1}{2} \phi (td) + \tfrac{1}{2} \phi (-td) \le 0\) for every t, so \(\phi (td) = 0\) for all t.
This citation deals with a particular set S, but the proof can be easily extended to any conic set S.
Note that \(S^{g}\) is contained in a hyperplane, so \(S^{g}\)-freeness is with respect to the induced topology in \(H\).
As mentioned earlier, this means that there are no purely linear variables; this prevents us from applying our monoidal strengthening to quadratic constraints of the form \(f(s)\le t\), corresponding to an epigraph reformulation of a quadratic objective function f(s).
Due to Remark 2.1, we can work with the projection of \((\hat{x}(s), 1)\) onto \(\langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\).
We note that the statement of Theorem 17.3 in [20] is missing the hypothesis that no inequality in the description of the convex set is of the form \(0\cdot x\le 0\). If this hypothesis does not hold, the result is not true in general. However, in our setting, all the inequalities in the description of \(K_e\) have a positive right-hand side.
References
Balas, E.: Intersection cuts—a new type of cutting planes for integer programming. Oper. Res. 19(1), 19–39 (1971)
Balas, E., Jeroslow, R.G.: Strengthening cuts for mixed integer programs. Eur. J. Oper. Res. 4(4), 224–234 (1980)
Basu, A., Campelo, M., Conforti, M., Cornuéjols, G., Zambelli, G.: Unique lifting of integer variables in minimal inequalities. Math. Program. 141(1–2), 561–576 (2012)
Basu, A., Cornuéjols, G., Zambelli, G.: Convex sets and minimal sublinear functions. J. Convex Anal. 18(2), 427–432 (2011)
Basu, A., Dey, S.S., Paat, J.: Nonunique lifting of integer variables in minimal inequalities. SIAM J. Discrete Math. 33(2), 755–783 (2019)
Bestuzheva, K., Besançon, M., Chen, W.-K., Chmiela, A., Donkiewicz, T., van Doornmalen, J., Eifler, L., Gaul, O., Gamrath, G., Gleixner, A., Gottwald, L., Graczyk, C., Halbig, K., Hoen, A., Hojny, C., van der Hulst, R., Koch, T., Lübbecke, M., Maher, S.J., Matter, F., Mühmer, E., Müller, B., Pfetsch, M.E., Rehfeldt, D., Schlein, S., Schlösser, F., Serrano, F., Shinano, Y., Sofranac, B., Turner, M., Vigerske, S., Wegscheider, F., Wellner, P., Weninger, D., Witzig, J.: The SCIP Optimization Suite 8.0. ZIB-Report 21-41, Zuse Institute Berlin (2021)
Bienstock, D., Chen, C., Muñoz, G.: Outer-product-free sets for polynomial optimization and oracle-based cuts. Math. Program. 183, 1–44 (2020)
Chmiela, A., Muñoz, G., Serrano, F.: On the implementation and strengthening of intersection cuts for QCQPs. Math. Program. 197, 1–38 (2022)
Chmiela, A., Muñoz, G., Serrano, F.: Monoidal strengthening and unique lifting in MIQCPs. In: Integer Programming and Combinatorial Optimization: 24th International Conference, IPCO 2023, Madison, WI, USA, June 21–23, 2023, Proceedings, pp. 87–99. Springer, Cham (2023)
Conforti, M., Cornuéjols, G., Daniilidis, A., Lemaréchal, C., Malick, J.: Cut-generating functions and S-free sets. Math. Oper. Res. 40(2), 276–301 (2015)
Conforti, M., Cornuéjols, G., Zambelli, G.: A geometric perspective on lifting. Oper. Res. 59(3), 569–577 (2011)
Dey, S.S., Wolsey, L.A.: Constrained infinite group relaxations of MIPs. SIAM J. Optim. 20(6), 2890–2912 (2010)
Dey, S.S., Wolsey, L.A.: Two row mixed-integer cuts via lifting. Math. Program. 124(1–2), 143–174 (2010)
Fukasawa, R., Poirrier, L., Xavier, Á.S.: The (not so) trivial lifting in two dimensions. Math. Program. Comput. 11(2), 211–235 (2018)
Furini, F., Traversi, E., Belotti, P., Frangioni, A., Gleixner, A., Gould, N., Liberti, L., Lodi, A., Misener, R., Mittelmann, H., Sahinidis, N.V.: QPLIB: a library of quadratic programming instances. Math. Program. Comput. 11, 237–265 (2018)
Glover, F.: Convexity cuts and cut search. Oper. Res. 21(1), 123–134 (1973)
Gomory, R.: An algorithm for the mixed integer problem. Technical report, RAND CORP SANTA MONICA CA (1960)
MINLP library (2010). http://www.minlplib.org/
Muñoz, G., Serrano, F.: Maximal quadratic-free sets. Math. Program. 192, 1–42 (2021)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Tuy, H.: Concave programming with linear constraints. In: Doklady Akademii Nauk, vol. 159, pp. 32–35. Russian Academy of Sciences (1964)
Zaffaroni, A.: Convex radiant costarshaped sets and the least sublinear gauge. J. Convex Anal. 20(2), 307–328 (2013)
Acknowledgements
The authors would like to thank the two anonymous reviewers for their insightful comments that resulted in significant improvements of this article. We would like to express special gratitude toward Reviewer 1 for suggesting a significantly shorter proof for Theorem 4.1.
Funding
G.M. received financial support from the Chilean National Agency for Research and Development (ANID) through the FONDECYT Grant No. 1231522 and PIA/PUENTE AFB230002.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A short version of this article was accepted for publication at IPCO 2023 [9]. This extended version contains more detailed proofs, additional results, and extended discussions.
Missing proofs
1.1 Proof of Theorem 4.1: the set M is a monoid
As the title suggests, the goal of this section is to prove that the set M as defined in (8) is indeed a monoid, under Assumption 1.
Proof
(Theorem 4.1) Due to the way this proof was originally developed, it will be slightly more convenient to show that \(-M\) is a monoid, which is an equivalent statement.
To show that \(-M\) is a monoid, we take two vectors \((x_i, y_i) \in -M\), \(i=1,2\), and show that their sum is in \(-M\). If one of them is the origin, the result follows trivially, thus we assume \((x_i, y_i) \ne 0\). Let us recall the definition of \(-M\) without the origin:
\(-M_{\ne 0} = \left\{ (x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m \,:\, a^\textsf{T}x + d^\textsf{T}y = 0,\ (a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x \ge 1,\ \Vert x + x_0\Vert \ge \Vert y\Vert \right\} \qquad (\mathrm {A.1})\)
where \(a,\lambda ,d\) satisfy Assumption 1, i.e., \(\Vert d\Vert< -\lambda ^\textsf{T}a < 1\) and \(\Vert \lambda \Vert =\Vert a\Vert = 1\). The linear constraints in (A.1) are satisfied trivially by the sum, hence in the following, we will focus on showing that \(\Vert x_1 + x_2 + x_0 \Vert \ge \Vert y_1 + y_2\Vert \). We prove this by showing that the optimization problem
is non-negative. First, we expand the objective function:
where the last inequality follows from the constraints \(\Vert x_i + x_0\Vert \ge \Vert y_i\Vert \) and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x_i \ge 1 \Leftrightarrow -x_0^\textsf{T}x_i \ge \Vert x_0\Vert ^2\). Hence, showing that the above optimization problem is non-negative is equivalent to proving that

is bounded by \(\frac{1}{2} \Vert x_0\Vert ^2\).
We can always decompose \(y_i = \omega _i d + \rho _i\) where \(\rho _i\) is orthogonal to d. Furthermore, since \(x_i \in \langle \{a, \lambda \} \rangle \), \(x_i\) can be represented in terms of a and \(\lambda \), i.e., \(x_i = \theta _i a + \eta _i \lambda \). In the following, we will use the expansions of \(x_i\) and \(y_i\) to reformulate (P). Using that \(\Vert a\Vert =1\), the hyperplane in \(-M_{\ne 0}\) becomes
Furthermore, noting that \(\lambda ^\textsf{T}x_0 = 0\) and \(a^\textsf{T}x_0 = -1\), we get
and the nonlinear constraint in (A.1) expands to
Finally, replacing \(x_i\) and \(y_i\) in the objective of (P) yields
To summarize, problem (P) can be reformulated as

In what follows, we consider a relaxed version of this problem by removing constraints \(\rho _i^\textsf{T}d = 0\), which leaves us with the problem:

and thus it suffices to show that the value of this optimization problem is \(\ge \frac{1}{2} \Vert x_0\Vert ^2\) to prove that the given set is a monoid.
Let us label the constraints in (\(P_{exp}\)):
Notice that (A.3) implies \(-2 \theta _i \le - 2 \Vert x_0\Vert ^2\). Using this in (A.2) gives us the following weaker constraint
Note that
Using this inequality, we can lower bound the objective and relax (A.2’) using \(\Vert \rho _i\Vert \ge 0\) to obtain the following relaxation of (\(P_{exp}\)):
We note that the main role of the first inequality is to guarantee the square roots in the objective are well defined.
Our first task will be to show that \(\eta _i\ge 0\). Indeed, for \(\Vert d \Vert = 0\), (A.5d) is equivalent to \(\theta _i = - \eta _i \lambda ^\textsf{T}a\). Since \(\theta _i \ge 0\) by (A.5c) and \(- \lambda ^\textsf{T}a\ge 0\) by Assumption 1, \(\eta _i \ge 0\) follows. On the other hand, if \(\Vert d \Vert > 0\), (A.5d) is equivalent to \(\omega _i = - \frac{\theta _i + \eta _i \lambda ^\textsf{T}a}{\Vert d \Vert ^2}\). Replacing \(\omega _i\) in (A.5b) yields
where the last inequality follows from noting that both \(1 - \frac{1}{\Vert d \Vert ^2}\) and \(1 - \frac{\lambda ^\textsf{T}a^2}{\Vert d \Vert ^2}\) are negative under Assumption 1. The factor \(2 (1 - 1/\Vert d \Vert ^2) \theta _i \lambda ^\textsf{T}a\) is positive since (A.5c) and \(\Vert x_0\Vert > 0\) imply that \(\theta _i > 0\), and \(\lambda ^\textsf{T}a< 0\) by assumption. It follows then that \(\eta _i \ge 0\).
Now we analyze the objective function (A.5a). Substituting \(\theta _i = - \Vert d \Vert ^2 \omega _i - \eta _i \lambda ^\textsf{T}a\) and rearranging terms yields
We claim that the latter expression is exactly \(\Vert v_1 \Vert \Vert v_2\Vert - v_1^\textsf{T}v_2 \ge 0\) for appropriately-defined vectors \(v_1,v_2\). Note that this would conclude the proof, since it would show (A.5a) \(\ge \Vert x_0\Vert ^2\).
We define \(v_i{=}(\omega _i \Vert d \Vert \sqrt{1{-}\Vert d \Vert ^2}, \Vert x_0\Vert , \sqrt{\eta _i^2(1{-}\lambda ^\textsf{T}a^2){-}\omega _i^2\Vert d \Vert ^2(1{-}\Vert d \Vert ^2){-}\Vert x_0\Vert ^2}) \in \mathbb {R}^3\). Then,
We remark that the last equality follows since \(\eta _i\ge 0\). From these expressions one can directly verify the claim that (A.5a) \(- \Vert x_0\Vert ^2 = \Vert v_1 \Vert \Vert v_2\Vert - v_1^\textsf{T}v_2\). This concludes the proof. \(\square \)
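The closure argument above can also be sanity-checked numerically. The sketch below is a verification aid rather than part of the proof: it instantiates a hypothetical \(a, \lambda , d\) satisfying Assumption 1 (our own choice of data), samples points of \(-M\) as recalled at the start of this section, and checks that sums of such points remain in \(-M\); the sampling scheme and tolerances are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data satisfying Assumption 1: ||lambda|| = ||a|| = 1, ||d|| < -lambda^T a < 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])            # lambda^T a = -0.8
d = np.array([0.5, 0.0])               # ||d|| = 0.5 < 0.8

la = lam @ a
x0 = (-a + la * lam) / (1.0 - la**2)   # apex component: lambda^T x0 = 0, a^T x0 = -1

def in_neg_M(x, y, tol=1e-9):
    """Membership test for -M without the origin (cf. (A.1))."""
    on_plane = abs(a @ x + d @ y) <= tol
    linear = (a - la * lam) @ x >= 1.0 - tol
    nonlinear = np.linalg.norm(x + x0) >= np.linalg.norm(y) - tol
    return on_plane and linear and nonlinear

def sample_neg_M():
    """Draw a random point of -M: theta >= ||x0||^2 encodes (a - l^T a l)^T x >= 1."""
    theta = x0 @ x0 + rng.uniform(0.0, 2.0)
    y = rng.uniform(-0.1, 0.1, size=2)          # small y keeps ||x + x0|| >= ||y||
    eta = (-(d @ y) - theta) / la               # enforces a^T x + d^T y = 0
    return theta * a + eta * lam, y

for _ in range(1000):
    (x1, y1), (x2, y2) = sample_neg_M(), sample_neg_M()
    assert in_neg_M(x1, y1) and in_neg_M(x2, y2)
    assert in_neg_M(x1 + x2, y1 + y2)           # closure under addition (Theorem 4.1)
print("closure of -M verified on 1000 random pairs")
```

The check exercises exactly the three defining constraints: the two linear ones are preserved trivially under addition, and the nonlinear one is the content of Theorem 4.1.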
1.2 The set \(C_\lambda \cap H- M\) is \(S^{g}\)-free
Recall that \(C_{\lambda }= \{ (x,y) \in \mathbb {R}^{n+m} \,:\,\Vert y\Vert \le \lambda ^\textsf{T}x \}\) and the apex of \(C_{\lambda }\cap H\), in the space \(\langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\), is \((x_0,0)\) with
\(x_0 = \frac{- a + \lambda ^\textsf{T}a\lambda }{1 - \lambda ^\textsf{T}a^2}.\)
Note that \(\Vert x_0\Vert ^2 = \frac{1}{1 - \lambda ^\textsf{T}a^2}\). We begin with the following auxiliary result.
Lemma A.1
If \((x,y) \in C_\lambda \cap H\setminus \{(x_0, 0)\}\), then \(\Vert x_0\Vert ^2 - x_0^\textsf{T}x > 0\). Equivalently, \((a - \lambda ^\textsf{T}a\lambda )^\textsf{T}x > -1\).
Proof
Replacing \(x_0\) by its definition and using that \(1 - \lambda ^\textsf{T}a^2 > 0\) yields
\(\Vert x_0\Vert ^2 - x_0^\textsf{T}x = \Vert x_0\Vert ^2 \left( 1 + (a - \lambda ^\textsf{T}a\lambda )^\textsf{T}x\right) ,\)
so the two forms of the claim are indeed equivalent.
Recall that \(\Vert d \Vert < -\lambda ^\textsf{T}a\) from Assumption 1. Since \((x, y) \in C_{\lambda }\) by assumption, we have \(\Vert y \Vert \le \lambda ^\textsf{T}x\), thus \(- \lambda ^\textsf{T}a\lambda ^\textsf{T}x > \Vert d\Vert \Vert y \Vert \). Also, \((x, y) \in H\), which we rewrite as \(a^\textsf{T}x = -1 - d^\textsf{T}y\). Then,
\((a - \lambda ^\textsf{T}a\lambda )^\textsf{T}x = -1 - d^\textsf{T}y - \lambda ^\textsf{T}a\,\lambda ^\textsf{T}x > -1 - d^\textsf{T}y + \Vert d\Vert \Vert y\Vert \ge -1,\)
where the latter inequality follows from Cauchy-Schwarz. This concludes the proof. \(\square \)
Lemma A.2
The hyperplane \(-x_0^\textsf{T}x = 0\) defines a cross section of \(C_\lambda \cap H\), i.e., every point \((x,y) \in C_\lambda \cap H\) can be written as \((x,y) = (x_0, 0) + \tau ((\bar{x},\bar{y}) - (x_0,0))\) where \((\bar{x},\bar{y}) \in C_\lambda \cap H\) with \(-x_0^\textsf{T}\bar{x} = 0\) and \(\tau \ge 0\).
Proof
Let \((x,y) \in C_{\lambda }\cap H\). If \((x,y) = (x_0,0)\) the result clearly holds. For the other cases, we define \(f(t) := (x_0, 0) + t ((x,y) - (x_0,0))\). We begin by noting that \(f(t)\in C_{\lambda }\cap H\) for \(t\ge 0\): indeed,
\(\lambda ^\textsf{T}\left( x_0 + t(x - x_0)\right) = t \lambda ^\textsf{T}x \ge t \Vert y\Vert = \Vert t y\Vert \quad \text {and}\quad a^\textsf{T}\left( x_0 + t(x - x_0)\right) + d^\textsf{T}(t y) = -1 + t\left( a^\textsf{T}x + d^\textsf{T}y + 1\right) = -1,\)
where we use that \((x,y)\in C_{\lambda }\) and \(\lambda ^\textsf{T}x_0 = 0\). Thus, if we are able to show that there exists a \(t^* > 0\) such that \(-(x_0,0)^\textsf{T} f(t^*) = 0\), we are done, since in that case
and we can define \(\tau = 1/t^*\) and \((\bar{x},\bar{y}) = f(t^*)\). Note that
\(-x_0^\textsf{T}\left( x_0 + t(x - x_0)\right) = -\Vert x_0\Vert ^2 + t \left( \Vert x_0\Vert ^2 - x_0^\textsf{T}x\right) ,\)
therefore such \(t^*>0\) exists if and only if \(\Vert x_0\Vert ^2 - x_0^\textsf{T}x > 0\). However, the last inequality holds by Lemma A.1. \(\square \)
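As a numerical illustration of Lemma A.2 (a sanity check under a hypothetical choice of \(a, \lambda , d\) and our own sampling scheme, not part of the proof), one can sample points of \(C_{\lambda }\cap H\), compute \(t^* = \Vert x_0\Vert ^2 / (\Vert x_0\Vert ^2 - x_0^\textsf{T}x)\), and verify that \(f(t^*)\) lies in \(C_{\lambda }\cap H\) and on the cross section \(-x_0^\textsf{T}x = 0\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data satisfying Assumption 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])
d = np.array([0.5, 0.0])
la = lam @ a
x0 = (-a + la * lam) / (1.0 - la**2)
G = np.array([[1.0, la], [la, 1.0]])    # Gram matrix of (a, lambda)

for _ in range(1000):
    # Sample (x, y) in C_lambda ∩ H by prescribing a^T x (from H) and lambda^T x >= ||y||.
    y = rng.uniform(-1.0, 1.0, size=2)
    rhs = np.array([-1.0 - d @ y,
                    np.linalg.norm(y) + rng.uniform(0.1, 2.0)])
    p, q = np.linalg.solve(G, rhs)
    x = p * a + q * lam

    t_star = (x0 @ x0) / (x0 @ x0 - x0 @ x)   # denominator positive by Lemma A.1
    bx, by = x0 + t_star * (x - x0), t_star * y
    assert t_star > 0.0
    assert abs(x0 @ bx) < 1e-9                          # on the cross section
    assert lam @ bx >= np.linalg.norm(by) - 1e-9        # still in C_lambda
    assert abs(a @ bx + d @ by + 1.0) < 1e-9            # still in H
print("cross-section property verified on 1000 samples")
```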
We are now ready to state the proof of Theorem 4.2, i.e., that \(C_\lambda \cap H- M\) is \(S^{g}\)-free.
Proof
(Theorem 4.2) To show that \(C_{\lambda }\cap H- M\) is \(S^{g}\)-free, we show that the translation \(C_{\lambda }\cap H+ m\) is \(S^{g}\)-free for all \(m \in - M\). Since this is clearly true for \(m = 0\), we assume that \(m \ne 0\).
Let us recall the definition of \(-M\) without the origin:
\(-M_{\ne 0} = \left\{ (x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m \,:\, a^\textsf{T}x + d^\textsf{T}y = 0,\ (a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x \ge 1,\ \Vert x + x_0\Vert \ge \Vert y\Vert \right\} .\)
Let \((x,y) \in C_{\lambda }\cap H\) and \(m = (m_x, m_y) \in -M_{\ne 0}\), then we have to show that \((x,y) + m\) is not in the interior of \(S^{g}= \{ (x,y)\in \mathbb {R}^{n+m}\,:\, \Vert x\Vert \le \Vert y\Vert ,\, a^\textsf{T}x + d^\textsf{T}y = -1 \}\). As noted in Remark 2.1, we can restrict the analysis to \((x, y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\). Since the equality constraint is satisfied by \((x,y) + m\), we have to show that \(\Vert x + m_x\Vert \ge \Vert y + m_y\Vert \). Therefore, we would like to show that the problem

is non-negative.
By Lemma A.2, we know that for every point \((x,y) \in C_{\lambda }\cap H\), there exists \((\bar{x},\bar{y})\) with \(-x_0^\textsf{T}\bar{x}= 0\) and \(\tau \ge 0\) such that (x, y) can be written as \((x,y) = (x_0, 0) + \tau ((\bar{x}, \bar{y}) - (x_0, 0))\). Using this decomposition of (x, y) in (\(P_{\text {S-free}}\)), it follows that we can transform (\(P_{\text {S-free}}\)) to the equivalent problem:
Now, let us consider the following change of variable: \(\hat{x} = \bar{x}- x_0\). For the objective function, we then get
Moreover, noting that \((\bar{x}, \bar{y}) \in C_{\lambda }\) implies \(\Vert \hat{x} + x_0\Vert \ge \Vert \bar{y}\Vert \), the constraint \((\bar{x}, \bar{y}) \in C_{\lambda }\cap H\) is equivalent to \((\hat{x}, \bar{y}) \in \hat{M}\) where
Note that since \(x_0 = \frac{- a + \lambda ^\textsf{T}a\lambda }{1 - \lambda ^\textsf{T}a^2}\), we have \(-x_0^\textsf{T}x = \Vert x_0\Vert ^2 \,(a - \lambda ^\textsf{T}a\lambda )^\textsf{T}x\), and hence \(-x_0^\textsf{T}x = \Vert x_0\Vert ^2\) if and only if \((a - \lambda ^\textsf{T}a\lambda )^\textsf{T}x = 1\).
Therefore, it follows that \(\hat{M}\cap \{(x,y)\,:\, -x_0^\textsf{T}x = \Vert x_0 \Vert ^2\}\cap \left( \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\right) \) is fully contained in the monoid \(-M_{\ne 0}\). This implies that the problem (\(P_{\text {S-free}}\)) is equivalent to

Expanding the objective function gives us
Any feasible point for (\(\hat{P}_{\text {S-free}}\)) must satisfy \(\Vert x_0 + m_x\Vert ^2 - \Vert m_y \Vert ^2 \ge 0\) since \(m \in -M_{\ne 0}\). Furthermore, as the constraint \(\lambda ^\textsf{T}\hat{x} \ge \Vert \bar{y}\Vert \) implies \(\Vert \hat{x} \Vert \ge \Vert \bar{y}\Vert \) by Cauchy-Schwarz, it follows that the term \(\Vert \hat{x} \Vert ^2 - \Vert \bar{y}\Vert ^2\) is non-negative as well. Thus, we can lower bound the objective function of (\(\hat{P}_{\text {S-free}}\)) as follows:
where the equality follows since \(-x_0^\textsf{T}\hat{x} = \Vert x_0 \Vert ^2\) in (\(\hat{P}_{\text {S-free}}\)). Recall that our goal is showing (\(P_{\text {S-free}}\)) = (\(\hat{P}_{\text {S-free}}\)) \(\ge 0\). Using the above lower bound of the objective function, it follows that (\(P_{\text {S-free}}\)) \(\ge 0\) if the relaxation
is non-negative. In the proof of Theorem 4.1, we show that this is the case. It follows that the set \(C_{\lambda }\cap H- M\) is \(S^{g}\)-free. \(\square \)
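The \(S^{g}\)-freeness statement also lends itself to a numerical sanity check (our own illustration with hypothetical data satisfying Assumption 1, not part of the proof): sample \((x,y) \in C_{\lambda }\cap H\) and \(m \in -M\), and verify that the translated point satisfies \(\Vert x + m_x\Vert \ge \Vert y + m_y\Vert \) while staying on the hyperplane.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data satisfying Assumption 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])
d = np.array([0.5, 0.0])
la = lam @ a
x0 = (-a + la * lam) / (1.0 - la**2)
G = np.array([[1.0, la], [la, 1.0]])

def sample_C_H():
    """Random (x, y) with ||y|| <= lambda^T x and a^T x + d^T y = -1."""
    y = rng.uniform(-1.0, 1.0, size=2)
    rhs = np.array([-1.0 - d @ y, np.linalg.norm(y) + rng.uniform(0.0, 2.0)])
    p, q = np.linalg.solve(G, rhs)
    return p * a + q * lam, y

def sample_neg_M():
    """Random (m_x, m_y) in -M without the origin."""
    theta = x0 @ x0 + rng.uniform(0.0, 2.0)
    my = rng.uniform(-0.1, 0.1, size=2)
    eta = (-(d @ my) - theta) / la
    return theta * a + eta * lam, my

for _ in range(1000):
    (x, y), (mx, my) = sample_C_H(), sample_neg_M()
    # The translated point stays on the hyperplane and outside the interior of S^g.
    assert abs(a @ (x + mx) + d @ (y + my) + 1.0) < 1e-9
    assert np.linalg.norm(x + mx) >= np.linalg.norm(y + my) - 1e-9
print("S^g-freeness verified on 1000 random translations")
```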
1.3 Minimal representation of \(C - f\)
Here we prove Proposition 5.1, which we restate for convenience of the reader.
Proposition 5.1
The minimal representation of \(C - f\) is
In [4, 10], the authors characterize the minimal representation of a full-dimensional convex set with the origin in its interior. In [22], the author characterizes minimal representations of arbitrary convex sets. In our setting, we can only apply the characterization of [22]. However, this requires the computation of polars, reverse polars, and co-kernels of \(C - f\); see [22] for the definitions. Instead of following this path, we present a result that builds on [4] and is enough to prove Proposition 5.1.
Lemma A.3
Let I be an arbitrary index set and let K be a convex set of the form
\(K = \{x \in \mathbb {R}^n \,:\,a_i^\textsf{T}x \le 1 \ \forall i \in I,\ Ax = 0\}\)
such that for each \(i \in I\) there exists an \(x^i \in K\) with \(a_i^\textsf{T}x^i = 1\), and \(\{a_i \,:\,i \in I\}\) is compact. Then,
\(\phi (x) = {\left\{ \begin{array}{ll} \sup _{i \in I} a_i^\textsf{T}x, &{} \text {if } x \in \ker (A)\\ +\infty , &{} \text {otherwise} \end{array}\right. }\)
is the minimal representation of K.
Proof
It is clear that \(K = \{x \in \mathbb {R}^n \,:\,\phi (x) \le 1\}\) and that \(\phi \) is sublinear, so \(\phi \) is a representation of K. Let \(\rho \) be any representation of K. It remains to show that \(\phi (x) \le \rho (x)\) for all \(x \in \mathbb {R}^n\).
Let k be the dimension of \(\ker (A)\), let B be a matrix whose columns form a basis of \(\ker (A)\), and consider the embedding of K, \(K_e = \{ z \in \mathbb {R}^k \,:\,a_i^\textsf{T}B z \le 1 \ \forall i \in I \}\). Note that \(K_e = \{z \in \mathbb {R}^k \,:\,\rho (Bz) \le 1\}\) since \(\rho \) is a representation of K, i.e., \(\rho \circ B\) is a representation of \(K_e\).
We proceed to compute the minimal representation of \(K_e\) using [4, Theorem 1]. This theorem says that the minimal representation of \(K_e\) is the support function of \(\hat{K_e} = \{ y \in K_e^* \,:\,z^\textsf{T}y = 1 \text { for some } z \in K_e\}\), where \(K_e^* = \{ y \in \mathbb {R}^k \,:\,z^\textsf{T}y \le 1 \text { for all } z \in K_e\}\) is the polar of \(K_e\).
Given that \(\{a_i \,:\,i \in I\}\) is compact, the set \(\{(B^\textsf{T}a_i,1) \,:\,i \in I\}\) is compact. Also note that \(K_e\) is full-dimensional (\(0\in {{\,\textrm{int}\,}}(K_e)\)). By [20, Theorem 17.3], we conclude that if \(\alpha ^\textsf{T}z \le \beta \) is valid for \(K_e\) and \(\alpha \ne 0\), then \(\alpha = \sum _{i \in J} h_i B^\textsf{T}a_i\) and \(\beta \ge \sum _{i \in J} h_i\), where \(J \subseteq I\) is finite.
Since \(0 \le 1\) is also valid for \(K_e\), we conclude that
Note that if \(y \in K_e^*\) is such that \(y = \sum _{i \in J} h_i B^\textsf{T}a_i + h_0 0\) with \(h_0 > 0\), then \(y \notin \hat{K_e}\). Therefore, \(\hat{K_e} \subseteq {{\,\textrm{conv}\,}}(\{B^\textsf{T}a_i\}_{i \in I})\). Furthermore, \(B^\textsf{T}a_i \in \hat{K_e}\) since there exists a \(z^i\) such that \(Bz^i = x^i\) and therefore \(a_i^\textsf{T}B z^i = 1\) by hypothesis. Thus, \(\{B^\textsf{T}a_i\}_{i \in I} \subseteq \hat{K_e} \subseteq {{\,\textrm{conv}\,}}(\{B^\textsf{T}a_i\}_{i \in I})\). Since the support function of a set S is equal to the support function of \({{\,\textrm{conv}\,}}(S)\), we conclude from the above that the minimal representation of \(K_e\) is
Now we can show that \(\phi (x) \le \rho (x)\) for all \(x \in \mathbb {R}^n\), where \(\rho \) is any representation of K. Let \(x_0 \in \mathbb {R}^n\). If \(x_0 \notin \ker (A)\), then \(\phi (x_0) = \rho (x_0) = + \infty \). So, let us assume that \(x_0 \in \ker (A)\) and let \(z_0\) be such that \(Bz_0 = x_0\). As we mentioned at the beginning of the proof, \(\rho \circ B\) is a representation of \(K_e\). Since \(\sigma \) is the minimal representation, \(\sigma (z_0) \le \rho (Bz_0)\). The above inequality is equivalent to \(\phi (x_0) \le \rho (x_0)\), which is what we wanted to prove. \(\square \)
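To illustrate Lemma A.3 on a concrete toy instance (our own example, not from the paper), take \(K = \{x \in \mathbb {R}^2 \,:\,x_1 \le 1\}\), i.e., \(I = \{1\}\), \(a_1 = (1,0)\), and no equality constraints, so \(\ker (A) = \mathbb {R}^2\). The inequality is tight at \(x^1 = a_1 \in K\), and the lemma gives the minimal representation \(\phi (x) = a_1^\textsf{T}x\). The sublinear function \(\rho (x) = \max (a_1^\textsf{T}x, 0)\) also represents K but is pointwise larger:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy instance of Lemma A.3: K = {x in R^2 : x_1 <= 1}, a single inequality
# that is tight at x^1 = (1, 0), and no equality constraints (ker(A) = R^2).
a1 = np.array([1.0, 0.0])

def phi(x):
    """Minimal representation: sup over I = {1} of a_i^T x."""
    return a1 @ x

def rho(x):
    """Another sublinear representation of the same K, dominating phi."""
    return max(a1 @ x, 0.0)

for _ in range(1000):
    x = rng.uniform(-3.0, 3.0, size=2)
    assert (phi(x) <= 1.0) == (x[0] <= 1.0)   # {phi <= 1} = K
    assert (rho(x) <= 1.0) == (x[0] <= 1.0)   # {rho <= 1} = K as well
    assert phi(x) <= rho(x)                   # phi is pointwise minimal
assert phi(np.array([-2.0, 0.0])) < rho(np.array([-2.0, 0.0]))  # strictly smaller somewhere
print("phi represents K and is dominated by the non-minimal rho")
```

Note that minimality is only interesting for unbounded K: for a compact set with the origin in its interior, the representation is unique.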
Proof
(Proposition 5.1) We have that \(C - f = \{ (x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m \,:\,\Vert y + f_y\Vert \le \lambda ^\textsf{T}( x + f_x), a^\textsf{T}x + d^\textsf{T}y = 0 \}\) or, equivalently, \(C - f = \{ (x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m \,:\,\beta ^\textsf{T}(y + f_y) \le \lambda ^\textsf{T}(x + f_x) \ \forall \beta \in D^m, a^\textsf{T}x + d^\textsf{T}y = 0 \}\). Note that
\(\beta ^\textsf{T}(y + f_y) \le \lambda ^\textsf{T}(x + f_x) \Leftrightarrow \frac{\beta ^\textsf{T}y - \lambda ^\textsf{T}x}{\lambda ^\textsf{T}f_x - \beta ^\textsf{T}f_y} \le 1.\)
The equivalence is correct given that \(\lambda ^\textsf{T}f_x - \beta ^\textsf{T}f_y > 0\) since f is in the relative interior of C. Therefore,
We are going to show that \(\nu - f \in C - f\), where \(\nu \) is defined by (7), and that every inequality is achieved with equality at that point. Since \(\nu \in C\), we have \(\nu - f \in C - f\). Evaluating the inequality defined by \(\beta \in D^m\) at \(\nu -f\) yields \(\frac{\beta ^\textsf{T}(-f_y) - \lambda ^\textsf{T}(\nu - f_x)}{\lambda ^\textsf{T}f_x - \beta ^\textsf{T}f_y} = 1\), where the equality follows from \(\lambda ^\textsf{T}\nu =0\). It remains to prove that \(\{ \frac{\beta ^\textsf{T}y - \lambda ^\textsf{T}x}{\lambda ^\textsf{T}f_x - \beta ^\textsf{T}f_y} \,:\,\beta \in D^m\}\) is compact in order to be able to apply Lemma A.3. It is clear that the set is closed. The boundedness of the set follows from the fact that the denominator is bounded away from 0. This last claim can be verified as follows: \(\inf _{\beta \in D^m} \lambda ^\textsf{T}f_x - \beta ^\textsf{T}f_y = \lambda ^\textsf{T}f_x - \Vert f_y\Vert \) and, by construction, \(\lambda = \frac{f_x}{\Vert f_x\Vert }\). Thus, \(\lambda ^\textsf{T}f_x - \Vert f_y\Vert = \Vert f_x\Vert - \Vert f_y\Vert > 0\) since \(f \notin S^g\).
Applying Lemma A.3 finally proves the result. \(\square \)
1.4 Proof of Proposition 5.2
Proposition 5.2
Let \(\phi \) be the minimal representation of \(C-f\) given in (9). Then \( \{ z \,:\,\phi (z) \le \tau \} = C - \nu - \tau (f - \nu ), \) where \(\nu \) is the apex of C defined in (7).
Proof
We first show that \(\{ z \,:\,\phi (z) \le \tau \} \subseteq C - \nu - \tau (f - \nu )\). Let z be such that \(\phi (z) \le \tau \). We have to show that there exists a \(\bar{z} \in C\) with \(z = \bar{z} - \nu - \tau (f - \nu )\). Choose \(\bar{z} = z + \nu + \tau (f - \nu )\). It remains to prove that \(\bar{z} \in C\), that is, \(\phi (\bar{z} - f) \le 1\):
where the first inequality follows from the sublinearity of \(\phi \) as shown in Proposition 5.1 and the second inequality follows from the hypothesis \(\phi (z) \le \tau \). If \(\phi (\alpha (\nu - f)) = \alpha \phi (\nu - f)\) for arbitrary \(\alpha \in \mathbb {R}\) (not only \(\alpha > 0\)), then \(\phi (\bar{z} - f) \le 1\) follows immediately since the apex \(\nu \) is on the boundary of C and therefore \(\phi (\nu - f) = 1\). Applying the formula for \(\phi \) in Proposition 5.1, since \(a^\textsf{T}(x_0 - f_x) + d^\textsf{T}(-f_y) = 0\), we have
The latter equation follows from \(\lambda ^\textsf{T}x_0 = 0\) and \( \phi (\nu - f) = 1\). Thus, \(\phi (\bar{z} - f) \le 1\) and therefore \(\bar{z} \in C\).
Now, we show the other inclusion, i.e., \(C - \nu - \tau (f - \nu ) \subseteq \{ z \,:\,\phi (z) \le \tau \}\). Let \(z \in C - \nu - \tau (f - \nu )\), then there exists a \(\bar{z} \in C\) such that \(z = \bar{z} - \nu - \tau (f - \nu )\). Since
it holds that \(z \in \{ z \,:\,\phi (z) \le \tau \}\). This proves the proposition. \(\square \)
1.5 Proof of \(C - M = L \cup C\)
Proposition A.1
It holds that \(C - M = L \cup C\).
Proof
Recall that
First, we show that \(C - M \subseteq L \cup C\). Let \((\bar{x},\bar{y}) \in C - M\). Then, there exists \((x, y) \in C\) and \(m \in M\) such that \((\bar{x},\bar{y}) = (x,y) - m\). We need to show that \((\bar{x},\bar{y}) \in L \cup C\).
Observe that if \(m = 0\), then \((\bar{x},\bar{y}) \in C\) and we are done. Therefore, we assume \(m \ne 0\).
We now show that \((\bar{x},\bar{y}) \in L\). It trivially holds that \(a^\textsf{T}\bar{x} + d^\textsf{T}\bar{y} = -1\). Since \(C - M\) is \(S^{g}\)-free by Theorem 4.2, we have that \(\Vert \bar{x} \Vert \ge \Vert \bar{y}\Vert \). Lastly, by definition of M, it holds that
By Lemma A.1, \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x \ge -1\) for all \(x \in C\). This shows that the constraint \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}\bar{x} \ge 0\) is satisfied and therefore \((\bar{x},\bar{y}) \in L \subseteq L \cup C\).
Now, we show that \(L \cup C \subseteq C - M\). Let \((\bar{x},\bar{y}) \in L\), since the case \((\bar{x},\bar{y}) \in C\) is trivial. To show that \((\bar{x},\bar{y}) \in C - M\), we have to show that there exist \((x, y) \in C\) and \(m \in -M\) with \((\bar{x},\bar{y}) = (x, y) + m\). We choose \(m := (\bar{x},\bar{y}) - \nu \in L - \nu = -M\) and \((x,y) := \nu \in C\). This concludes the proof. \(\square \)
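The identity \(-M = L - \nu \) used in the last step can be checked numerically. The sketch below (our own sanity check with hypothetical data satisfying Assumption 1, not part of the proof) samples points \((\bar{x},\bar{y}) \in L\) and verifies that \((\bar{x},\bar{y}) - \nu \) satisfies the defining constraints of \(-M\):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data satisfying Assumption 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])
d = np.array([0.5, 0.0])
la = lam @ a
x0 = (-a + la * lam) / (1.0 - la**2)    # nu = (x0, 0)

def in_neg_M(x, y, tol=1e-9):
    """a^T x + d^T y = 0, (a - l^T a l)^T x >= 1, ||x + x0|| >= ||y||."""
    return (abs(a @ x + d @ y) <= tol
            and (a - la * lam) @ x >= 1.0 - tol
            and np.linalg.norm(x + x0) >= np.linalg.norm(y) - tol)

for _ in range(1000):
    # Sample (x, y) in L: a^T x + d^T y = -1, ||x|| >= ||y||, (a - l^T a l)^T x >= 0.
    y = rng.uniform(-0.1, 0.1, size=2)
    p = rng.uniform(0.0, 2.0)                    # (a - l^T a l)^T x = p*(1 - (l^T a)^2) >= 0
    q = (-1.0 - d @ y - p) / la                  # enforce a^T x + d^T y = -1
    x = p * a + q * lam
    assert np.linalg.norm(x) >= np.linalg.norm(y)
    assert in_neg_M(x - x0, y)                   # (x, y) - nu lies in -M
print("L - nu subset of -M verified on 1000 samples")
```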
1.6 Proof of Proposition 5.3
The following lemma tells us, intuitively, that the “segment” joining the exposing points, excluding the exposing points themselves, is in the relative interior of C. In other words, the part of the boundary of L that does not intersect \(S^{g}\) is in the relative interior of C. This result is used to prove Lemma 6.1 and Proposition 5.3.
Lemma A.4
Let \((x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\). If \(a^\textsf{T}x + d^\textsf{T}y = -1\), \(\Vert x\Vert > \Vert y\Vert \), and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x = 0\), then \((x,y) \in {{\,\textrm{ri}\,}}(C)\).
Proof
We show that \((x,y) \in {{\,\textrm{ri}\,}}(C)\) by proving that \(\Vert y \Vert < \lambda ^\textsf{T}x\). Given that \((x, y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m\), (x, y) can be rewritten as \(x = \bar{\theta } a + \theta \lambda \) and \(y = \omega d + \rho \) with \(\bar{\theta }, \theta , \omega \in \mathbb {R}\) and \(\rho ^\textsf{T}d = 0\). The condition \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x = 0\) then implies that \(\bar{\theta } = 0\). We thus need to prove that \(\Vert y \Vert < \theta \). Notice that the assumption \(\Vert x \Vert > \Vert y \Vert \) is equivalent to \(|\theta | > \Vert y \Vert \), so it suffices to show that \(\theta \ge 0\).
Expanding x and y in the constraint \(a^\textsf{T}x + d^\textsf{T}y = -1\), yields \(\lambda ^\textsf{T}a\theta + \omega \Vert d \Vert ^2 = -1\). Solving for \(\theta \) yields \(\theta = -\frac{1 + \omega \Vert d \Vert ^2}{\lambda ^\textsf{T}a}\). Note that \(\theta \) can thus be described as an increasing, linear function of \(\omega \), i.e., \(\theta (\omega ) = -\frac{1 + \omega \Vert d \Vert ^2}{\lambda ^\textsf{T}a}\). Showing that \(\theta (\omega ) \ge 0\) for the smallest possible \(\omega \) is then enough to prove \(\theta \ge 0\).
Squaring the constraint \(|\theta | > \Vert y \Vert \) and expanding the definitions we get the following constraint on \(\omega \):
\(\Vert d \Vert ^2 \left( \Vert d \Vert ^2 - \lambda ^\textsf{T}a^2\right) \omega ^2 + 2 \Vert d \Vert ^2 \omega + 1 - \lambda ^\textsf{T}a^2 \Vert \rho \Vert ^2 > 0.\)
Note that \(\Vert d \Vert ^2 (\Vert d \Vert ^2 - \lambda ^\textsf{T}a^2) < 0\), so the left-hand side is a concave quadratic function. The smallest value \(\omega \) can attain is the smallest root \(\omega _0\) of the quadratic function, that is,
We have
and therefore
The last equivalence is clearly satisfied, concluding the proof. \(\square \)
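Lemma A.4 can also be sanity-checked numerically (our own illustration with hypothetical data satisfying Assumption 1): the condition \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x = 0\) forces \(x = \theta \lambda \), the hyperplane then determines \(\theta \) from \(\omega \), and for every sampled point with \(\Vert x\Vert > \Vert y\Vert \) we verify \(\Vert y\Vert < \lambda ^\textsf{T}x\).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data satisfying Assumption 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])
d = np.array([0.5, 0.0])
la = lam @ a

checked = 0
while checked < 1000:
    # (a - l^T a l)^T x = 0 forces x = theta * lambda; the hyperplane
    # a^T x + d^T y = -1 then determines theta from omega.
    omega = rng.uniform(-5.0, 5.0)
    rho = np.array([0.0, rng.uniform(-2.0, 2.0)])    # rho orthogonal to d
    y = omega * d + rho
    theta = -(1.0 + omega * (d @ d)) / la
    x = theta * lam
    if np.linalg.norm(x) <= np.linalg.norm(y):       # Lemma A.4 assumes ||x|| > ||y||
        continue
    assert abs(a @ x + d @ y + 1.0) < 1e-9
    assert np.linalg.norm(y) < lam @ x               # in ri(C): ||y|| < lambda^T x
    checked += 1
print("Lemma A.4 verified on 1000 sampled points")
```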
Proposition 5.3
Let \(\bar{\tau }\) be the largest root of the univariate quadratic equation \(\Vert l_x(\tau ) \Vert ^2 = \Vert l_y(\tau )\Vert ^2\). If the root exists and \(l(\bar{\tau }) \in L\), then \(\psi _{M}(r) = \bar{\tau }\). Otherwise, \(\psi _{M}(r) = \phi (r)\).
Proof
We recall that in the discussion preceding the proposition statement we established that \(\psi _{M}(r) = \min \{\inf \{\tau \,:\,l(\tau ) \in L\}, \phi (r)\}\). In order to prove the proposition, we need to see how \(\inf \{\tau \,:\,l(\tau ) \in L\}\) compares with \(\phi (r)\).
First, notice that if the last infimum exists, then the corresponding point lies in the relative boundary of L. From the definition of L (see (6)) it is easy to see that
\(L \subseteq R := \left\{ (x,y) \in \langle \{ \lambda , a \} \rangle \times \mathbb {R}^m \,:\,(a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x \ge 0 \right\} .\)
Thus, \(\inf \{\tau \,:\,l(\tau ) \in L\} \ge \inf \{\tau \,:\,l(\tau ) \in R\}\) and it is enough to prove that \(\inf \{\tau \,:\,l(\tau ) \in R\}\) exists. We have that
\((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}l_x(\tau ) = (a - \lambda ^\textsf{T}a \lambda )^\textsf{T}\left( r_x + x_0 + \tau (f_x - x_0)\right) = (a - \lambda ^\textsf{T}a \lambda )^\textsf{T}r_x - 1 + \tau ,\)
where the last equality follows from \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}f_x = 0\) and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x_0 = -1\). This shows that there exists \(\tau _0\) such that if \(\tau \le \tau _0\) then \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}l_x(\tau ) < 0\). That is, \(l(\tau ) \notin R\) for all \(\tau \le \tau _0\), thus \(\inf \{\tau \,:\,l(\tau ) \in R\}\) exists.
Let \(\tau _1 = \inf \{\tau \,:\,l(\tau ) \in L\}\). There are two cases, depending on which part of the relative boundary of L the point \(l(\tau _1)\) is located in.
Case 1: \(\Vert l_x(\tau _1)\Vert = \Vert l_y(\tau _1)\Vert \) and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}l_x(\tau _1) \ge 0\).
We proceed to show that \(\tau _1\) corresponds to the largest root of \(\Vert l_x(\tau )\Vert ^2 = \Vert l_y(\tau )\Vert ^2\) and that \(\psi _{M}(r) = \tau _1\). This proves the first statement of the proposition.
We start by showing that \(\tau _1 \le \phi (r)\). Assume, by contradiction, that \(\phi (r) < \tau _1\). Then, \(\tau _2 = \phi (r)\) is such that \(l(\tau _2) \in C\). However, since \(f - \nu \in {{\,\textrm{ri}\,}}({{\,\textrm{rec}\,}}(C))\), we have that for every \(\tau > \tau _2\), \(l(\tau ) \in {{\,\textrm{ri}\,}}(C)\). To see this, recall that \(l(\tau ) = r + \nu + \tau (f - \nu )\). If \(\tau > \tau _2\), then \(l(\tau ) = l(\tau _2) + (\tau - \tau _2)(f - \nu ) \in C + {{\,\textrm{ri}\,}}({{\,\textrm{rec}\,}}(C)) \subseteq {{\,\textrm{ri}\,}}(C)\). In particular, \(l(\tau _1) \in {{\,\textrm{ri}\,}}(C)\), which contradicts the fact that \(l(\tau _1) \in S^{g}\) and C is \(S^{g}\)-free.
We now show that \(\tau _1\) is the largest root of the quadratic equation \(\Vert l_x(\tau )\Vert ^2 = \Vert l_y(\tau )\Vert ^2\). Since \(l(\tau _1) \in L\) and \(L \subseteq C - M\) (see Proposition A.1), we have that \(l(\tau _1) \in C - M\). Similar to the argument above, \(l(\tau _1 + \epsilon ) = l(\tau _1) + \epsilon (f-\nu ) \in C - M + {{\,\textrm{ri}\,}}({{\,\textrm{rec}\,}}(C))\), for every \(\epsilon > 0\). Observe that \(C - M + {{\,\textrm{ri}\,}}({{\,\textrm{rec}\,}}(C)) \subseteq {{\,\textrm{ri}\,}}(C - M)\). To see this, let \((x,y) \in C, m \in M, (z,w) \in {{\,\textrm{ri}\,}}({{\,\textrm{rec}\,}}(C))\) and let \(\bar{\epsilon } > 0\) small enough such that \((z,w) + B_{\bar{\epsilon }}((0,0)) \subseteq {{\,\textrm{rec}\,}}(C)\). Then, \((x,y) - m + (z,w) + B_{\bar{\epsilon }}((0,0)) \subseteq C - M\), i.e., \((x,y) - m + (z,w) \in {{\,\textrm{ri}\,}}(C-M)\).
Therefore, \(l(\tau _1 + \epsilon ) \in {{\,\textrm{ri}\,}}(C - M)\), for every \(\epsilon > 0\). Thus, the equation (on \(\epsilon \)) \(\Vert l_x(\tau _1 + \epsilon )\Vert ^2 = \Vert l_y(\tau _1 + \epsilon )\Vert ^2\) cannot have a positive root, since otherwise \(l(\tau _1 + \epsilon ) \in S^{g}\) contradicting the fact that \(C - M\) is \(S^{g}\)-free as shown in Theorem 4.2. This implies that there is no root larger than \(\tau _1\), i.e., \(\tau _1\) is the largest root.
Case 2: \(\Vert l_x(\tau _1)\Vert > \Vert l_y(\tau _1)\Vert \) and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}l_x(\tau _1) = 0\).
We proceed to show that \(\tau _1 > \phi (r)\), which shows the second claim and proves the proposition. It is enough to show that \(l(\tau _1) \in {{\,\textrm{ri}\,}}(C)\), since then \(\phi (r) = \inf \{\tau \,:\,l(\tau ) \in C\} < \tau _1\). However, this follows immediately from the hypothesis of the current case and Lemma A.4. \(\square \)
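Proposition 5.3 suggests a direct way to compute the strengthened coefficient \(\psi _{M}(r)\): solve the univariate quadratic \(\Vert l_x(\tau )\Vert ^2 = \Vert l_y(\tau )\Vert ^2\) along \(l(\tau ) = r + \nu + \tau (f - \nu )\) and keep the largest root if it lands in L. The sketch below is our own illustration with hypothetical data; it assumes \(\nu = (x_0, 0)\), that \(\lambda = f_x / \Vert f_x\Vert \) as in the construction, and that the fallback value \(\phi (r)\) is supplied by the caller.

```python
import numpy as np

# Hypothetical data satisfying Assumption 1.
a = np.array([1.0, 0.0])
lam = np.array([-0.8, 0.6])
d = np.array([0.5, 0.0])
la = lam @ a
x0 = (-a + la * lam) / (1.0 - la**2)       # nu = (x0, 0)

def psi_M(r_x, r_y, f_x, f_y, phi_r):
    """Strengthened coefficient per Proposition 5.3 (phi_r is the fallback phi(r))."""
    lx0, lx1 = r_x + x0, f_x - x0          # l_x(tau) = lx0 + tau * lx1
    ly0, ly1 = r_y, f_y                    # l_y(tau) = ly0 + tau * ly1
    A = lx1 @ lx1 - ly1 @ ly1              # ||l_x||^2 - ||l_y||^2 = A tau^2 + B tau + C
    B = 2.0 * (lx0 @ lx1 - ly0 @ ly1)
    C = lx0 @ lx0 - ly0 @ ly0
    disc = B * B - 4.0 * A * C
    if abs(A) > 1e-12 and disc >= 0.0:
        tau = max((-B - np.sqrt(disc)) / (2.0 * A), (-B + np.sqrt(disc)) / (2.0 * A))
        if (a - la * lam) @ (lx0 + tau * lx1) >= 0.0:   # l(tau) in L?
            return tau
    return phi_r

# A hypothetical f in ri(C) with f_x parallel to lambda, and a ray r with
# a^T r_x + d^T r_y = 0 so that l(tau) stays on the hyperplane H.
f_y = np.array([0.3, 0.0])
f_x = ((-1.0 - d @ f_y) / la) * lam
r_x, r_y = np.array([0.0, 1.0]), np.array([0.0, 3.0])

tau = psi_M(r_x, r_y, f_x, f_y, phi_r=np.inf)
lx, ly = r_x + x0 + tau * (f_x - x0), r_y + tau * f_y
print(round(tau, 4), np.isclose(np.linalg.norm(lx), np.linalg.norm(ly)))  # -> 1.4164 True
```

For this ray the largest root lands in L, so the strengthened coefficient is the root itself; rays whose quadratic has no real root, or whose largest root misses L, fall back to \(\phi (r)\).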
1.7 Proof of Lemma 6.1
Lemma 6.1
The relative boundary of \(C - M\) is contained in \(S^{g}- {{\,\textrm{rec}\,}}(C)\).
Proof
Let (x, y) be in the relative boundary of \(C - M\). By Proposition A.1, \(C - M = L \cup C\) and so \((x,y) \in L \cup C\). We proceed by cases.
Assume that \((x,y) \in L\). Then (x, y) must be in the boundary of L. Thus, (x, y) satisfies either \(\Vert x\Vert =\Vert y\Vert \), or both \(\Vert x\Vert > \Vert y\Vert \) and \((a - \lambda ^\textsf{T}a \lambda )^\textsf{T}x = 0\). In the second case, Lemma A.4 implies that \((x,y) \in {{\,\textrm{ri}\,}}(C)\), which is impossible since, by hypothesis, (x, y) lies on the boundary of \(L \cup C\). We conclude that \(\Vert x\Vert =\Vert y\Vert \), which means that \((x,y) \in S^{g}\).
Assume now that \((x,y) \in C \setminus L\). Then (x, y) must be in the boundary of C. We show that \((x,y) \in S^{g}- {{\,\textrm{rec}\,}}(C)\) by exhibiting an \(r \in {{\,\textrm{rec}\,}}(C)\) such that \((x,y) + r \in S^{g}\).
Recall that the apex of C is \((x_0, 0)\).
If (x, y) is the apex, then we can take \(r = e - (x_0,0)\) with \(e \in C \cap S^{g}\). If (x, y) is not the apex, then we take
To prove that \(r \in {{\,\textrm{rec}\,}}(C)\) we need to show that \(\frac{x_0^\textsf{T}x}{\Vert x_0\Vert ^2 - x_0^\textsf{T}x} \ge 0\). By Lemma A.1 the denominator is positive. Since \((x,y) \notin L\), the numerator is also positive.
Finally, we have to verify that \((x,y) + r \in S^{g}\). By construction, \(\lambda ^\textsf{T}(x + r_x) = \Vert y + r_y\Vert \), so it is enough to show that \(\lambda ^\textsf{T}(x + r_x) = \Vert x + r_x\Vert \). This follows from the fact that \(\{x_0, \lambda \}\) form an orthogonal basis of \(\langle \lambda , a \rangle \) and \(x_0^\textsf{T}(x + r_x) = 0\): writing \(x + r_x\) in this basis, the component along \(x_0\) vanishes, so \(x + r_x\) is a multiple of \(\lambda \); the multiple is non-negative because \(\lambda ^\textsf{T}(x + r_x) = \Vert y + r_y\Vert \ge 0\), and hence \(\Vert x + r_x\Vert = \lambda ^\textsf{T}(x + r_x)\).
\(\square \)
1.8 Proof of Proposition 7.1
To show Proposition 7.1, we are going to map C to \(\hat{C} :=T(C)\), where it is easier to find the apex \(\hat{\nu }\). Recall that \(T :s \mapsto (\hat{x}(s), \hat{y}(s), \hat{z}(s))\) with \((\hat{x}(s), \hat{y}(s), \hat{z}(s))\) as defined in Section 7. The apex of C is then given by \(\nu = T^{-1}(\hat{\nu })\).
To compute the apex of \(\hat{C}\), we have to intersect it with \({{\,\textrm{lin}\,}}(\hat{C})^{\perp }\), that is, the orthogonal of its lineality space. We have that
Since \({{\,\textrm{lin}\,}}(\hat{C}) = \langle \{ \lambda _x \} \rangle ^{\perp } \times \{ 0 \}^{|I_-|} \times \mathbb {R}^{|I_0|}\), we have \({{\,\textrm{lin}\,}}(\hat{C})^{\perp } = \langle \{ \lambda _x \} \rangle \times \mathbb {R}^{|I_-|} \times \{ 0 \}^{|I_0|}\). Thus,
Let us define \(\hat{\nu } :=(\hat{x}_0, \hat{y}_0, 0)\). Since the apex must satisfy \(\hat{y} = 0\) and \(\alpha \Vert \lambda _x \Vert ^2 + \lambda _{-1} = 0\), we get \(\hat{x}_0 = -\frac{\lambda _{-1}}{\Vert \lambda _x \Vert ^2}, \lambda _x = - \frac{\hat{x}(\bar{s})}{\Vert \hat{x}(\bar{s}) \Vert ^2}\) and \(\hat{y}_0 = 0\).
For the next step to compute \(\nu \), we need to find the inverse transformation \(T^{-1}\). Notice that \(T(s) = \frac{1}{\sqrt{\kappa }} \hat{\Theta } (V^\textsf{T}s + \hat{b})\) where \(\hat{\Theta } = {{\,\textrm{diag}\,}}(\hat{\theta })\) and \(\hat{b} :=(\hat{b}_i)_{i \in [p]}\) with
Since V and \(\hat{\Theta }\) are both invertible, the inverse of T is given by \(T^{-1}(\hat{x}, \hat{y}, \hat{z}) = V (\sqrt{\kappa }\, \hat{\Theta }^{-1} (\hat{x}, \hat{y}, \hat{z}) - \hat{b})\).
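As a quick sanity check of the inverse (our own sketch, with randomly generated data; we assume V orthogonal and \(\hat{\Theta }\) an invertible diagonal matrix, matching the form \(T(s) = \frac{1}{\sqrt{\kappa }} \hat{\Theta } (V^\textsf{T}s + \hat{b})\) above), one can verify numerically that composing T with its inverse recovers s:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
kappa = 2.0
V, _ = np.linalg.qr(rng.standard_normal((p, p)))  # orthogonal V (Q factor of a QR)
theta = rng.uniform(0.5, 2.0, size=p)             # diagonal of invertible Theta_hat
b_hat = rng.standard_normal(p)

def T(s):
    # T(s) = (1/sqrt(kappa)) * Theta_hat @ (V.T @ s + b_hat)
    return (1.0 / np.sqrt(kappa)) * theta * (V.T @ s + b_hat)

def T_inv(w):
    # solving for s: sqrt(kappa) * Theta_hat^{-1} @ w - b_hat = V.T @ s,
    # then left-multiply by V (V.T inverse = V by orthogonality)
    return V @ (np.sqrt(kappa) * w / theta - b_hat)

s = rng.standard_normal(p)
recovered = T_inv(T(s))
```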
Finally, the apex of C is
which concludes the proof.
Chmiela, A., Muñoz, G. & Serrano, F. Monoidal strengthening and unique lifting in MIQCPs. Math. Program. 210, 189–222 (2025). https://doi.org/10.1007/s10107-024-02112-0