OR-Tools  9.0
revised_simplex.h
// Copyright 2010-2021 Google LLC
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Solves a Linear Programming problem using the Revised Simplex algorithm
// as described by G.B. Dantzig.
// The general form is:
// min c.x where c and x are n-vectors,
// subject to Ax = b where A is an mxn-matrix, b an m-vector,
// with l <= x <= u, i.e.
// l_i <= x_i <= u_i for all i in {1 .. n}.
//
// c.x is called the objective function.
// Each row a_i of A is an n-vector, and a_i.x = b_i is a linear constraint.
// A is called the constraint matrix.
// b is called the right hand side (rhs) of the problem.
// The constraints l_i <= x_i <= u_i are called the generalized bounds
// of the problem (most introductory textbooks only deal with x_i >= 0, as
// did the first version of the Simplex algorithm). Note that l_i and u_i
// can be -infinity and +infinity, respectively.
//
// To simplify the entry of data, this code actually handles problems in the
// form:
// min c.x where c and x are n-vectors,
// subject to:
// A1 x <= b1
// A2 x >= b2
// A3 x  = b3
// l <= x <= u
//
// It transforms the above problem into
// min c.x where c and x are n-vectors,
// subject to:
// A1 x + s1 = b1
// A2 x - s2 = b2
// A3 x      = b3
// l <= x <= u
// s1 >= 0, s2 >= 0
// where xT = (x1, x2, x3),
// s1 is an m1-vector (m1 being the height of A1),
// s2 is an m2-vector (m2 being the height of A2).
//
// The following are very good references for terminology, data structures,
// and algorithms. They all contain a wealth of references.
//
// Vasek Chvátal, "Linear Programming," W.H. Freeman, 1983. ISBN 978-0716715870.
// http://www.amazon.com/dp/0716715872
//
// Robert J. Vanderbei, "Linear Programming: Foundations and Extensions,"
// Springer, 2010. ISBN-13: 978-1441944979.
// http://www.amazon.com/dp/1441944974
//
// Istvan Maros, "Computational Techniques of the Simplex Method," Springer,
// 2002. ISBN 978-1402073328.
// http://www.amazon.com/dp/1402073321
//
// ===============================================
// Short description of the dual simplex algorithm.
//
// The dual simplex algorithm uses the same data structures as the primal, but
// progresses towards the optimal solution in a different way:
// * It tries to keep the dual values dual-feasible at all times, which means
//   that the reduced costs have the correct sign depending on the bounds of
//   the non-basic variables. As a consequence, the values of the basic
//   variables are out of bounds until the optimum is reached.
// * A basic leaving variable is selected first (dual pricing) and then a
//   corresponding entering variable is selected. This is done in such a way
//   that the dual objective value increases (it is a lower bound on the
//   optimal solution).
// * Once the basis pivot is chosen, the variable values and the reduced costs
//   are updated the same way as in the primal algorithm.
//
// Good references on the dual simplex algorithm are:
//
// Robert Fourer, "Notes on the Dual Simplex Method", March 14, 1994.
// http://users.iems.northwestern.edu/~4er/WRITINGS/dual.pdf
//
// Achim Koberstein, "The dual simplex method, techniques for a fast and stable
// implementation", PhD thesis, Paderborn Univ., 2005.
// http://digital.ub.uni-paderborn.de/hs/download/pdf/3885?originalFilename=true

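The sign convention above can be sketched as follows; this is a hypothetical helper for intuition only, not Glop's actual representation. For a minimization problem, a non-basic variable at its lower bound needs a non-negative reduced cost, one at its upper bound needs a non-positive reduced cost, and a free non-basic variable needs a zero reduced cost.

```cpp
#include <cassert>

// Status of a non-basic variable (illustrative enum, mirrors the usual
// textbook convention rather than Glop's VariableStatus).
enum class NonBasicStatus { kAtLowerBound, kAtUpperBound, kFree };

// Returns true if the reduced cost has the sign required for dual
// feasibility in a minimization problem. Tolerances omitted for clarity.
bool IsDualFeasible(NonBasicStatus status, double reduced_cost) {
  switch (status) {
    case NonBasicStatus::kAtLowerBound:
      return reduced_cost >= 0.0;
    case NonBasicStatus::kAtUpperBound:
      return reduced_cost <= 0.0;
    case NonBasicStatus::kFree:
      return reduced_cost == 0.0;
  }
  return false;
}
```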
#ifndef OR_TOOLS_GLOP_REVISED_SIMPLEX_H_
#define OR_TOOLS_GLOP_REVISED_SIMPLEX_H_

#include <cstdint>
#include <string>
#include <vector>

#include "absl/random/bit_gen_ref.h"
#include "ortools/base/macros.h"
#include "ortools/glop/pricing.h"
#include "ortools/glop/status.h"
#include "ortools/glop/update_row.h"
#include "ortools/lp_data/lp_data.h"
#include "ortools/util/time_limit.h"

namespace operations_research {
namespace glop {

// Entry point of the revised simplex algorithm implementation.
class RevisedSimplex {
 public:
  RevisedSimplex();

  // Sets or gets the algorithm parameters to be used on the next Solve().
  void SetParameters(const GlopParameters& parameters);
  const GlopParameters& GetParameters() const { return parameters_; }

  // Solves the given linear program.
  //
  // Expects that the linear program is in the equations form Ax = 0 created by
  // LinearProgram::AddSlackVariablesForAllRows, i.e. the rightmost square
  // submatrix of A is an identity matrix, all its columns have been marked as
  // slack variables, and the bounds of all constraints have been set to [0, 0].
  // Returns ERROR_INVALID_PROBLEM if these assumptions are violated.
  //
  // By default, the algorithm tries to exploit the computation done during the
  // last Solve() call. It will analyze the difference of the new linear program
  // and try to use the previously computed solution as a warm-start. To disable
  // this behavior or give explicit warm-start data, use one of the State*()
  // functions below.
  ABSL_MUST_USE_RESULT Status Solve(const LinearProgram& lp,
                                    TimeLimit* time_limit);
  // Do not use the current solution as a warm-start for the next Solve(). The
  // next Solve() will behave as if the class just got created.
  void ClearStateForNextSolve();

  // Uses the given state as a warm-start for the next Solve() call.
  void LoadStateForNextSolve(const BasisState& state);

  // Advanced usage. Tells the next Solve() that the matrix inside the linear
  // program will not change compared to the one used the last time Solve() was
  // called. This allows bypassing the somewhat costly check of comparing both
  // matrices. Note that this call will be ignored if Solve() was never called
  // or if ClearStateForNextSolve() was called.
  void NotifyThatMatrixIsUnchangedForNextSolve();

  // Getters to retrieve all the information computed by the last Solve().
  RowIndex GetProblemNumRows() const;
  ColIndex GetProblemNumCols() const;
  int64_t GetNumberOfIterations() const;
  Fractional GetVariableValue(ColIndex col) const;
  Fractional GetReducedCost(ColIndex col) const;
  const DenseRow& GetReducedCosts() const;
  Fractional GetDualValue(RowIndex row) const;
  Fractional GetConstraintActivity(RowIndex row) const;
  VariableStatus GetVariableStatus(ColIndex col) const;
  ConstraintStatus GetConstraintStatus(RowIndex row) const;
  const BasisState& GetState() const;
  double DeterministicTime() const;
  bool objective_limit_reached() const { return objective_limit_reached_; }

  // If the problem status is PRIMAL_UNBOUNDED (respectively DUAL_UNBOUNDED),
  // then the solver has a corresponding primal (respectively dual) ray to show
  // the unboundedness. From a primal (respectively dual) feasible solution,
  // any positive multiple of this ray can be added to the solution while
  // keeping it feasible. Moreover, by doing so, the objective of the problem
  // will improve and its magnitude will go to infinity.
  //
  // Note that when the problem is DUAL_UNBOUNDED, the dual ray is also known as
  // the Farkas proof of infeasibility of the problem.
  const DenseRow& GetPrimalRay() const;
  const DenseColumn& GetDualRay() const;

  // This is the "dual ray" linear combination of the matrix rows.
  const DenseRow& GetDualRayRowCombination() const;

  // Returns the index of the column in the basis and the basis factorization.
  // Note that the order of the columns in the basis is important since it is
  // the one used by the various solve functions provided by the
  // BasisFactorization class.
  ColIndex GetBasis(RowIndex row) const;

  const ScatteredRow& ComputeAndGetUnitRowLeftInverse(RowIndex row) {
    return update_row_.ComputeAndGetUnitRowLeftInverse(row);
  }

  // Returns a copy of basis_ vector for outside applications (like cuts) to
  // have the correspondence between rows and columns of the dictionary.
  RowToColMapping GetBasisVector() const { return basis_; }

  // Returns statistics about this class as a string.
  std::string StatString();

  // Computes the dictionary B^-1*N on-the-fly row by row. Returns the resulting
  // matrix as a vector of sparse rows so that it is easy to use it on the left
  // side in the matrix multiplication. Runs in O(num_non_zeros_in_matrix).
  // TODO(user): Use row scales as well.
  RowMajorSparseMatrix ComputeDictionary(const DenseRow* column_scales);

  // Initializes the matrix for the given 'linear_program' and 'state' and
  // computes the variable values for basic variables using non-basic variables.
  void ComputeBasicVariablesForState(const LinearProgram& linear_program,
                                     const BasisState& state);

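Computing basic values from non-basic ones boils down to solving B.x_B = b - N.x_N. A minimal 1x1 dense illustration of that relation (a hypothetical helper; the actual implementation uses the sparse basis factorization):

```cpp
#include <cassert>

// With a 1x1 basis B = [b00], one non-basic column with coefficient n and
// value x_n, and right-hand side rhs, the basic variable value solves
// B.x_B = rhs - n * x_n. Illustrative only, not the Glop implementation.
double BasicValueFromNonBasic(double b00, double n, double x_n, double rhs) {
  assert(b00 != 0.0);  // A basis column coefficient cannot be zero here.
  return (rhs - n * x_n) / b00;
}
```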
  // This is used in a MIP context to polish the final basis. We assume that the
  // columns for which SetIntegralityScale() has been called correspond to
  // integral variables once multiplied by the given factor.
  void ClearIntegralityScales() { integrality_scale_.clear(); }
  void SetIntegralityScale(ColIndex col, Fractional scale);

 private:
  // Propagates parameters_ to all the other classes that need it.
  //
  // TODO(user): Maybe a better design is for them to have a reference to a
  // unique parameters object? It will clutter a bit more these classes'
  // constructor though.
  void PropagateParameters();

  // Returns a string containing the same information as with GetSolverStats,
  // but in a much more human-readable format. For example:
  //   Problem status                          : Optimal
  //   Solving time                            : 1.843
  //   Number of iterations                    : 12345
  //   Time for solvability (first phase)      : 1.343
  //   Number of iterations for solvability    : 10000
  //   Time for optimization                   : 0.5
  //   Number of iterations for optimization   : 2345
  //   Maximum time allowed in seconds         : 6000
  //   Maximum number of iterations            : 1000000
  //   Stop after first basis                  : 0
  std::string GetPrettySolverStats() const;

  // Returns a string containing formatted information about the variable
  // corresponding to column col.
  std::string SimpleVariableInfo(ColIndex col) const;

  // Displays a short string with the current iteration and objective value.
  void DisplayIterationInfo() const;

  // Displays the error bounds of the current solution.
  void DisplayErrors() const;

  // Displays the status of the variables.
  void DisplayInfoOnVariables() const;

  // Displays the bounds of the variables.
  void DisplayVariableBounds();

  // Displays the following information:
  //   * Linear Programming problem as a dictionary, taking into
  //     account the iterations that have been made;
  //   * Variable info;
  //   * Reduced costs;
  //   * Variable bounds.
  // A dictionary is in the form:
  //   xB = value + sum_{j in N} pa_ij x_j
  //   z = objective_value + sum_{i in N} rc_i x_i
  // where the pa's are the coefficients of the matrix after the pivotings
  // and the rc's are the reduced costs, i.e. the coefficients of the objective
  // after the pivotings.
  // Dictionaries are the modern way of presenting the result of an iteration
  // of the Simplex algorithm in the literature.
  void DisplayRevisedSimplexDebugInfo();

  // Displays the Linear Programming problem as it was input.
  void DisplayProblem() const;

  // Returns the current objective value. This is just the sum of the current
  // variable values times their current cost.
  Fractional ComputeObjectiveValue() const;

  // Returns the current objective of the linear program given to Solve() using
  // the initial costs, maximization direction, objective offset and objective
  // scaling factor.
  Fractional ComputeInitialProblemObjectiveValue() const;

  // Assigns names to variables. Variables in the input will be named
  // x1..., slack variables will be s1... .
  void SetVariableNames();

  // Sets the variable status and derives the variable value according to the
  // exact status definition. This can only be called for non-basic variables
  // because the value of a basic variable is computed from the values of the
  // non-basic variables.
  void SetNonBasicVariableStatusAndDeriveValue(ColIndex col,
                                               VariableStatus status);

  // Checks if the basis_ and is_basic_ arrays are well formed. Also checks that
  // the variable statuses are consistent with this basis. Returns true if this
  // is the case. This is meant to be used in debug mode only.
  bool BasisIsConsistent() const;

  // Moves the column entering_col into the basis at position basis_row. Removes
  // the current basis column at position basis_row from the basis and sets its
  // status to leaving_variable_status.
  void UpdateBasis(ColIndex entering_col, RowIndex basis_row,
                   VariableStatus leaving_variable_status);

  // Initializes matrix-related internal data. Returns true if this data was
  // unchanged. If not, also sets only_change_is_new_rows to true if, compared
  // to the current matrix, the only difference is that new rows have been
  // added (with their corresponding extra slack variables). Similarly, sets
  // only_change_is_new_cols to true if the only difference is that new columns
  // have been added, in which case also sets num_new_cols to the number of
  // new columns.
  bool InitializeMatrixAndTestIfUnchanged(const LinearProgram& lp,
                                          bool* only_change_is_new_rows,
                                          bool* only_change_is_new_cols,
                                          ColIndex* num_new_cols);

  // Checks if the only change to the bounds is the addition of new columns,
  // and that the new columns have at least one bound equal to zero.
  bool OldBoundsAreUnchangedAndNewVariablesHaveOneBoundAtZero(
      const LinearProgram& lp, ColIndex num_new_cols);

  // Initializes objective-related internal data. Returns true if unchanged.
  bool InitializeObjectiveAndTestIfUnchanged(const LinearProgram& lp);

  // Computes the stopping criterion on the problem objective value.
  void InitializeObjectiveLimit(const LinearProgram& lp);

  // Initializes the starting basis. In most cases it starts by the all slack
  // basis and tries to apply some heuristics to replace fixed variables.
  ABSL_MUST_USE_RESULT Status CreateInitialBasis();

  // Sets the initial basis to the given columns, tries to factorize it and
  // recomputes the basic variable values.
  ABSL_MUST_USE_RESULT Status
  InitializeFirstBasis(const RowToColMapping& initial_basis);

  // Entry point for the solver initialization.
  ABSL_MUST_USE_RESULT Status Initialize(const LinearProgram& lp);

  // Saves the current variable statuses in solution_state_.
  void SaveState();

  // Displays statistics on what kinds of variables are in the current basis.
  void DisplayBasicVariableStatistics();

  // Tries to reduce the initial infeasibility (stored in error_) by using the
  // singleton columns present in the problem. A singleton column is a column
  // with only one non-zero. This is used by CreateInitialBasis().
  void UseSingletonColumnInInitialBasis(RowToColMapping* basis);

  // Returns the number of empty rows in the matrix, i.e. rows where all
  // the coefficients are zero.
  RowIndex ComputeNumberOfEmptyRows();

  // Returns the number of empty columns in the matrix, i.e. columns where all
  // the coefficients are zero.
  ColIndex ComputeNumberOfEmptyColumns();

  // This method transforms a basis for the first phase, with the optimal
  // value at zero, into a feasible basis for the initial problem, thus
  // preparing the execution of phase-II of the algorithm.
  void CleanUpBasis();

  // If the primal maximum residual is too large, recomputes the basic variable
  // values from the non-basic ones. This function also perturbs the bounds
  // during the primal simplex if too many iterations are degenerate.
  //
  // Only call this on a refactorized basis to have the best precision.
  void CorrectErrorsOnVariableValues();

  // Computes b - A.x in error_.
  void ComputeVariableValuesError();

  // Solves the system B.d = a where a is the entering column (given by col).
  // Known as FTRAN (Forward transformation) in FORTRAN codes.
  // See Chvatal's book for more detail (Chapter 7).
  void ComputeDirection(ColIndex col);

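For intuition, here is a dense 2x2 stand-in for such a solve (illustrative only; the actual implementation reuses the sparse LU factorization of B held by the BasisFactorization class):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Solves B.d = a for a dense 2x2 basis matrix B by Cramer's rule. The real
// FTRAN instead reuses the LU factorization of B. Illustrative only.
std::array<double, 2> SolveBasis2x2(
    const std::array<std::array<double, 2>, 2>& B,
    const std::array<double, 2>& a) {
  const double det = B[0][0] * B[1][1] - B[0][1] * B[1][0];
  assert(std::abs(det) > 1e-12);  // B must be non-singular to be a basis.
  return {(a[0] * B[1][1] - B[0][1] * a[1]) / det,
          (B[0][0] * a[1] - a[0] * B[1][0]) / det};
}
```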
  // Computes a - B.d in error_ and returns the maximum std::abs() of its
  // coefficients.
  Fractional ComputeDirectionError(ColIndex col);

  // Computes the ratio of the basic variable corresponding to 'row'. A target
  // bound (upper or lower) is chosen depending on the sign of the entering
  // reduced cost and the sign of the direction 'd_[row]'. The ratio is such
  // that adding 'ratio * d_[row]' to the variable value changes it to its
  // target bound.
  template <bool is_entering_reduced_cost_positive>
  Fractional GetRatio(const DenseRow& lower_bounds,
                      const DenseRow& upper_bounds, RowIndex row) const;

  // First pass of the Harris ratio test. Returns the Harris ratio value, which
  // is an upper bound on the ratio value that the leaving variable can take.
  // Fills leaving_candidates with the ratio and row index of a super-set of the
  // columns with a ratio <= harris_ratio.
  template <bool is_entering_reduced_cost_positive>
  Fractional ComputeHarrisRatioAndLeavingCandidates(
      Fractional bound_flip_ratio, SparseColumn* leaving_candidates) const;

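Stripped of tolerances and candidate bookkeeping, the per-row quantity behind these two functions can be sketched like this (a hypothetical helper, not the templated members above): pick the bound the variable moves towards and divide the remaining distance by the direction coefficient.

```cpp
#include <cassert>

// Ratio such that value + ratio * d lands exactly on the bound the variable
// is pushed towards: the upper bound if d > 0, the lower bound if d < 0.
// The ratio is >= 0 whenever value is within its bounds. Hypothetical sketch
// for intuition; the real test also handles tolerances and ties.
double RatioToBound(double value, double lower, double upper, double d) {
  assert(d != 0.0);
  const double target = d > 0.0 ? upper : lower;
  return (target - value) / d;
}
```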
  // Chooses the leaving variable, considering the entering column and its
  // associated reduced cost. If there was a precision issue and the basis is
  // not refactorized, sets refactorize to true. Otherwise, the row number of
  // the leaving variable is written in *leaving_row, and the step length
  // is written in *step_length.
  Status ChooseLeavingVariableRow(ColIndex entering_col,
                                  Fractional reduced_cost, bool* refactorize,
                                  RowIndex* leaving_row,
                                  Fractional* step_length,
                                  Fractional* target_bound);

  // Chooses the leaving variable for the primal phase-I algorithm. The
  // algorithm follows more or less what is described in Istvan Maros's book in
  // chapter 9.6 and what is done for the dual phase-I algorithm, which was
  // derived from Koberstein's PhD. Both references can be found at the top of
  // this file.
  void PrimalPhaseIChooseLeavingVariableRow(ColIndex entering_col,
                                            Fractional reduced_cost,
                                            bool* refactorize,
                                            RowIndex* leaving_row,
                                            Fractional* step_length,
                                            Fractional* target_bound) const;

  // Chooses an infeasible basic variable. The returned values are:
  // - leaving_row: the basic index of the infeasible leaving variable,
  //   or kNoLeavingVariable if no such row exists: the dual simplex algorithm
  //   has terminated and the optimum has been reached.
  // - cost_variation: how much we improve the objective by moving one unit
  //   along this dual edge.
  // - target_bound: the bound at which the leaving variable should go when
  //   leaving the basis.
  ABSL_MUST_USE_RESULT Status DualChooseLeavingVariableRow(
      RowIndex* leaving_row, Fractional* cost_variation,
      Fractional* target_bound);

  // Updates the prices used by DualChooseLeavingVariableRow() after a simplex
  // iteration by using direction_. The prices are stored in
  // dual_pricing_vector_. Note that this function only takes care of the
  // entering and leaving column dual feasibility status change and that other
  // changes will be dealt with by DualPhaseIUpdatePriceOnReducedCostChange().
  void DualPhaseIUpdatePrice(RowIndex leaving_row, ColIndex entering_col);

  // This must be called each time the dual_pricing_vector_ is changed at
  // position row.
  template <bool use_dense_update = false>
  void OnDualPriceChange(const DenseColumn& squared_norms, RowIndex row,
                         VariableType type, Fractional threshold);

  // Updates the prices used by DualChooseLeavingVariableRow() when the reduced
  // costs of the given columns have changed.
  template <typename Cols>
  void DualPhaseIUpdatePriceOnReducedCostChange(const Cols& cols);

  // Same as DualChooseLeavingVariableRow() but for phase I of the dual
  // simplex. Here the objective is not to minimize the primal infeasibility,
  // but the dual one, so the variable is not chosen in the same way. See
  // "Notes on the Dual Simplex Method" or Istvan Maros, "A Piecewise Linear
  // Dual Phase-1 Algorithm for the Simplex Method", Computational Optimization
  // and Applications, October 2003, Volume 26, Issue 1, pp 63-81.
  // http://rd.springer.com/article/10.1023%2FA%3A1025102305440
  ABSL_MUST_USE_RESULT Status DualPhaseIChooseLeavingVariableRow(
      RowIndex* leaving_row, Fractional* cost_variation,
      Fractional* target_bound);

  // Makes sure the boxed variables are dual-feasible by setting them to the
  // correct bound according to their reduced costs. This is called
  // dual feasibility correction in the literature.
  //
  // Note that this function is also used as a part of the bound flipping ratio
  // test by flipping the boxed dual-infeasible variables at each iteration.
  //
  // If update_basic_values is true, the basic variable values are updated.
  template <typename BoxedVariableCols>
  void MakeBoxedVariableDualFeasible(const BoxedVariableCols& cols,
                                     bool update_basic_values);

  // Computes the step needed to move the leaving_row basic variable to the
  // given target bound.
  Fractional ComputeStepToMoveBasicVariableToBound(RowIndex leaving_row,
                                                   Fractional target_bound);

  // Returns true if the basis obtained after the given pivot can be factorized.
  bool TestPivot(ColIndex entering_col, RowIndex leaving_row);

  // Gets the current LU column permutation from basis_representation,
  // applies it to basis_ and then sets it to the identity permutation, since
  // it will no longer be needed during solves. This function also updates all
  // the data that depends on the column order in basis_.
  void PermuteBasis();

  // Updates the system state according to the given basis pivot.
  // Returns an error if the update could not be done because of some precision
  // issue.
  ABSL_MUST_USE_RESULT Status UpdateAndPivot(ColIndex entering_col,
                                             RowIndex leaving_row,
                                             Fractional target_bound);

  // Displays all the timing stats related to the calling object.
  void DisplayAllStats();

  // Returns whether or not a basis refactorization is needed at the beginning
  // of the main loop in Minimize() or DualMinimize(). The idea is that if a
  // refactorization is going to be needed by one of the components, it is
  // better to do it as soon as possible so that every component can take
  // advantage of it.
  bool NeedsBasisRefactorization(bool refactorize);

  // Calls basis_factorization_.Refactorize() depending on the result of
  // NeedsBasisRefactorization(). Invalidates any data structure that depends
  // on the current factorization. Sets refactorize to false.
  Status RefactorizeBasisIfNeeded(bool* refactorize);

  // Minimizes the objective function, be it for satisfiability or for
  // optimization. Used by Solve().
  ABSL_MUST_USE_RESULT Status Minimize(TimeLimit* time_limit);

  // Same as Minimize() for the dual simplex algorithm.
  // TODO(user): remove duplicate code between the two functions.
  ABSL_MUST_USE_RESULT Status DualMinimize(bool feasibility_phase,
                                           TimeLimit* time_limit);

  // Experimental. This is useful in a MIP context. It performs a few degenerate
  // pivots to try to minimize the fractionality of the optimal basis.
  //
  // We assume that the columns for which SetIntegralityScale() has been called
  // correspond to integral variables once scaled by the given factor.
  //
  // I could only find slides for the reference of this: "LP Solution Polishing
  // to improve MIP Performance", Matthias Miltenberger, Zuse Institute Berlin.
  ABSL_MUST_USE_RESULT Status Polish(TimeLimit* time_limit);

  // Utility function to return the current ColIndex of the slack column with
  // the given number. Note that currently, such columns are always present in
  // the internal representation of a linear program.
  ColIndex SlackColIndex(RowIndex row) const;

  // Advances the deterministic time in time_limit with the difference between
  // the current internal deterministic time and the internal deterministic time
  // during the last call to this method.
  // TODO(user): Update the internals of revised simplex so that the time
  // limit is updated at the source and remove this method.
  void AdvanceDeterministicTime(TimeLimit* time_limit);

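The delta bookkeeping described above can be sketched as follows (a hypothetical stand-alone struct; the real method charges the delta to the passed TimeLimit object):

```cpp
#include <cassert>

// Sketch of the delta pattern used by such an advance method: each call
// reports only the deterministic time accumulated since the previous call.
// Hypothetical stand-alone version, not the Glop member function.
struct DeterministicClock {
  double current = 0.0;      // Total deterministic time so far.
  double last_update = 0.0;  // Baseline from the previous Advance() call.

  // Returns the delta to charge and remembers the new baseline.
  double Advance() {
    const double delta = current - last_update;
    last_update = current;
    return delta;
  }
};
```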
  // Problem status.
  ProblemStatus problem_status_;

  // Current number of rows in the problem.
  RowIndex num_rows_;

  // Current number of columns in the problem.
  ColIndex num_cols_;

  // Index of the first slack variable in the input problem. We assume that all
  // variables with index greater or equal to first_slack_col_ are slack
  // variables.
  ColIndex first_slack_col_;

  // We're using vectors after profiling; looking at the generated assembly,
  // they are as fast as std::unique_ptr as long as the size is properly
  // reserved beforehand.

  // Compact version of the matrix given to Solve().
  CompactSparseMatrix compact_matrix_;

  // The transpose of compact_matrix_; it may be empty if it is not needed.
  CompactSparseMatrix transposed_matrix_;

  // Stop the algorithm and report feasibility if:
  // - The primal simplex is used, the problem is primal-feasible and the
  //   current objective value is strictly lower than primal_objective_limit_.
  // - The dual simplex is used, the problem is dual-feasible and the current
  //   objective value is strictly greater than dual_objective_limit_.
  Fractional primal_objective_limit_;
  Fractional dual_objective_limit_;

  // Current objective (feasibility for Phase-I, user-provided for Phase-II).
  DenseRow current_objective_;

  // Array of coefficients for the user-defined objective.
  // Indexed by column number. Used in Phase-II.
  DenseRow objective_;

  // Objective offset and scaling factor of the linear program given to Solve().
  // This is used to display the correct objective values in the logs with
  // ComputeInitialProblemObjectiveValue().
  Fractional objective_offset_;
  Fractional objective_scaling_factor_;

  // Used in dual phase I to keep track of the non-basic dual infeasible
  // columns and their sign of infeasibility (+1 or -1).
  DenseRow dual_infeasibility_improvement_direction_;
  int num_dual_infeasible_positions_;

  // A temporary scattered column that is always reset to all zero after use.
  ScatteredColumn initially_all_zero_scratchpad_;

  // Array of column indices, giving the column number corresponding
  // to a given basis row.
  RowToColMapping basis_;

  // Vector of strings containing the names of variables.
  // Indexed by column number.
  StrictITIVector<ColIndex, std::string> variable_name_;

  // Information about the solution computed by the last Solve().
  Fractional solution_objective_value_;
  DenseColumn solution_dual_values_;
  DenseRow solution_reduced_costs_;
  DenseRow solution_primal_ray_;
  DenseColumn solution_dual_ray_;
  DenseRow solution_dual_ray_row_combination_;
  BasisState solution_state_;
  bool solution_state_has_been_set_externally_;

  // Flag used by NotifyThatMatrixIsUnchangedForNextSolve() to change
  // the behavior of Initialize().
  bool notify_that_matrix_is_unchanged_ = false;

  // This is known as 'd' in the literature and is set during each pivot to the
  // right inverse of the basic entering column of A by ComputeDirection().
  // ComputeDirection() also fills direction_.non_zeros with the positions of
  // the non-zeros.
  ScatteredColumn direction_;
  Fractional direction_infinity_norm_;

  // Used to compute the error 'b - A.x' or 'a - B.d'.
  DenseColumn error_;

  // A random number generator. In tests, we use absl_random_ to get
  // non-deterministic behavior and avoid clients depending on a golden optimal
  // solution, which would prevent us from easily changing the solver.
  random_engine_t deterministic_random_;
#ifndef NDEBUG
  absl::BitGen absl_random_;
#endif
  absl::BitGenRef random_;

  // Representation of matrix B using eta matrices and LU decomposition.
  BasisFactorization basis_factorization_;

  // Classes responsible for maintaining the data of the corresponding names.
  VariablesInfo variables_info_;
  PrimalEdgeNorms primal_edge_norms_;
  DualEdgeNorms dual_edge_norms_;
  DynamicMaximum<RowIndex> dual_prices_;
  VariableValues variable_values_;
  UpdateRow update_row_;
  ReducedCosts reduced_costs_;
  EnteringVariable entering_variable_;
  PrimalPrices primal_prices_;

  // Used in dual phase I to hold the price of each possible leaving choice.
  DenseColumn dual_pricing_vector_;

  // Temporary memory used by DualMinimize().
  std::vector<ColIndex> bound_flip_candidates_;

  // Total number of iterations performed.
  uint64_t num_iterations_;

  // Number of iterations performed during the first (feasibility) phase.
  uint64_t num_feasibility_iterations_;

  // Number of iterations performed during the second (optimization) phase.
  uint64_t num_optimization_iterations_;

  // Deterministic time for DualPhaseIUpdatePriceOnReducedCostChange().
  int64_t num_update_price_operations_ = 0;

  // Total time spent in Solve().
  double total_time_;

  // Time spent in the first (feasibility) phase.
  double feasibility_time_;

  // Time spent in the second (optimization) phase.
  double optimization_time_;

  // The internal deterministic time during the most recent call to
  // RevisedSimplex::AdvanceDeterministicTime().
  double last_deterministic_time_update_;
693 
694  // Statistics about the iterations done by Minimize().
695  struct IterationStats : public StatsGroup {
696  IterationStats()
697  : StatsGroup("IterationStats"),
698  total("total", this),
699  normal("normal", this),
700  bound_flip("bound_flip", this),
701  refactorize("refactorize", this),
702  degenerate("degenerate", this),
703  num_dual_flips("num_dual_flips", this),
704  degenerate_run_size("degenerate_run_size", this) {}
705  TimeDistribution total;
706  TimeDistribution normal;
707  TimeDistribution bound_flip;
708  TimeDistribution refactorize;
709  TimeDistribution degenerate;
710  IntegerDistribution num_dual_flips;
711  IntegerDistribution degenerate_run_size;
712  };
713  IterationStats iteration_stats_;
714 
715  struct RatioTestStats : public StatsGroup {
716  RatioTestStats()
717  : StatsGroup("RatioTestStats"),
718  bound_shift("bound_shift", this),
719  abs_used_pivot("abs_used_pivot", this),
720  abs_tested_pivot("abs_tested_pivot", this),
721  abs_skipped_pivot("abs_skipped_pivot", this),
722  direction_density("direction_density", this),
723  leaving_choices("leaving_choices", this),
724  num_perfect_ties("num_perfect_ties", this) {}
725  DoubleDistribution bound_shift;
726  DoubleDistribution abs_used_pivot;
727  DoubleDistribution abs_tested_pivot;
728  DoubleDistribution abs_skipped_pivot;
729  RatioDistribution direction_density;
730  IntegerDistribution leaving_choices;
731  IntegerDistribution num_perfect_ties;
732  };
733  mutable RatioTestStats ratio_test_stats_;
734 
735  // Placeholder for all the function timing stats.
736  // Mutable because we time const functions like ChooseLeavingVariableRow().
737  mutable StatsGroup function_stats_;
738 
739  // Proto holding all the parameters of this algorithm.
740  //
741  // Note that parameters_ may actually change during a solve as the solver may
742  // dynamically adapt some values. This is why we store the argument of the
743  // last SetParameters() call in initial_parameters_, so that the next Solve()
744  // can reset the parameters correctly.
745  GlopParameters parameters_;
746  GlopParameters initial_parameters_;
747 
748  // LuFactorization used to test whether a pivot would make the new basis
749  // unfactorizable.
750  LuFactorization test_lu_;
751 
752  // Number of degenerate iterations made just before the current iteration.
753  int num_consecutive_degenerate_iterations_;
754 
755  // Indicates whether we are in the feasibility phase (phase I) or not.
756  bool feasibility_phase_;
757 
758  // Indicates whether simplex ended due to the objective limit being reached.
759  // Note that it is not enough to compare the final objective value with the
760  // limit, due to numerical issues: a limit that is reached within tolerance
761  // on the internal objective may no longer be reached once the objective
762  // scaling and offset are taken into account.
763  bool objective_limit_reached_;
764 
765  // Temporary SparseColumn used by ChooseLeavingVariableRow().
766  SparseColumn leaving_candidates_;
767 
768  // Temporary vector used to hold the best leaving-column candidates that
769  // are tied under the current choosing criterion. We only store tied
770  // candidates #2, #3, ..., because the first tied candidate is remembered
771  // anyway.
772  std::vector<RowIndex> equivalent_leaving_choices_;
773 
774  // This is used by Polish().
775  DenseRow integrality_scale_;
776 
777  DISALLOW_COPY_AND_ASSIGN(RevisedSimplex);
778 };
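The parameters_/initial_parameters_ pair above implements a common reset pattern: the solver may adapt its working parameters during a solve, so the original configuration is kept to restore before the next call. A minimal self-contained sketch of that pattern, assuming a toy `Params` type (all names here are illustrative, not the GLOP API):

```cpp
#include <cassert>

// Toy stand-in for GlopParameters; illustrative only.
struct Params {
  double dual_tolerance = 1e-7;
};

class ToySolver {
 public:
  // Mirrors the SetParameters() contract described above: remember the
  // caller's parameters so every Solve() starts from the same configuration.
  void SetParameters(const Params& p) {
    parameters_ = p;
    initial_parameters_ = p;
  }

  void Solve() {
    // Undo any dynamic adaptation left over from a previous solve.
    parameters_ = initial_parameters_;
    // During the solve, the working copy may be adapted, e.g. a tolerance
    // loosened to work around numerical trouble.
    parameters_.dual_tolerance *= 10.0;
  }

  const Params& parameters() const { return parameters_; }
  const Params& initial_parameters() const { return initial_parameters_; }

 private:
  Params parameters_;          // Working copy; may change during a solve.
  Params initial_parameters_;  // Argument of the last SetParameters() call.
};
```

Without the second member, a second Solve() would start from the adapted values instead of the ones the caller asked for.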
779 
780 // Hides the details of the dictionary matrix implementation. In the future,
781 // GLOP will support generating the dictionary one row at a time without having
782 // to store the whole matrix in memory.
783 class RevisedSimplexDictionary {
784  public:
785  typedef RowMajorSparseMatrix::const_iterator ConstIterator;
786 
787  // RevisedSimplex cannot be passed const because we have to call the
788  // non-const method ComputeDictionary().
789  // TODO(user): Overload this to take RevisedSimplex* alone when the
790  // caller would normally pass a nullptr for col_scales so this and
791  // ComputeDictionary can take a const& argument.
792  RevisedSimplexDictionary(const DenseRow* col_scales,
793  RevisedSimplex* revised_simplex)
794  : dictionary_(
795  ABSL_DIE_IF_NULL(revised_simplex)->ComputeDictionary(col_scales)),
796  basis_vars_(ABSL_DIE_IF_NULL(revised_simplex)->GetBasisVector()) {}
797 
798  ConstIterator begin() const { return dictionary_.begin(); }
799  ConstIterator end() const { return dictionary_.end(); }
800 
801  size_t NumRows() const { return dictionary_.size(); }
802 
803  // TODO(user): This function is a better fit for the future custom iterator.
804  ColIndex GetBasicColumnForRow(RowIndex r) const { return basis_vars_[r]; }
805  SparseRow GetRow(RowIndex r) const { return dictionary_[r]; }
806 
807  private:
808  const RowMajorSparseMatrix dictionary_;
809  const RowToColMapping basis_vars_;
810  DISALLOW_COPY_AND_ASSIGN(RevisedSimplexDictionary);
811 };
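Each dictionary row expresses a basic variable in terms of the nonbasic ones, x_B = B⁻¹b − B⁻¹N x_N, so every entry is a component of B⁻¹a_j for some nonbasic column a_j. A hedged 2x2 illustration of that algebra (not the GLOP implementation, which solves B d = a_j through its LU factorization rather than forming an explicit inverse):

```cpp
#include <array>

// Illustrative only: compute one dictionary column B^{-1} a_j for a 2x2
// basis B using the closed-form inverse.
std::array<double, 2> DictionaryColumn(
    const std::array<std::array<double, 2>, 2>& B,
    const std::array<double, 2>& a_j) {
  // For B = [[a, b], [c, d]], B^{-1} = (1/det) * [[d, -b], [-c, a]].
  const double det = B[0][0] * B[1][1] - B[0][1] * B[1][0];
  return {(B[1][1] * a_j[0] - B[0][1] * a_j[1]) / det,
          (B[0][0] * a_j[1] - B[1][0] * a_j[0]) / det};
}
```

For example, with basis B = [[2, 0], [0, 4]] and column a_j = (6, 8)ᵀ, this yields the dictionary column (3, 2)ᵀ.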
812 
813 // TODO(user): When a row-by-row generation of the dictionary is supported,
814 // implement DictionaryIterator class that would call it inside operator*().
815 
816 } // namespace glop
817 } // namespace operations_research
818 
819 #endif // OR_TOOLS_GLOP_REVISED_SIMPLEX_H_