An IGP Weight Optimizer with Repetita

In this tutorial, we are going to use Repetita’s tools to solve a network optimization problem. The FlowSimulator class can be used to check the objectives returned by solvers, but it can actually also be used to build a reasonable optimizer.

The Problem: IGP Weight Optimization

The problem we will approach is the following: given a network topology, a traffic matrix on this topology, and IGP weights, Equal Cost MultiPath (ECMP) routing will forward traffic in a deterministic way, incurring some flow on the (directed) edges of the network.

We are given a topology and a traffic matrix, and we can change the IGP weights to influence how the traffic is routed. Our goal is to find weights such that the maximum utilization over the edges, i.e. \(\max_{\text{edge}} \frac{\text{flow}(\text{edge})}{\text{capacity}(\text{edge})}\), is as small as possible.

This problem is a variant of that approached in Fortz & Thorup - Internet traffic engineering by optimizing OSPF weights.
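In code, this objective is a single pass over the edges. Here is a minimal illustrative sketch, with hypothetical flow and capacity arrays; later on, the flow simulator will compute this value for us:

// Hypothetical illustration of the objective: flow[e] and capacity[e] are the
// load and the capacity of directed edge e; we want to minimize this value.
static double maxUtilization(double[] flow, double[] capacity) {
  double worst = 0.0;
  for (int edge = 0; edge < flow.length; edge++) {
    worst = Math.max(worst, flow[edge] / capacity[edge]);
  }
  return worst;
}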

A local search approach: hill descent with restarts

The approach we propose to build here is rather simple: we will use hill climbing (a descent, since we minimize) and restart the search whenever it stagnates.

Starting from some solution (a weight assignment), we will try to improve it by changing the weight of some edge and evaluating the resulting solution. In a descent iteration, we try all possible moves from the current solution, select the best one, and apply it to the current solution. As long as doing so improves the solution, we reiterate, until time runs out.

private double descent(long stopTime) {
  // evaluate the current solution (with the flow simulator, as shown below)
  flowSimulator.computeFlows();
  double bestScore = flowSimulator.maxUtilization();

  boolean improved = true;
  while (improved && System.nanoTime() < stopTime) {
    improved = false;

    int bestEdge = 0;
    int bestWeight = 0;

    // try all possible moves
    for (int edge = 0; edge < nEdges; edge++) {
      for (int weight = 1; weight <= maxWeight; weight++) {
        int oldWeight = topology.edgeWeight[edge];

        // make move from current solution
        topology.edgeWeight[edge] = weight;
        // evaluate the modified solution
        flowSimulator.computeFlows();
        double newScore = flowSimulator.maxUtilization();

        // when new solution is better, remember it!
        if (newScore < bestScore) {
          improved = true;
          bestScore = newScore;
          bestEdge = edge;
          bestWeight = weight;
        }

        // put assignment back to current solution
        topology.edgeWeight[edge] = oldWeight;
      }
    }

    // if improvement was found, move to it
    if (improved) {
      topology.edgeWeight[bestEdge] = bestWeight;
    }
  }

  return bestScore;
}

Whenever a hill descent iteration cannot improve the solution, the descent is finished. Many meta-heuristics have been developed to continue searching for an even better solution, but we will choose the simplest one: restart from a random solution and perform another descent, until time runs out.

public void optimizeWeights(long timeMillis) {
  long startTime = System.nanoTime();
  long stopTime = startTime + 1000000 * timeMillis;

  int[] savedWeights = topology.edgeWeight.clone();
  Random random = new Random();

  double bestScore = Double.MAX_VALUE;

  while (System.nanoTime() < stopTime) {
    double newScore = descent(stopTime);   // do one descent

    // when best score ever seen, remember weight assignment
    if (newScore < bestScore) {
      System.arraycopy(topology.edgeWeight, 0, savedWeights, 0, nEdges);
      bestScore = newScore;
    }

    // randomize edge weights for next descent
    for (int edge = 0; edge < nEdges; edge++) {
      topology.edgeWeight[edge] = 1 + random.nextInt(maxWeight);  // uniform in [1, maxWeight]
    }
  }

  // copy best solution to current weights
  System.arraycopy(savedWeights, 0, topology.edgeWeight, 0, nEdges);
}

Using Repetita to build an algorithm

The resulting algorithm is contained in class be.ac.ucl.ingi.repetita.wo.solver.hillClimbingLS.HillClimbing.

The most difficult part of the algorithm is not the local search itself, but rather how to evaluate a solution. The approach of [Fortz & Thorup] is to do so incrementally, but we show that we can get reasonable performance by simply recomputing the flows from scratch.

To evaluate a solution, we first pass the Topology instance and the traffic matrix to an ECMPFlowSimulator, which uses them internally.

flowSimulator = new ECMPFlowSimulator(topology, traffic);

When we change edge weights, we only have to call

flowSimulator.computeFlows();
double newScore = flowSimulator.maxUtilization();

This will simulate the flow incurred by new weights, and evaluate the new maximum utilization. Internally, ECMPFlowSimulator uses a ShortestPaths instance to recompute shortest paths, and computes the flow according to the shortest path DAGs.
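To make the mechanics concrete, here is an illustrative sketch of how the traffic toward a single destination can be spread over its shortest-path DAG: nodes are processed from farthest to closest to the destination, and each node splits the traffic it carries equally among its outgoing DAG edges. The representation and names below are hypothetical, not Repetita’s internal code:

// Illustrative only: hypothetical edge-list representation, not Repetita's code.
// dist[v] is the shortest distance from node v to dest, demand[v] the traffic
// from v to dest; the result is the ECMP flow induced on every directed edge.
static double[] ecmpFlowToDestination(int nNodes, int[] edgeSrc, int[] edgeDest,
                                      int[] weight, int[] dist,
                                      double[] demand, int dest) {
  int nEdges = edgeSrc.length;
  double[] edgeFlow = new double[nEdges];
  double[] nodeLoad = demand.clone();   // traffic that still has to leave each node

  // process nodes from farthest to closest, so that all traffic entering a node
  // is known before that node splits it among its DAG out-edges
  Integer[] order = new Integer[nNodes];
  for (int v = 0; v < nNodes; v++) order[v] = v;
  java.util.Arrays.sort(order, (a, b) -> Integer.compare(dist[b], dist[a]));

  for (int v : order) {
    if (v == dest || nodeLoad[v] == 0.0) continue;

    // out-edges of v in the shortest-path DAG: (v, u) with dist[v] = w(v,u) + dist[u]
    java.util.List<Integer> dagEdges = new java.util.ArrayList<>();
    for (int e = 0; e < nEdges; e++)
      if (edgeSrc[e] == v && dist[v] == weight[e] + dist[edgeDest[e]]) dagEdges.add(e);
    if (dagEdges.isEmpty()) continue;   // v cannot reach dest

    double share = nodeLoad[v] / dagEdges.size();   // equal-cost splitting
    for (int e : dagEdges) {
      edgeFlow[e] += share;
      nodeLoad[edgeDest[e]] += share;
    }
  }
  return edgeFlow;
}

Summing these per-destination flows over all destinations gives the total flow on each edge, from which the maximum utilization follows directly.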

Benchmarking the algorithm

While we hope that the algorithm will return a good solution, we have no guarantee of this: indeed, the problem is NP-hard.

Typically, state-of-the-art algorithms apply a combination of techniques that will hopefully return a solution of very good quality; sometimes they may even be able to prove the optimality of their solution.

We want to assess the performance of our algorithm experimentally and compare it to other algorithms. To do so, we would like to run our algorithm and others on many instances, preferably in parallel to save time on the experiments, retrieve the results, and present them in some format. We would also like to check that each solution is valid, i.e. that the weights are positive and that the objective value is correct w.r.t. the weights.

In order to do that for IGP weight optimization, Repetita provides a straightforward framework. The algorithm is embedded in a class that implements the WOSolver interface, which should simply solve its instance when told to, and provide some basic information:

public interface WOSolver {
  void solve(long milliseconds);  // run the solver on its instance
  long solveTime();               // solving time of last solve(t)
  double getObjective();          // objective of last solve(t)
  String name();                  // solver's name
}

Making our hill climbing with restarts algorithm implement a WOSolver is easy:

@Override public void solve(long timeMillis) {
  long startTime = System.nanoTime();
  optimizeWeights(timeMillis);
  solveTimeValue = System.nanoTime() - startTime;
}

private long solveTimeValue;

@Override public long solveTime() {
  return solveTimeValue;
}

private double objective;

@Override public double getObjective() {
  return objective;
}

Of course, solveTimeValue and objective should be filled adequately at each solve(t).
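For instance, the objective can be filled by evaluating the final weight assignment once more with the flow simulator at the end of solve(). A sketch, assuming the solver keeps its ECMPFlowSimulator in a flowSimulator field as in our hill-climbing class:

  // after the best weights have been restored into topology.edgeWeight
  flowSimulator.computeFlows();
  objective = flowSimulator.maxUtilization();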

To run the benchmark itself, class Benchmarker takes arguments from the command line, such as a directory of instances, a set of solvers, and a time limit; it then creates a WOSolver for each instance-solver pair, runs them in parallel using OptimizeMaxUtilization, collects the objective values, and writes the result grid out in .csv format.

The job of OptimizeMaxUtilization is to check the validity of the weights and of the objective value, which it does using ECMPFlowSimulator, as we just saw. The checker’s value is the one reported to Benchmarker; the solver’s value should be close to the checker’s for the result to be considered valid.
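Conceptually, such a check can be expressed with the same two calls we used for evaluation. The following sketch is only an illustration (the helper name and tolerance parameter are hypothetical, not the actual code of OptimizeMaxUtilization):

// Illustrative sketch of the validity check: weights must be strictly positive
// and the recomputed maximum utilization must match the solver's objective.
boolean isValidSolution(Topology topology, ECMPFlowSimulator flowSimulator,
                        WOSolver solver, double tolerance) {
  for (int edge = 0; edge < topology.edgeWeight.length; edge++) {
    if (topology.edgeWeight[edge] <= 0) return false;
  }
  flowSimulator.computeFlows();
  double checkedObjective = flowSimulator.maxUtilization();
  return Math.abs(checkedObjective - solver.getObjective()) <= tolerance;
}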