Greedy Restart Schedules: A Simple Yet Powerful Baseline for Dynamic Algorithm Selection in Optimization

How a Simple Greedy Approach is Closing the Gap Between Specialized and General Optimization Algorithms

In the world of numerical black-box optimization, there's an inherent tension between specialized algorithms that excel at specific problems and general-purpose solvers that perform decently across a wide range of challenges. This tradeoff, famously encapsulated by the "no free lunch" theorems, has led researchers to explore meta-algorithmic approaches that try to get the best of both worlds.

A new paper by Lennart Schäpermeier from TU Dresden introduces a surprisingly effective solution to this problem: Greedy Restart Schedules (GRS). This simple yet powerful approach creates static schedules for alternating between different optimization algorithms, significantly outperforming individual solvers while avoiding the complexity of more sophisticated dynamic algorithm selection methods.

The Optimization Dilemma

When facing an unknown optimization problem, practitioners typically have two suboptimal choices:

  1. Specialized algorithms that excel on certain problem types but fail on others
  2. General-purpose solvers that perform decently across many problems but can't match specialized performance

Previous attempts to bridge this gap have included:

  • Automated Algorithm Selection (AAS): Uses machine learning to select algorithms based on problem features
  • Dynamic Algorithm Selection (DAS): Switches algorithms mid-run based on optimization progress
  • Hand-crafted hybrid heuristics: Combines algorithms through expert-designed strategies

While these approaches show promise, they often come with significant drawbacks: complex training procedures, limited generalization, or a need for domain expertise to implement effectively.

The Greedy Restart Schedule Approach

Schäpermeier's GRS method takes a different tack. Rather than trying to predict which algorithm will work best, it creates a static schedule that:

  1. Starts with the algorithm that solves the most problems quickly
  2. Then selects algorithms that perform best on the remaining unsolved problems
  3. Repeats this process to create a sequence of algorithm restarts

The key insight is that by focusing on the distribution of unsolved problems at each step, the schedule naturally adapts to problem difficulty without requiring complex modeling or real-time decision-making.
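The greedy construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the input format (a per-algorithm list of solve times, with `inf` for failed runs) and the tie-breaking rule are assumptions for the example.

```python
import math

def greedy_restart_schedule(runtimes, max_steps=5):
    """Greedily build a static restart schedule.

    runtimes[algo][p]: evaluations algorithm `algo` needs to solve
    problem p in one run (math.inf if that run fails to solve it).
    This input format is hypothetical, chosen for illustration.
    """
    n_problems = len(next(iter(runtimes.values())))
    unsolved = set(range(n_problems))
    schedule = []
    for _ in range(max_steps):
        if not unsolved:
            break

        # Score each algorithm by how many of the *remaining* problems
        # it solves, breaking ties by lower total runtime on those problems.
        def score(algo):
            solved = [p for p in unsolved if runtimes[algo][p] < math.inf]
            total = sum(runtimes[algo][p] for p in solved)
            return (len(solved), -total)

        best = max(runtimes, key=score)
        schedule.append(best)
        unsolved -= {p for p in unsolved if runtimes[best][p] < math.inf}
    return schedule
```

Because each step conditions only on the set of still-unsolved training problems, later schedule entries automatically specialize toward the harder instances, which is exactly the adaptation-without-modeling effect described above.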

Surprising Performance Gains

When tested on the standard BBOB benchmark suite (24 functions across 2-10 dimensions), the GRS approach delivered remarkable results:

  • Closed 95% of the gap between single best solver and virtual best solver (oracle) on relative expected running time
  • Outperformed state-of-the-art hybrid heuristics like HCMA and BIPOP-CMA-ES in lower dimensions
  • Maintained strong performance even when evaluated on problems not seen during training
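The "95% of the gap" figure follows the standard way such results are reported in algorithm selection: how far a method moves from the single best solver (SBS) toward the virtual best solver (VBS, the per-problem oracle). A small sketch of that metric, with hypothetical ERT values rather than numbers from the paper:

```python
def gap_closed(ert_sbs, ert_vbs, ert_method):
    """Fraction of the SBS-to-VBS performance gap closed by a method.

    Arguments are (relative) expected running times; the names and
    example values below are illustrative, not taken from the paper.
    """
    return (ert_sbs - ert_method) / (ert_sbs - ert_vbs)

# A method with relative ERT 1.05 closes 95% of the gap between
# an SBS at 2.0 and the VBS at 1.0.
fraction = gap_closed(ert_sbs=2.0, ert_vbs=1.0, ert_method=1.05)
```

A value of 1.0 would mean oracle-level performance; 0.0 would mean no improvement over always running the single best solver.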

The schedules themselves reveal interesting patterns: they often start with fast local optimizers before transitioning to more robust (but slower) evolutionary approaches as the remaining problems become harder to solve.

Why This Matters for Business

For companies relying on optimization across diverse problems - from supply chain logistics to hyperparameter tuning - GRS offers:

  1. Simplicity: No complex machine learning models to train or maintain
  2. Reliability: Consistent performance across problem types
  3. Performance: Near-oracle results without the need for problem-specific tuning
  4. Flexibility: Easy to incorporate new algorithms as they're developed

As Schäpermeier notes, this approach serves as both a practical tool and a valuable baseline for more complex dynamic algorithm selection methods. Its strong performance suggests that sometimes, simple greedy approaches can outperform more sophisticated techniques.

The Future of Algorithm Scheduling

While already impressive, the GRS method opens several avenues for future research:

  • Incorporating algorithm runtime prediction to optimize budget allocation
  • Extending to other optimization domains like combinatorial problems
  • Developing more sophisticated scheduling techniques beyond the greedy approach

For now, businesses looking to improve their optimization pipelines would do well to consider this simple yet effective approach. As the paper demonstrates, sometimes the most elegant solutions come from rethinking basic assumptions rather than building increasingly complex systems.

Read the full paper for technical details and experimental results: