Colloquium: Lagrangian Relaxation in Approximation Algorithms
4:35 p.m. Neill Hall 5W
Abstract: If we want to optimize an objective function subject to some nasty constraints, we can sometimes obtain useful information by allowing ourselves to violate the constraints, while penalizing violations in the objective function. This technique, called Lagrangian relaxation, has been a powerful tool in finding exact solutions for combinatorial optimization problems. We have used Lagrangian relaxation to construct improved polynomial-time approximation algorithms for some NP-hard problems.

In this talk we illustrate this technique, focusing on our results for the minimum latency problem. This problem models a delivery man who wishes to sequence his day's deliveries so that the average waiting time of his customers is minimized. Our deterministic algorithm returns a solution guaranteed to be within a factor of 7.18 of the optimum, and runs in time O(n^3 log n). A randomized version gives the same approximation guarantee (in expectation), and runs in time O(n^2 log^2 n). This improves over previous algorithms that could obtain a 10.77-approximation in time O(n^4 log n) or a (7.18+epsilon)-approximation in time n^O(1/epsilon), for any epsilon > 0.

This is joint work with Asaf Levin and David Williamson.
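To give a flavor of the idea of moving a constraint into the objective with a penalty, here is a minimal toy sketch (not from the talk, and unrelated to the minimum latency problem): we minimize f(x) = x^2 subject to x >= 1, relax the constraint into the penalized objective L(x, lam) = x^2 + lam*(1 - x), and maximize the resulting dual function over the multiplier lam by a simple grid search.

```python
# Toy illustration of Lagrangian relaxation (illustrative only, not the talk's method):
#   minimize f(x) = x^2  subject to  x >= 1.
# Relaxing the constraint with multiplier lam >= 0 gives
#   L(x, lam) = x^2 + lam * (1 - x),
# whose unconstrained minimizer over x is x = lam / 2. Plugging that in
# yields the dual function q(lam) = lam - lam^2 / 4, a lower bound on the
# constrained optimum for every lam >= 0.

def dual(lam):
    x = lam / 2.0                    # unconstrained argmin of L(., lam)
    return x * x + lam * (1.0 - x)   # q(lam) = min_x L(x, lam)

# Maximize the dual bound over a grid of multipliers in [0, 4].
best_lam = max((l / 100.0 for l in range(401)), key=dual)
print(best_lam, dual(best_lam))      # lam = 2 attains q = 1, matching f(1) = 1
```

Here the best dual bound equals the true constrained optimum (no duality gap); for the NP-hard problems in the talk, such bounds are instead combined with rounding to obtain approximation guarantees.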