For the current term (Fall 2006), this seminar series is organized by Amit Chakrabarti.
To non-Dartmouth theoreticians: If you are going to be visiting any nearby institutions (in Boston, New York, or Amherst, say) and would like to give a talk as part of the Dartmouth Theory Seminar series, please contact Amit.
Schedules from past terms: [ Fall 2005 | Spring+Summer 2005 | Winter 2005 | Fall 2004 | Spring 2004 ]
Oct 4 (Wed) | Josh Buresh-Oppenheim, Simon Fraser University | Towards Models for Backtracking and Dynamic Programming | Room: Carpenter 013 (note unusual location!) | Time: 4pm - 5pm
Oct 30 (Mon) | Sidharth Jaggi, MIT | Fighting Byzantine Adversaries in Networks: Network Error-Correcting Codes | Room: Carson L02 | Time: 4pm - 5pm
Nov 27 (Mon) | Nick Harvey, MIT | The Conjugate Gradient Method with Automatic Preconditioning | Room: Carson L02 | Time: 4pm - 5pm
Dec 4 (Mon) | Leslie Valiant, Harvard University | Biology as Computation | Room: Carson L02 | Time: 4pm - 5pm
Josh Buresh-Oppenheim, Simon Fraser University
Towards Models for Backtracking and Dynamic Programming
Since most algorithms that people use can intuitively be classified into broad paradigms such as greedy, dynamic programming, linear and semidefinite programming, local search, etc., it is natural to ask what problems can be computed within these paradigms. This question can be seen as an alternative to asking what problems can be computed by, say, all poly-time algorithms, in that the restriction on the algorithms is conceptual rather than complexity-theoretic. Of course, to ask a question about an algorithmic paradigm, you first have to formally define the paradigm. We offer one very natural model, pBT, which captures many algorithms generally considered to be dynamic programming or backtracking. This model is an extension of the Priority Algorithm model of Borodin, Nielsen and Rackoff, which captures greedy algorithms. We demonstrate many upper and lower bounds in this model for problems such as interval scheduling, knapsack and SAT. We then define a stronger model, pBP, and show that it seems to capture some aspects of dynamic programming that pBT does not, but that it still cannot solve certain tractable problems that do not intuitively have dynamic programming algorithms.
This is joint work with Mikhail Alekhnovich, Allan Borodin, Sashka Davis, Russell Impagliazzo, Avner Magen and Toni Pitassi.
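As a concrete point of reference for the kind of algorithm the pBT model is meant to capture, here is the textbook dynamic program for the 0/1 knapsack problem, one of the problems mentioned above. This is an illustrative Python sketch only, not the pBT formalism from the paper.

    # Textbook 0/1 knapsack dynamic program (integer weights): the kind of
    # algorithm a formal model of dynamic programming should account for.
    def knapsack(items, capacity):
        """items: list of (weight, value) pairs; returns max achievable value."""
        best = [0] * (capacity + 1)   # best[c] = best value within weight budget c
        for weight, value in items:
            # Scan capacities downward so each item is used at most once.
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + value)
        return best[capacity]

    print(knapsack([(3, 4), (4, 5), (2, 3)], capacity=6))   # -> 8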
Sidharth Jaggi, MIT
Fighting Byzantine Adversaries in Networks: Network Error-Correcting Codes
It was shown by Ahlswede et al. that in general network coding is required to attain the multicast capacity. But since network coding involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This talk introduces the first distributed polynomial-time rate-optimal network error-correcting codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam z links, our first algorithm achieves a rate of (C - 2z), where C is the network capacity. In contrast, when the adversary has limited snooping capabilities, we provide algorithms that achieve the higher rate of (C - z). Our algorithms attain the optimal rate given the strength of the adversary. They are information-theoretically secure. They can be designed and operated in a distributed manner, assume no knowledge of the topology, and can be designed and implemented in polynomial time. Furthermore, only the source and destination need to be modified; non-malicious nodes inside the network are oblivious to the presence of adversaries and implement a classical distributed network code. Finally, our algorithms work over wired and wireless networks.
This is joint work with Michael Langberg, Sachin Katti, Tracey Ho, Dina Katabi, and Muriel Medard.
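To make the motivation concrete: under linear network coding, every packet a destination receives is a combination of source packets, so a single forged packet can corrupt several decoded packets at once. The following toy Python sketch (XOR coding over four packets, with one assumed corrupted combination) illustrates the contamination effect; it is not the paper's construction.

    # Toy demo: one Byzantine packet contaminates multiple decoded packets.
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    source = [bytes([i] * 8) for i in range(4)]   # four 8-byte source packets

    # Intermediate nodes forward XOR combinations of the sources.
    coded = [xor(source[0], source[1]),
             xor(source[1], source[2]),
             xor(source[2], source[3]),
             source[3]]
    coded[1] = xor(coded[1], bytes([0xAA] * 8))   # adversary jams one packet

    # Destination decodes by back-substitution.
    p3 = coded[3]
    p2 = xor(coded[2], p3)
    p1 = xor(coded[1], p2)    # the corruption lands here ...
    p0 = xor(coded[0], p1)    # ... and propagates here.
    print([p == s for p, s in zip([p0, p1, p2, p3], source)])
    # -> [False, False, True, True]: half the data is lost to one bad packet.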
Nick Harvey, MIT
The Conjugate Gradient Method with Automatic Preconditioning
Solving a linear system is one of the most fundamental computational problems. Unfortunately, the basic algorithm that most of us learn (Gaussian elimination) is often useless in practice due to slow running time or stability issues. Instead, it is more common to use iterative solvers, the simplest ones being steepest descent and conjugate gradient. The snag with iterative solvers is that their performance often depends on the "condition number" of the given system, so it is common to modify the system by applying a "preconditioner" matrix which reduces the condition number. This raises a key question: given a linear system, how can we find a good preconditioner?
In this work, we develop a variant of the conjugate gradient method which *automatically* constructs good preconditioners. The general idea is very simple. We run the conjugate gradient method until it "gets stuck". The fact that it is stuck then implies a way to modify the preconditioner so that the conjugate gradient steps will be "less stuck" in the future.
This talk will be self-contained -- the audience only needs to know basic linear algebra, and how to interpret pictures of algorithms that are stuck.
Joint work with John Dunagan, Microsoft Research.
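For reference, here is the standard (unpreconditioned) conjugate gradient iteration for a symmetric positive definite system, as a minimal numpy sketch. The talk's contribution, building a preconditioner automatically when this loop stalls, is not shown here.

    # Textbook conjugate gradient for symmetric positive definite A,
    # without any preconditioning.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                  # residual
        p = r.copy()                   # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)      # exact step length along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:  # residual small enough: done
                break
            p = r + (rs_new / rs) * p  # next A-conjugate direction
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))    # approx [0.0909, 0.6364]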
Leslie Valiant, Harvard University
Biology as Computation
We argue that computational models have an essential role in uncovering the principles behind a variety of biological phenomena. In particular, we consider recent results relating to the following three questions: How can brains, given their known resource constraints, such as the sparsity of connections and slow elements, do any significant information processing at all? How can evolution, in only a few billion years, evolve such complex mechanisms as it apparently has? How can cognitive systems manipulate large amounts of uncertain knowledge and get usefully reliable results? We show that each of these problems can be formulated as a quantitative question for a computational model, and argue that solutions to these formulations provide some understanding of these biological phenomena.
This talk will be accessible to graduate and undergraduate students.