Oct 4 (Monday) | Dina Goldin, University of Connecticut | Interaction: Conjectures, Results, and Myths | Room: Moore B03 | Time: 4pm - 5pm
Oct 21 (Thursday) | Jon Feldman (D'97), Columbia University | Proving Strong Error Bounds using a Dual Witness | Room: Carson L01 | Time: 4pm - 5pm
Nov 15 (Monday) | Peter Winkler, Dartmouth College | Mixing and Shuffling | Room: Moore B03 | Time: 4pm - 5pm
Dina Goldin, University of Connecticut
Interaction: Conjectures, Results, and Myths
Computer technology has shifted from mainframes to locally networked workstations and now to mobile internet-based computing devices. Software engineering has evolved from procedure-oriented to object-oriented and component-based systems. AI has refocused from logical reasoning and search algorithms to agent-oriented and distributed behaviors. These parallel changes exemplify a conceptual paradigm shift from algorithms to interaction. Interactive computational processes allow input and output to take place during the computation, in contrast to the traditional "algorithmic" models of computation, which transform a predefined input into output.
Though interaction is an intuitively obvious idea, there has been no automata-theoretic formalization of interactive computation until now. We introduce Persistent Turing Machines (PTMs), which serve as a model of sequential interactive computation and allow us to explore its expressiveness. PTMs are multitape Turing Machines (TMs) with a persistent internal tape and stream-based semantics. We formulate observation-based notions of system equivalence and computational expressiveness, and apply them to demonstrate that PTMs are more expressive than TMs, thus proving Wegner's conjecture that "interaction is more powerful than algorithms".
We end on a light note by considering the historic reasons for the "Turing Thesis Myth", which states that TMs model all computation. This claim, which incorrectly reinterprets the Turing Thesis, is fundamental to the mainstream theory of computation. We show how this myth can be traced to the establishment of computer science as a separate discipline in the 1960s.
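One way to make the stream-based semantics above concrete is to view a PTM macrostep as a function from (persistent worktape contents, input token) to (output token, updated worktape), folded over an input stream. The Python sketch below is only an illustration of that view, using the answering-machine example common in this line of work; the interface and names are illustrative, not part of the formal PTM definition.

from typing import Callable, Iterable, Iterator, Tuple

# A PTM macrostep: given the persistent worktape contents and one input
# token, produce one output token and the updated worktape contents.
Macrostep = Callable[[str, str], Tuple[str, str]]

def run_ptm(step: Macrostep, inputs: Iterable[str], tape: str = "") -> Iterator[str]:
    """Stream semantics: fold the macrostep over the input stream.
    The worktape persists between macrosteps, so outputs can depend on the
    whole interaction history, not just on the current input."""
    for token in inputs:
        out, tape = step(tape, token)
        yield out

# Toy macrostep: an "answering machine" that records messages on the
# persistent tape and can play them back or erase them.
def answering_machine(tape: str, token: str) -> Tuple[str, str]:
    if token.startswith("record "):
        return "ok", tape + token[len("record "):] + "\n"
    if token == "playback":
        return (tape or "(empty)"), tape
    if token == "erase":
        return "ok", ""
    return "unknown command", tape

if __name__ == "__main__":
    session = ["record hello", "record world", "playback", "erase", "playback"]
    for reply in run_ptm(answering_machine, session):
        print(repr(reply))

The two "playback" inputs produce different outputs because the persistent tape carries the interaction history, which is the kind of behavior a history-free, input-to-output function cannot exhibit.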
Jon Feldman, Columbia University
Proving Strong Error Bounds using a Dual Witness
We show that, using low-density parity-check (LDPC) codes with sufficient expansion, the "Linear Programming (LP) decoder" corrects a constant fraction of errors in an adversarial channel. Using Spielman's more general "expander codes," we also show that LP decoding attains the capacity (optimal rate) of a wide range of probabilistic communication channels.

These results are the first of their kind for LP decoding, and more generally for decoders with the "ML certificate" property, where output codewords come with a proof of optimality. Furthermore, no such results have been proved for the conventional method for decoding LDPC codes: the belief-propagation algorithm.

We prove our bounds using a new "dual witness" technique. Under the appropriate conditions for the error bounds we wish to prove (adversarial or probabilistic), we demonstrate a zero-valued dual solution to the decoding LP. The existence of such a solution implies that the LP succeeds in recovering the original codeword.

[The content of this talk is joint work with Tal Malkin, Cliff Stein, Rocco Servedio, and Martin Wainwright, and has appeared at ISIT '04 and will appear at SODA '05.]
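For readers unfamiliar with LP decoding, the Python sketch below illustrates the relaxation being analyzed: minimize a log-likelihood cost over the "fundamental polytope" cut out by the forbidden-set inequalities of each parity check, with an ML certificate whenever the optimum is integral. It is a brute-force toy (exponential in the check degree) run on a (7,4) Hamming code and assumes scipy's linprog; it does not construct the dual witness or use expansion, which are the subject of the talk.

import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """LP relaxation of ML decoding over the fundamental polytope.
    H   : 0/1 parity-check matrix, one row per check.
    llr : per-bit cost of deciding that bit to be 1 (log-likelihood ratio)."""
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbhd = list(np.flatnonzero(H[j]))
        # Forbidden-set inequalities of check j: for every odd-sized subset S
        # of its neighborhood, sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
        for size in range(1, len(nbhd) + 1, 2):
            for S in itertools.combinations(nbhd, size):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nbhd if i not in S]] = -1.0
                A_ub.append(row)
                b_ub.append(size - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    x = res.x
    integral = bool(np.all(np.minimum(x, 1.0 - x) < 1e-6))
    return np.round(x).astype(int), integral   # integral optimum = ML certificate

if __name__ == "__main__":
    # (7,4) Hamming code; all-zeros codeword sent, one bit flipped by the channel.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    received = np.array([0, 0, 1, 0, 0, 0, 0])
    llr = np.where(received == 0, 1.0, -1.0)   # BSC costs, up to scaling
    decoded, certified = lp_decode(H, llr)
    print(decoded, "ML certificate:", certified)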
Peter Winkler, Dartmouth College
Mixing and Shuffling
Mixing in Markov chains is "big business" these days in the theory of computing, on account of its importance in approximate sampling and estimation. We will take a look at how some modern techniques (joint work with Laszlo Lovasz) apply to an ancient mixing problem: how many times should you shuffle a deck of cards? Three different shuffling models lead to two gratifying answers and one stultifying open problem, whose answer might help us understand a bit more about the connection between mixing and computational complexity.
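The abstract does not spell out the three shuffling models, but the classical benchmark for "how many shuffles?" is the Gilbert-Shannon-Reeds riffle shuffle, for which Bayer and Diaconis gave an exact formula for the distance from uniform after k shuffles. The Python sketch below is background only, not material from the talk: it evaluates that formula for a 52-card deck, where the total-variation distance stays near 1 through four shuffles and falls sharply around seven.

from fractions import Fraction
from math import comb, factorial

def eulerian(n):
    """eul[k] = number of permutations of n cards with exactly k descents."""
    eul = [1] + [0] * n          # row for n = 1
    for m in range(2, n + 1):
        new = [0] * (n + 1)
        for k in range(m):
            new[k] = (k + 1) * eul[k] + (m - k) * (eul[k - 1] if k else 0)
        eul = new
    return eul

def riffle_tv(n, k):
    """Exact total-variation distance from uniform after k GSR riffle shuffles
    of an n-card deck (Bayer-Diaconis): a permutation with r rising sequences
    has probability C(2^k + n - r, n) / 2^(k*n)."""
    eul = eulerian(n)
    uniform = Fraction(1, factorial(n))
    scale = Fraction(1, 2 ** (k * n))
    tv = Fraction(0)
    for r in range(1, n + 1):
        prob = comb(2 ** k + n - r, n) * scale
        # eul[r-1] permutations have exactly r rising sequences
        tv += eul[r - 1] * abs(prob - uniform)
    return tv / 2

if __name__ == "__main__":
    for k in range(1, 11):
        print(k, float(riffle_tv(52, k)))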