David Molnar (ephermata) wrote,

Cryptographic commitments

So the CRYPTO deadline is finally past, and in the meantime Bernard Chazelle wrote an essay about what computer science is and is not. Following the discussion by Lance, Ernie, Scott, and Suresh, among others, I find myself reflecting on some of the ways of seeing the world I've picked up from cryptography and computer security...


In particular, I've noticed that I make certain assumptions about the world when I set up a problem. These assumptions are what make it possible to map a messy real-world problem into a model where I can apply the tools I have. You might call these philosophical commitments. I'm at the point where these cryptographic "commitments" seem so natural to me that it's a shock when I run across technical work that makes different assumptions.
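(An aside for readers who haven't met the term: a cryptographic commitment scheme lets you fix a value now and reveal it later -- the receiver learns nothing early, and you can't change your answer afterward. Here is a minimal hash-based sketch in Python, purely as illustration; a real scheme needs an actual security analysis.)

    # Toy hash-based commitment: commit = H(randomness || value).
    # Hiding comes from the fresh randomness; binding comes from the
    # collision resistance of the hash. Illustrative only.
    import hashlib, os

    def commit(value: bytes):
        r = os.urandom(32)                       # fresh randomness hides the value
        c = hashlib.sha256(r + value).digest()   # fixes ("binds") the value
        return c, r

    def verify(c: bytes, r: bytes, value: bytes) -> bool:
        return hashlib.sha256(r + value).digest() == c

    c, r = commit(b"heads")
    assert verify(c, r, b"heads")        # opens to the committed value
    assert not verify(c, r, b"tails")    # can't be opened to anything else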



  • Everything in sight is an (efficient) algorithm. Efficient, in turn, means probabilistic polynomial time. If we have a problem where Alice and Bob communicate, but they want to prevent Eve from listening in, then Alice and Bob are going to "be" probabilistic polynomial-time Turing machines, or possibly families of polynomial-size circuits. Eve might be allowed arbitrarily large running time, but in any case Eve is an algorithm, and one not known to Alice and Bob (or me) in advance.


  • Adversaries act arbitrarily, adaptively, and can pick their moves after the system is set in stone. As a result, all inputs are adversarial; if anything can go wrong, it will. Ross Anderson and Roger Needham, writing about computer security, call this "programming Satan's computer."

    This assumption makes fields that assume some kind of distribution on inputs or on errors seem extremely strange. Electrical engineering does this in coding theory, and it's also common in statistical learning theory. These fields have enormous practical impact and relevance, which I acknowledge and appreciate. I just have a hard time with the shock of "well, why do we believe errors must be distributed like that?" (The sketch just after this list makes the contrast concrete.)

  • Adversaries will do anything to "break" the system, so long as it results in an efficient algorithm. The mere existence of an efficient algorithm that breaks the system is bad, regardless of that algorithm's other qualities.
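To make that last contrast concrete, here is a toy Python experiment (my own illustration, with arbitrary parameters): a 3-repetition code with majority decoding shrugs off a budget of randomly placed bit flips, but the same budget placed adversarially -- two flips per block -- corrupts one decoded bit for every two flips spent.

    # Toy contrast: random vs. adversarial errors against a 3-repetition code.
    # Message length and error budget are arbitrary illustrative choices.
    import random

    def encode(bits):
        return [b for b in bits for _ in range(3)]    # repeat each bit 3 times

    def decode(cw):
        # majority vote within each block of 3
        return [1 if sum(cw[i:i + 3]) >= 2 else 0 for i in range(0, len(cw), 3)]

    def random_flips(cw, budget):
        out = cw[:]
        for i in random.sample(range(len(out)), budget):
            out[i] ^= 1                               # flips land uniformly at random
        return out

    def adversarial_flips(cw, budget):
        out = cw[:]
        block = 0
        while budget >= 2 and 3 * block < len(out):   # spend 2 flips per block:
            out[3 * block] ^= 1                       # enough to swing the majority
            out[3 * block + 1] ^= 1
            budget -= 2
            block += 1
        return out

    msg = [random.randint(0, 1) for _ in range(100)]
    cw = encode(msg)
    budget = 30                                       # same power for both "channels"
    for name, channel in [("random", random_flips), ("adversarial", adversarial_flips)]:
        wrong = sum(a != b for a, b in zip(msg, decode(channel(cw, budget))))
        print(f"{name} errors: {wrong}/100 bits decoded incorrectly")

Against the random channel the code looks great; against an adversary with exactly the same power, measured in bit flips, it looks much worse. That gap is the instinct behind the assumption.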


Of course, these assumptions are not unique to cryptography. The first one is shared throughout most of theoretical computer science, for instance. Taken together, though, they form a basis for setting up a problem in
cryptography. We will sometimes relax bits and pieces of these assumptions, such as considering non-adaptive adversaries, but it's a starting point.

In fact, the first assumption extends beyond computer science to a principle for choosing between different theories. The field where this has received the most attention recently, as far as I know, is economics. To illustrate, suppose we are trying to choose between two methods for holding an auction, Method A and Method B. Method A requires the participants to do 2^(2^100) arithmetic operations before returning the winner. Method B requires only two arithmetic operations. Method B is more efficient, computationally, than Method A, so B is preferable. From my point of view, Method A might as well not even exist, regardless of how desirable it might otherwise be. (At best, A is an inspiration to create a new Method C that is like A, but efficient.)
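For concreteness, here is what a "Method B"-style auction might look like: winner determination in a sealed-bid second-price auction takes a single pass over the bids. (A Python sketch; the specific auction rule is an arbitrary choice for illustration, not part of the example above.)

    # "Method B"-style efficiency: one scan of the bids decides the auction.
    # Second-price (Vickrey) rule: highest bidder wins, pays the second bid.
    def run_auction(bids):
        """bids: dict of bidder -> bid amount; returns (winner, price)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]              # winner pays the second-highest bid
        return winner, price

    print(run_auction({"alice": 10, "bob": 7, "carol": 9}))   # ('alice', 9)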

Possibly more subtle is this example: suppose we have two theories as to why, say, real estate prices are still so high in the Bay Area after the end of the dot-com boom. Theory A implicitly or explicitly requires homebuyers to solve a problem that will take them 2^(2^100) steps. Theory B does not. From my point of view, theory A will be difficult to believe, regardless of how well it performs on other criteria, such as predicting real home prices.

This is different from the auction example, because we aren't choosing between two different lists of instructions for players at an auction. Instead we are trying to choose between two different explanations for the way things are (or at least have been observed so far). We can extend this to other areas, such as physics and even mathematics, but those are whole other stories in themselves.

The thing that seems funny here is that if I really believe the first assumption, the one I use every day, then a theory that is necessarily inefficient must be wrong. (Here by "necessarily" I mean something like: we can show a lower bound on the running time of the players if the theory is to be true, as distinguished from a theory where a "more efficient" version might be possible.) I don't have to believe that the most efficient theory is the "best," but I do have to rule out those theories with necessary inefficiencies.

So that's a first approximation of how one can apply this principle of computational efficiency. I am glossing over issues of asymptotic complexity, NP-hardness, how to encode inputs, uniformity, quantum vs. classical computation, and whether choice A requires solving a problem that is always grossly inefficient or just inefficient in the worst case but efficient in practice. Still, once we start getting into these issues, we have already accepted the basic principle that an "efficient" algorithm is what we want... we're just arguing the semantics of "efficient."

When applied to economics, this gives us computational game theory, which has seen a huge amount of work in the last 10-15 years. I'm not going to try summarizing everyone's contributions, since I'd probably leave out a lot of important work, but will instead refer you to a course Christos Papadimitriou gave: http://www.cs.berkeley.edu/~christos/games/cs294.html

The interplay between CS and economics goes the other way, too. The third assumption above says that a system is broken if any efficient adversary exists that breaks it. Economics has the idea of "self-interested" adversaries -- parties that are not actually evil, just selfish. If hurting someone else would also leave them worse off, a self-interested adversary won't do it. You can actually obtain more efficient protocols by adopting the economics point of view; in some cases you can remove checks against attacks that a self-interested adversary just wouldn't consider, as the sketch below illustrates.
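As a tiny illustration of how self-interest substitutes for enforcement (an invented example, not a protocol from the literature): in a second-price auction, a selfish bidder never gains by misreporting its value, so the designer gets honesty without checking for it.

    # Brute-force check that truthful bidding is (weakly) dominant in a
    # second-price auction: no deviation beats bidding your true value.
    def utility(my_bid, my_value, rival_bid):
        # A strictly higher bid wins and pays the rival's bid; otherwise payoff 0.
        return my_value - rival_bid if my_bid > rival_bid else 0

    my_value = 50
    for rival_bid in range(0, 101):
        truthful = utility(my_value, my_value, rival_bid)
        best_any = max(utility(b, my_value, rival_bid) for b in range(0, 101))
        assert truthful >= best_any       # self-interest already enforces honesty
    print("no rival bid admits a profitable deviation from truth-telling")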

Would you like to learn more? As it turns out, Avi Wigderson is giving a course at Princeton about the concept that everything can be modelled as an efficient algorithm. You can see the web page for "The Efficient Universe" here:
http://www.cs.princeton.edu/courses/archive/spring06/cos345/

It's aimed at a wider audience than just computer science majors -- anyone who feels they might be interested in answering some of the big questions listed on the page. I'll be interested to see how the course and its aftermath play out. Will we see new insights into how efficiency can apply to fields beyond computer science? Will they discover more funny consequences of computational efficiency? Or something else entirely?
