Summary: What is proactive secret sharing, and what do we need it for? We need to protect a secret against gradual break-ins; recovering or rebooting a compromised server removes the attacker but does not take back any share it has already seen. Renewing the secret itself is not a good (or always possible) solution for long-lived data; instead we renew the shares often. With renewal, an attacker must compromise k+1 servers during a single time period instead of over the entire life of the system.

The system rests on the following assumptions: a (k+1, n)-threshold scheme is used, secure encryption and signatures exist for communication between servers, all servers are synchronized, there is secure broadcast over a common medium, and share data is actually destroyed when it is erased. We also assume a mobile adversary: Byzantine faults can occur at any time, the adversary can corrupt no more than k of the n servers in any one period, and the adversary is connected to the communication medium but cannot interfere with communication. Adversaries are removable through reboot, and honest servers always detect and remove misbehaving servers.

The authors measure the security of the system in terms of entropy: the scheme is semantically secure if, for any function computable on the secret, the adversary's probability of learning its value changes only negligibly from round to round. The system is robust in that correct reconstruction of the secret is guaranteed at any time. They use Shamir's secret sharing (a minimal sketch follows this summary) together with Verifiable Secret Sharing (VSS) to validate each share a server receives. The paper notes that VSS gives an attacker more information to gain during verification. A few solutions are offered: use schemes where that extra information is already released (e.g., ElGamal signatures), place the secret in an "envelope" (encode it inside a longer bit string), or choose among VSS variants (Pedersen vs. Feldman) with different security trade-offs.

For periodic share renewal, each server has a public/private key pair used for secure communication; we assume the attacker cannot modify these keys. The basic share renewal protocol follows the intuition that the sharing polynomial can be updated by adding a fresh random polynomial of degree k that evaluates to 0 at 0, so the secret itself is unchanged. Each server picks k random coefficients to form such a polynomial, sends every other server the corresponding point on it, computes its new share from the points it receives, and erases everything else (see the renewal sketch below). If everyone follows the protocol, the scheme is correct, secure, and robust. If not, verifiability can be added using signed computations similar to VSS: if the messages check out, a server accepts its new share; otherwise it broadcasts an accusation against the misbehaving server. The authors describe three types of faults to resolve during accusations: an incorrectly formatted message, two or more correctly formatted but different messages from the same server, and verification equations that do not hold (the case that takes the most effort to resolve).

Servers must also make sure that other servers' shares have not been lost or corrupted, so a share recovery protocol is needed. During initialization, each server stores a set of check values for the other servers' current shares, and this set is updated during each share renewal. If at any time a share does not match its check value, share recovery is performed. Verifiability can be added to this protocol as well, and multiple shares can be recovered in parallel by treating each share as its own secret.
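As a reference point, here is a minimal Python sketch of the (k+1, n) Shamir sharing and reconstruction the scheme builds on. The prime, the random source, and the helper names (make_shares, reconstruct) are illustrative assumptions, not the paper's notation; a real deployment would use a cryptographically sized field and a secure RNG.

```python
# Minimal sketch of (k+1, n) Shamir secret sharing over a prime field.
# `random` is not a cryptographic RNG and the prime is small; both are
# illustrative assumptions only.
import random

PRIME = 2_147_483_647  # 2^31 - 1, a Mersenne prime (toy field size)

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` so that any k+1 of the n shares reconstruct it."""
    # Random degree-k polynomial f with f(0) = secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k)]

    def f(x):
        return sum(c * pow(x, j, prime) for j, c in enumerate(coeffs)) % prime

    return {i: f(i) for i in range(1, n + 1)}  # share i is the point (i, f(i))

def reconstruct(shares, prime=PRIME):
    """Lagrange-interpolate f at x = 0 from any k+1 shares given as {i: f(i)}."""
    secret = 0
    for xi, yi in shares.items():
        num, den = 1, 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % prime       # product of (0 - xj)
                den = den * (xi - xj) % prime   # product of (xi - xj)
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret
```

For example, with k = 2 and n = 5, any three of the returned points (say servers 1, 3, and 5) are enough to recover the secret via reconstruct(), while k or fewer points reveal nothing about it.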
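The VSS discussion is easier to follow with a concrete check. The sketch below shows a Feldman-style verification equation, one of the variants the summary mentions; the toy group (p = 23, q = 11, g = 4) and the helper names are assumptions for illustration only and are nowhere near secure parameter sizes.

```python
# Rough sketch of a Feldman-style VSS check.  The dealer publishes
# commitments C_j = g^{a_j} mod p to its polynomial coefficients; a server
# accepts its share only if the exponent equation below holds.
P, Q, G = 23, 11, 4  # toy group: G has order Q in Z_P^*

def commitments(coeffs):
    """Public commitments C_j = g^{a_j} mod p to the dealer's coefficients a_j."""
    return [pow(G, a, P) for a in coeffs]

def verify_share(i, share, comms):
    """Server i accepts share f(i) only if g^share == prod_j C_j^(i^j) mod p."""
    lhs = pow(G, share, P)
    rhs = 1
    for j, c in enumerate(comms):
        rhs = rhs * pow(c, pow(i, j, Q), P) % P
    return lhs == rhs
```

For instance, with dealer coefficients [7, 3, 5] over Z_11, server 2's share is (7 + 3*2 + 5*4) mod 11 = 0, and verify_share(2, 0, commitments([7, 3, 5])) holds, while any altered share fails the equation.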
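Finally, here is a rough sketch of the renewal step itself under the honest-case assumption that every server participates and no accusations are raised. Running all servers inside one function is purely a simulation convenience; in the protocol each point delta_i(j) travels over an encrypted, signed channel and is erased after use.

```python
# Rough sketch of one share-renewal round (honest case, all servers present).
import random

PRIME = 2_147_483_647  # same illustrative prime as in the sharing sketch

def make_update_poly(k, prime=PRIME):
    """Random degree-k polynomial delta with delta(0) = 0, so the secret is preserved."""
    coeffs = [0] + [random.randrange(prime) for _ in range(k)]

    def delta(x):
        return sum(c * pow(x, j, prime) for j, c in enumerate(coeffs)) % prime

    return delta

def renew_shares(old_shares, k, prime=PRIME):
    """One renewal round: old_shares maps server id i to its current share f(i)."""
    servers = list(old_shares)
    deltas = {i: make_update_poly(k, prime) for i in servers}  # each server's update polynomial
    new_shares = {}
    for j in servers:
        # Server j adds the points delta_i(j) it received from every server i
        # to its old share, then erases the old share and the received points.
        new_shares[j] = (old_shares[j] + sum(deltas[i](j) for i in servers)) % prime
    return new_shares
```

Because every delta_i vanishes at 0, the new shares lie on a polynomial with the same constant term, so any k+1 of them still reconstruct the original secret while the old shares become useless to the attacker.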
In conclusion, the total protocol runs in this order each period: the private key renewal protocol (if we drop the assumption that attackers cannot tamper with the communication keys), the share recovery protocol, and the share renewal protocol. The authors mention that this scheme can be used for proactive digital signatures and distributed certificate authorities.

Pros: Interesting paper; easy to read; updating polynomials is conceptually simple, and the proofs were well explained; during share renewal everybody needs to participate (this can be a pro or a con); servers can vote out servers they accuse of behaving maliciously; the shares of faulty servers can be recovered by the rest of the group. The authors use a lot of references in their proofs, which is good because they are not reinventing the wheel.

Cons: The assumptions are hard to satisfy in a real-world implementation (a global clock, synchrony); scalability could be an issue; and, again, some trusted intermediary is still required.

Discussion:
1. What are some of the scalability issues with this scheme? Everyone needs to participate during a share update, which requires a global broadcast channel. The number of servers and the protocol overhead determine how big an issue this is; the length of a round and how widely the servers are distributed also matter.
2. Is it possible to update some portion of the shares without having everyone participate? This would help scalability, but we don't have any ideas.
3. How big an issue is synchrony? The scheme assumes it, but is there a way to update some portion of the shares without everyone doing it at the same time? That might also relax the need for a global clock.
4. What are realistic uses of this? Is it being used in distributed systems somewhere, and if not, why not? We don't know of specific deployments, but it was mentioned that this might fit the financial sector or military operations.
5. How do you pick how long a round is? There is no general answer; it depends on the sizes of k and n, how long it takes to break into a server (30 seconds? 30 minutes?), the type of application, how long it takes to defeat a stack randomizer, and how often the group membership changes.

Voting: 5 - strong accept, 6 - accept