Documentation - Centralized Telescope: Main #101
LGTM.
Tiny changes here and there and a big one about H1 (and the order of limit bound vs expanding the tuple).
LGTM
Just added a couple of suggestions in core component.
Is there a particular reason why you don't include $v$ in the sequence? E.g. "A bounded DFS search is used to construct the sequence …"
- The prover retries the proof generation process up to $r$ times.
- Each retry uses a different retry counter $v$, where $v \in \[1, r\]$, to diversify the search process.
- Hash-based binning:
  - Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot) where v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.
Suggested change:
Before: `- Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot) where v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.`
After: `- Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot)\ where\ v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.`
- Retries with index $v$:
  - The prover retries the proof generation process up to $r$ times.
  - Each retry uses a different retry counter $v$, where $v \in \[1, r\]$, to diversify the search process.
- Hash-based binning:
Suggested change:
Before: `- Hash-based binning:`
After: `- Seeded binning:`
## Overview
- When $n_p$ is large, the rapid growth in potential proof tuples increases the chances of finding a valid proof, making construction easier.
- For small $n_p$, limited elements reduce the probability of finding a valid proof in a single attempt.
The real reason is that when n_p is small, the arrangement of balls into bins is far from uniform. On average each bin still gets exactly one ball, but it's likely that many bins will be empty and some will have many elements. Turns out it's better to have many bins with one ball than one bin with many balls.
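The claim in this comment is easy to check with a quick simulation (an illustrative sketch, not part of the PR's docs): throwing $n$ balls uniformly into $n$ bins leaves roughly a $1/e \approx 37\%$ fraction of bins empty, even though the average load per bin is exactly one.

```python
import random

def empty_fraction(n: int, trials: int = 200, seed: int = 0) -> float:
    """Throw n balls into n bins uniformly at random; return the average
    fraction of bins that end up empty, over several trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        loads = [0] * n
        for _ in range(n):
            loads[rng.randrange(n)] += 1
        total += loads.count(0) / n
    return total / trials

# Average load is 1 ball per bin, yet about 1/e of the bins stay empty.
print(round(empty_fraction(1000), 2))  # typically prints 0.37 (close to 1/e)
```

The expected empty fraction is $(1 - 1/n)^n \to 1/e$, which matches the simulation and supports the comment: many empty bins coexist with a few overloaded ones.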
When n_p is small, we can still use the scheme with prehashing, but the completeness error is moderate instead of tiny. Introducing retries decreases the completeness error.
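To make the effect of retries concrete (a back-of-the-envelope sketch with made-up numbers, not the scheme's actual parameters, and treating retries as independent):

```python
def completeness_error(p: float, r: int) -> float:
    """If a single proving attempt fails with probability p, then r
    independent retries (each with a fresh retry counter v, hence a fresh
    binning) all fail with probability p ** r."""
    return p ** r

# A moderate single-shot failure probability shrinks geometrically:
# e.g. p = 0.1 with r = 3 retries gives an overall error of 0.1 ** 3.
assert completeness_error(0.1, 3) == 0.1 ** 3
```

Independence across retries is a simplification; the prose above only claims that retries with distinct $v$ reduce the error, not that the attempts are perfectly independent.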
- The prover retries the proof generation process up to $r$ times.
- Each retry uses a different retry counter $v$, where $v \in \[1, r\]$, to diversify the search process.
- Hash-based binning:
  - Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot) where v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.
Suggested change:
Before: `- Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot) where v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.`
After: `- Elements in $S_p$ are prehashed into bins using $H_0(v, \cdot)$ where $v \in \[1,r\]$, grouping elements based on their hash values. We prehashed using the retry counter to get different bins for each repetition, reducing the risk of badly distributed bins.`
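The prehashing step discussed here can be sketched in a few lines (a toy illustration: `h0`, the SHA-256 instantiation, the bin count, and the byte encoding are all placeholders, not the documented construction of $H_0$):

```python
import hashlib

def h0(v: int, s: bytes, n_bins: int) -> int:
    """Toy stand-in for H_0(v, .): maps element s to a bin index,
    keyed on the retry counter v."""
    digest = hashlib.sha256(v.to_bytes(4, "big") + s).digest()
    return int.from_bytes(digest, "big") % n_bins

def prehash(elements, v: int, n_bins: int):
    """Group the prover's set S_p into bins for retry counter v."""
    bins = [[] for _ in range(n_bins)]
    for s in elements:
        bins[h0(v, s, n_bins)].append(s)
    return bins

# Different retry counters yield different binnings of the same set,
# which is why a badly distributed binning can be escaped by retrying.
elements = [i.to_bytes(4, "big") for i in range(8)]
bins_v1 = prehash(elements, v=1, n_bins=8)
bins_v2 = prehash(elements, v=2, n_bins=8)
```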
4. **Bounded DFS**:
   - A bounded DFS search is used to construct the sequence $(t, s_1, ..., s_u)$, with a shared step limit $B$ applied across all starting points $t$.
   - At each step of DFS:
     - The algorithm search a new element for the current sequence $(t, s_1, ..., s_k)$ in bin numbered $H_1(v, t, s_1, ..., s_k)$.
Suggested change:
Before: `- The algorithm search a new element for the current sequence $(t, s_1, ..., s_k)$ in bin numbered $H_1(v, t, s_1, ..., s_k)$.`
After: `- The algorithm searches a new element for the current sequence $(t, s_1, ..., s_k)$ in bin numbered $H_1(v, t, s_1, ..., s_k)$.`
- With the new $v$, the hash function $H_0(v, s)$ is applied to organize elements of $S_p$ into bins, and the process resumes from step 3.
7. **Completion**:
   - If a valid proof is found in any retry, the prover outputs the proof immediately.
   - If all $r$ retries are exhausted without finding a valid proof, the process terminates, and the prover outputs $\bot$.
Will there be pseudocode like in previous docs?
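In the spirit of that request, here is a hedged pseudocode-style sketch of the prover loop described above (the SHA-256 helpers standing in for $H_0$/$H_1$, the byte encodings, the number of starting points `d`, and the `valid` final check are all placeholders, not the documented instantiation; the returned tuple includes `v` for clarity):

```python
import hashlib

def _hash(*parts: bytes) -> int:
    """Toy hash: stands in for both H_0 and H_1 depending on its inputs."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def prove(s_p, n_bins, r, B, u, d, valid):
    """Sketch of the retrying prover: prehash, then bounded DFS.

    s_p   : list[bytes] -- the prover's elements (S_p)
    B     : int         -- DFS step budget, shared across all starting points t
    u     : int         -- target sequence length
    valid : callable    -- placeholder final check on a completed tuple
    Returns (v, t, [s_1, ..., s_u]) or None (i.e. bottom).
    """
    enc = lambda x: x.to_bytes(8, "big")
    for v in range(1, r + 1):                     # retries with counter v
        bins = [[] for _ in range(n_bins)]        # prehash S_p with H_0(v, .)
        for s in s_p:
            bins[_hash(enc(v), s) % n_bins].append(s)
        budget = [B]                              # shared across all t
        for t in range(1, d + 1):                 # every starting point t
            found = _dfs(v, t, [], bins, budget, u, valid, enc)
            if found is not None:
                return (v, t, found)
    return None                                   # all retries exhausted

def _dfs(v, t, seq, bins, budget, u, valid, enc):
    if len(seq) == u:
        return seq if valid(v, t, seq) else None
    if budget[0] == 0:                            # shared step limit B hit
        return None
    budget[0] -= 1
    # The next element is searched in bin numbered H_1(v, t, s_1, ..., s_k).
    idx = _hash(enc(v), enc(t), *seq) % len(bins)
    for s in bins[idx]:
        found = _dfs(v, t, seq + [s], bins, budget, u, valid, enc)
        if found is not None:
            return found
    return None
```

With a permissive `valid` (e.g. `lambda v, t, seq: True`) and a small set, `prove` returns a length-`u` sequence almost immediately; the real check and parameter choices belong to the individual PRs this PR defers to.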
Content

This PR includes general information about Section 3.2.2, Generalization to small $n_p$, of centralized_telescope. Further details, such as parameter setup and algorithms, will be handled in individual PRs.
Pre-submit checklist
Issue(s)
Closes #100