
Adding telescope benchmarks #55

Status: Open. rrtoledo wants to merge 22 commits into main from raph@bench-centralized.

Conversation

@rrtoledo (Collaborator) commented Nov 6, 2024:

Content

Relates to #54

Depends on #105

@rrtoledo added the enhancement (New feature or request) label on Nov 6, 2024
@rrtoledo self-assigned this on Nov 6, 2024
@rrtoledo marked this pull request as draft on November 6, 2024 13:13
@rrtoledo linked an issue on Nov 6, 2024 that may be closed by this pull request
@rrtoledo force-pushed the raph@bench-centralized branch 2 times, most recently from de0fb2a to 88d69ab on November 11, 2024 12:47
@rrtoledo marked this pull request as ready for review on November 11, 2024 13:22
@rrtoledo (Collaborator, Author) commented:

@tolikzinovyev @curiecrypt @djetchev

Should we remove the benchmarks checking the variance of the number of repetitions (the dfs_bound is so high that we never repeat, which makes criterion fail when generating the plot), or should we lower dfs_bound to observe some variance in retry_counter?
-> In other words, do we want the retry_counter to increase?

Some local tests showed me that we would need to reduce dfs_bound by a factor of 100. Choosing this number would feel a bit arbitrary (is this the right number for all lambdas...?), so I am not sure it is the best option.

@tolikzinovyev (Member) commented:

Removing the benchmark sounds reasonable!

@rrtoledo (Collaborator, Author) commented:

I find @tolikzinovyev's idea the best. What do you think, @curiecrypt @djetchev?

@tolikzinovyev (Member) commented:

I will not have time to review this PR before Monday, unfortunately, since it is pretty large.

@tolikzinovyev (Member) left a comment:

Nice use of advanced criterion features. I left high-level comments.

Inline comments on benches/centralized_telescope.rs, benches/criterion_helpers.rs and src/centralized_telescope/algorithm.rs (outdated, resolved).
@curiecrypt (Collaborator) left a comment:

It seems pretty good and mature. I left a few comments and could go for another round after the conversations are resolved.

Inline comments on Cargo.toml, benches/centralized_telescope.rs, benches/criterion_helpers.rs and src/centralized_telescope/algorithm.rs (outdated, resolved).
@rrtoledo force-pushed the raph@bench-centralized branch from 14f4fb0 to 3fdfe36 on December 5, 2024 20:08
@rrtoledo marked this pull request as draft on December 5, 2024 20:08
@rrtoledo force-pushed the raph@bench-centralized branch 2 times, most recently from 9f13518 to e27b0d4 on December 10, 2024 19:16
@rrtoledo marked this pull request as ready for review on December 12, 2024 12:01
@rrtoledo force-pushed the raph@bench-centralized branch 4 times, most recently from b540e25 to a54996e on December 12, 2024 16:15
@rrtoledo changed the title from "Adding benchmarks" to "Centralized Telescope - Adding benchmarks" on Dec 13, 2024
@rrtoledo force-pushed the raph@bench-centralized branch from 63a4cef to 07fc567 on December 13, 2024 11:09
@rrtoledo force-pushed the raph@bench-centralized branch from 5d2b638 to 5ab0b6f on December 13, 2024 15:25
@curiecrypt (Collaborator) left a comment:

I just reviewed the centralized_telescope benches.
Only very minor suggestions; other than that, it looks very good.

Inline comments on benches/common/criterion_helpers.rs, README.md, benches/centralized_telescope/proof_size.rs, benches/centralized_telescope/number_steps.rs, benches/centralized_telescope/proving_time.rs and benches/centralized_telescope/verifying_time.rs (resolved).
@rrtoledo force-pushed the raph@bench-centralized branch from 5ab0b6f to cba63c9 on December 13, 2024 15:29
@rrtoledo changed the title from "Adding telescope & lottery benchmarks" to "Adding telescope benchmarks" on Dec 13, 2024
@rrtoledo mentioned this pull request on Dec 13, 2024
rrtoledo and others added 22 commits on December 16, 2024 11:03
@rrtoledo force-pushed the raph@bench-centralized branch from 350cc03 to f422cdb on December 16, 2024 11:06
@curiecrypt (Collaborator) left a comment:

LGTM 👍

@tolikzinovyev (Member) left a comment:

The code structure, benchmark durations and variable names have definitely improved, thank you!

}
}

/// Prove routing
@tolikzinovyev (Member) commented:

There is a typo. But also there is no need to document internal functions if the documentation doesn't say more than the function name.

@@ -52,6 +51,35 @@ pub(super) fn verify(setup: &Setup, proof: &Proof) -> bool {
proof_hash(setup, &round)
}

/// Internal module for tests and benchmarks
@tolikzinovyev (Member) commented:

Suggested change
/// Internal module for tests and benchmarks
/// Internal module for tests and benchmarks. Do not use.

@@ -8,7 +8,8 @@ use crate::utils::types::Element;
/// The main centralized Telescope struct with prove and verify functions.
#[derive(Debug, Clone, Copy)]
pub struct Wrapper {
setup: Setup,
/// Centralized telescope internal parameters
pub setup: Setup,
@tolikzinovyev (Member) commented:

I think this shouldn't be public. Instead of calling CentralizedTelescope::create(), we can run parameter derivation and use CentralizedTelescope::create_unsafe().
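
For illustration, a minimal self-contained sketch of that shape. Everything here is a stand-in (the real crate's types and signatures differ); only the create()/create_unsafe() names come from this comment:

    // Hypothetical stand-ins, for illustration only.
    #[derive(Debug, Clone, Copy)]
    pub struct Params { pub soundness_param: f64, pub completeness_param: f64 }

    #[derive(Debug, Clone, Copy)]
    pub struct Setup; // internal parameters derived from Params

    impl Setup {
        // Stand-in for the parameter derivation the benchmarks would run themselves.
        pub fn from_params(_p: &Params) -> Self { Setup }
    }

    pub struct CentralizedTelescope {
        #[allow(dead_code)]
        setup: Setup, // stays private
    }

    impl CentralizedTelescope {
        // create() derives Setup internally from the user-facing Params.
        pub fn create(p: &Params) -> Self { Self { setup: Setup::from_params(p) } }
        // create_unsafe() takes already-derived parameters, so a benchmark that
        // needs the derived Setup can keep its own copy without the field being public.
        pub fn create_unsafe(setup: Setup) -> Self { Self { setup } }
    }

    fn main() {
        let params = Params { soundness_param: 64.0, completeness_param: 64.0 };
        // Bench-side usage: derive once, keep a copy of `setup`, then build from it.
        let setup = Setup::from_params(&params);
        let _telescope = CentralizedTelescope::create_unsafe(setup);
        let _same_result = CentralizedTelescope::create(&params);
    }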

);
}

criterion_main!(criterion_group::centralized_verifying_time,);
@tolikzinovyev (Member) commented:

Suggested change
criterion_main!(criterion_group::centralized_verifying_time,);
criterion_main!(criterion_group::centralized_verifying_time);

@@ -0,0 +1,42 @@
//! Benchmarking the proof size of the Centralized Telescope scheme
@tolikzinovyev (Member) commented:

I think this originally wasn't part of the PR. Could we postpone it to a later PR?

BenchmarkId::new(
bench_name,
format!(
"params={{λsec:{}, λrel:{}, Sp:{} ({}%), n_p:{}, n_f:{}}}",
@tolikzinovyev (Member) commented:

Example of current output:
params={λsec:80, λrel:80, Sp:1000 (80%), n_p:80, n_f:60}.

As before, I think this is confusing because S_p denotes the prover's input set, whose size here is actually 800. I suggest the following output instead:
params={λsec:80, λrel:80, |Sp|:800 (80%), n_p:80, n_f:60}
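
A small, hedged sketch of how that label could be produced; the variable names are assumptions, not the PR's actual code:

    fn main() {
        // Hypothetical parameter values matching the example above.
        let (lambda_sec, lambda_rel) = (80.0_f64, 80.0_f64);
        let set_cardinality: u64 = 1000; // total number of available elements
        let percentage: u64 = 80;        // share of it actually given to the prover
        let (n_p, n_f) = (80_u64, 60_u64);

        // |Sp| is the size of the prover's input: the cardinality scaled by the percentage.
        let prover_set_size = set_cardinality * percentage / 100;

        let label = format!(
            "params={{λsec:{lambda_sec}, λrel:{lambda_rel}, |Sp|:{prover_set_size} ({percentage}%), n_p:{n_p}, n_f:{n_f}}}"
        );
        assert_eq!(label, "params={λsec:80, λrel:80, |Sp|:800 (80%), n_p:80, n_f:60}");
    }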

BenchmarkId::new(
bench_name,
format!(
"params={{λsec:{}, λrel:{}, Sp:{} ({}%), n_p:{}, n_f:{}}}",
@tolikzinovyev (Member) commented:

nit: also suggest a space after colons

}

/// Prove routing
fn prove_routine(setup: &Setup, prover_set: &[Element]) -> (u64, u64, Option<Proof>) {
@tolikzinovyev (Member) commented:

We don't use the number of retries anywhere, right? If not, let's not return it.
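
A sketch of the suggested signature change, with stand-in types (illustrative only; the real Setup, Proof and Element live in the crate):

    // Stand-ins for the crate's types.
    struct Setup;
    struct Proof;
    type Element = [u8; 32];

    // Suggested shape: return only the step count and the proof, dropping the
    // retry counter that no caller uses.
    fn prove_routine(_setup: &Setup, _prover_set: &[Element]) -> (u64, Option<Proof>) {
        // ... run the proving algorithm, counting DFS steps ...
        (0, None)
    }

    fn main() {
        let (_steps, _proof) = prove_routine(&Setup, &[]);
    }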

let mut total_steps = 0u64;
for _ in 0..n {
// Setup
let (mut dataset, telescope) = setup(&mut rng, param);
@tolikzinovyev (Member) commented:

I still really think the setup should run before this loop. I think this loop (where n is typically 1 to 4 in our benchmarks) is intended to filter out external noise like unpredictable scheduling of processes by the OS. It is not intended to reduce the inherent variance of the measured value (here, the number of dfs steps). By running the setup inside the loop, the measurement distribution becomes more concentrated around the average than it actually is.

I ran some experiments. This is for Centralized - Steps/Prove/params={λsec:64, λrel:64, Sp:1000 (90%), n_p:90, n_f:60}.

Original code:
[screenshot: Criterion PDF plot, original code]

Setup is moved to before the for loop:
[screenshot: Criterion PDF plot, setup moved before the loop]

Difference:
[screenshot: Criterion PDF plots overlaid, showing the difference]

You can see that moving the setup outside the loop makes the PDF graph wider, which is what it should be.

Interestingly, the mean also changes by a lot, +34%. This is due to a small number of samples, currently 100. I ran the benchmark one more time:
[screenshot: Criterion PDF plot, second run]

Changing the number of samples to 1000 fixes the crazy variance. Perhaps we should do this too.
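
For concreteness, a minimal self-contained sketch of one criterion sample with the setup hoisted out of the averaging loop. All names here are stand-ins for the bench code, not the actual implementation:

    // Stand-ins for the benchmark's real types and routines.
    struct Param;
    struct Telescope;
    type Dataset = Vec<u8>;

    // Stand-in for dataset generation plus telescope construction.
    fn setup(seed: u64, _param: &Param) -> (Dataset, Telescope) {
        (vec![seed as u8; 16], Telescope)
    }

    // Stand-in returning the number of DFS steps of one proving call.
    fn prove_steps(_telescope: &Telescope, _dataset: &Dataset) -> u64 {
        42
    }

    // One criterion "sample": run the setup once, BEFORE the averaging loop, so the
    // loop only filters external noise (OS scheduling, ...) and does not narrow the
    // inherent distribution of the measured DFS step count across datasets.
    fn measure(n: u64, param: &Param) -> u64 {
        let (dataset, telescope) = setup(0, param);
        let mut total_steps = 0u64;
        for _ in 0..n {
            total_steps += prove_steps(&telescope, &dataset);
        }
        total_steps
    }

    fn main() {
        assert_eq!(measure(4, &Param), 4 * 42);
    }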

use super::super::criterion_helpers::centralized::BenchParam;

/// This case corresponds to the minimum security requirements with optimistic set_size
const LOW_PARAM: BenchParam = BenchParam {
@tolikzinovyev (Member) commented:

Suppose I want to run a benchmark with particular n_p, n_f and |S_p| (size of prover's input). Currently, I would need to think hard to figure out how to correctly set the percentages and set_cardinality here.

A better option could be to have

    pub struct BenchParam {
        pub lambda_sec: f64,
        pub lambda_rel: f64,
        pub set_cardinality: u64,
        pub set_size: u64,
        pub lower_bound: u64,
    }

and have utility functions in this file that generate these structs based on percentages. There would be no low / mid / high cases in criterion_helpers.rs benchmarks(); instead, they would be generated here.

What do you think?
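
A hedged sketch of what such a utility could look like. The BenchParam fields are copied from the suggestion above, while the helper's name, the example numbers and the exact way percentages translate into set_size and lower_bound are guesses for illustration:

    pub struct BenchParam {
        pub lambda_sec: f64,
        pub lambda_rel: f64,
        pub set_cardinality: u64,
        pub set_size: u64,
        pub lower_bound: u64,
    }

    /// Illustrative helper: build a BenchParam from a total cardinality and
    /// percentages, so the absolute sizes no longer have to be derived by hand.
    pub fn param_from_percentages(
        lambda_sec: f64,
        lambda_rel: f64,
        set_cardinality: u64,
        set_size_percentage: u64,
        lower_bound_percentage: u64,
    ) -> BenchParam {
        BenchParam {
            lambda_sec,
            lambda_rel,
            set_cardinality,
            set_size: set_cardinality * set_size_percentage / 100,
            lower_bound: set_cardinality * lower_bound_percentage / 100,
        }
    }

    fn main() {
        // The "low" case would then be generated next to the benchmark that uses it,
        // instead of being hard-coded in criterion_helpers.rs.
        let low = param_from_percentages(64.0, 64.0, 1000, 90, 60);
        assert_eq!((low.set_size, low.lower_bound), (900, 600));
    }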

Labels: enhancement (New feature or request)
Projects: None yet
Development: Successfully merging this pull request may close the issue "Adding centralized benchmarks"
3 participants