
Conversation

@FrankLeeeee (Collaborator)

Motivation

Modifications

Related Issues

Accuracy Test

Benchmark & Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @FrankLeeeee, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the data preparation script by integrating support for a new series of regenerated Llama datasets. It introduces new dataset options and refines the data processing logic to accommodate datasets that are already in a suitable format, thereby simplifying their inclusion in the pipeline.

Highlights

  • New Llama Series Datasets: Added support for several new 'perfectblend-llama' series datasets, including llama3.1-8b-instruct, llama3.3-70b-instruct, llama4-scout-instruct, and llama4-maverick-instruct, expanding the range of available data for model training.
  • Flexible Data Processing: Modified the data processing function process_and_save_ds to conditionally apply a processing function (proc_fn). This allows datasets that do not require additional row-level processing to be saved directly, streamlining the data preparation workflow for pre-processed datasets (a toy sketch follows this list).
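
As a toy illustration of that second point: the sketch below is not the PR's process_and_save_ds (which also tracks skipped counts and wraps the loop in tqdm); the function name save_rows and the sample data are made up for illustration.

import json

def save_rows(rows, path, proc_fn=None):
    # Sketch only: mirrors the idea that proc_fn may transform or skip a row
    # (returning (row, skipped_count) as in the snippets further down), while
    # proc_fn=None means the rows are already in the target schema and are
    # written to JSONL unchanged.
    with open(path, "w") as f:
        for item in rows:
            if proc_fn is not None:
                row, _skipped = proc_fn(item)
                if row is None:
                    continue
            else:
                row = item
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Pre-formatted conversations pass straight through:
save_rows([{"messages": [{"role": "user", "content": "hi"}]}], "sample.jsonl")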


@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds support for processing several new Llama series datasets. The changes involve updating the argument parser, adding logic to handle the new datasets, and modifying the data processing function to accommodate datasets that don't require a processing function. My review focuses on improving code maintainability by reducing duplication. I've suggested refactoring the data processing loop to avoid repeated code for train and test sets, and consolidating the new dataset loading logic using a dictionary to make it more scalable and readable.

Comment on lines 194 to 216
 with open(train_output_jsonl_path, "w") as f:
     for item in tqdm(train_ds, desc=f"Processing {dataset_name} dataset"):
-        row, skipped_count = proc_fn(item)
-        if row is None:
-            continue
-        total_skipped_count += skipped_count
+        if proc_fn is not None:
+            row, skipped_count = proc_fn(item)
+            if row is None:
+                continue
+            total_skipped_count += skipped_count
+        else:
+            row = item
         f.write(json.dumps(row, ensure_ascii=False) + "\n")

 if test_ds is not None:
     test_output_jsonl_path = output_path.joinpath(f"{dataset_name}_test.jsonl")
     with open(test_output_jsonl_path, "w") as f:
         for item in tqdm(test_ds, desc=f"Processing {dataset_name} test dataset"):
-            row, skipped_count = proc_fn(item)
-            if row is None:
-                continue
-            total_skipped_count += skipped_count
+            if proc_fn is not None:
+                row, skipped_count = proc_fn(item)
+                if row is None:
+                    continue
+                total_skipped_count += skipped_count
+            else:
+                row = item
             f.write(json.dumps(row, ensure_ascii=False) + "\n")

Severity: medium

There is significant code duplication between the processing loops for the training dataset (lines 195-203) and the test dataset (lines 208-216). This makes the code harder to maintain, as any changes would need to be applied in two places.

To improve this, you could extract the common logic into a helper function. This function would take a dataset, a file handle, and the processing function (proc_fn) as arguments, and would contain the loop and writing logic. This would make process_and_save_ds cleaner and more maintainable.
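
A minimal sketch of the suggested helper, assuming the variables visible in the snippet above; the helper's name and its returning of the running skipped count are illustrative choices, not code from this PR.

import json

from tqdm import tqdm


def write_split_to_jsonl(ds, output_jsonl_path, proc_fn, desc):
    """Write one dataset split to a JSONL file, optionally transforming each row.

    Mirrors the loop body shown above: proc_fn may transform or skip a row, while
    proc_fn=None writes the row unchanged. Returns the number of skipped items so
    the caller can accumulate total_skipped_count.
    """
    skipped = 0
    with open(output_jsonl_path, "w") as f:
        for item in tqdm(ds, desc=desc):
            if proc_fn is not None:
                row, skipped_count = proc_fn(item)
                if row is None:
                    continue
                skipped += skipped_count
            else:
                row = item
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return skipped

With such a helper, the train and test branches of process_and_save_ds would each reduce to a single call, e.g. total_skipped_count += write_split_to_jsonl(train_ds, train_output_jsonl_path, proc_fn, f"Processing {dataset_name} dataset").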

Comment on lines +265 to +288
elif args.dataset == "perfectblend-llama3.1-8b-instruct":
    ds = load_dataset("frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct")[
        "train"
    ]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama3.3-70b-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama4-scout-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama4-maverick-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None

Severity: medium

The elif chain for handling the new perfectblend-llama* datasets contains a lot of repetitive code. Each block loads a dataset from a specific Hugging Face repository, maps add_index, and sets proc_fn to None.

To make this more maintainable and easier to extend, you could use a dictionary to map the dataset names to their repository IDs. This would consolidate the logic into a single block.

Suggested change
-elif args.dataset == "perfectblend-llama3.1-8b-instruct":
-    ds = load_dataset("frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct")[
-        "train"
-    ]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama3.3-70b-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama4-scout-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama4-maverick-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
+elif args.dataset in [
+    "perfectblend-llama3.1-8b-instruct",
+    "perfectblend-llama3.3-70b-instruct",
+    "perfectblend-llama4-scout-instruct",
+    "perfectblend-llama4-maverick-instruct",
+]:
+    dataset_map = {
+        "perfectblend-llama3.1-8b-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct",
+        "perfectblend-llama3.3-70b-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct",
+        "perfectblend-llama4-scout-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct",
+        "perfectblend-llama4-maverick-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct",
+    }
+    repo_id = dataset_map[args.dataset]
+    ds = load_dataset(repo_id)["train"]
+    ds = ds.map(add_index, with_indices=True)
+    proc_fn = None

@FrankLeeeee merged commit b7febe8 into sgl-project:main on Dec 28, 2025.
2 checks passed