added regenerated data processing for llama series #396
Conversation
Summary of Changes

Hello @FrankLeeeee, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the data preparation script by adding support for a new series of regenerated Llama datasets. It introduces new dataset options and refines the data processing logic to accommodate datasets that are already in a suitable format, simplifying their inclusion in the pipeline.
Code Review
This pull request adds support for processing several new Llama series datasets. The changes involve updating the argument parser, adding logic to handle the new datasets, and modifying the data processing function to accommodate datasets that don't require a processing function. My review focuses on improving code maintainability by reducing duplication. I've suggested refactoring the data processing loop to avoid repeated code for train and test sets, and consolidating the new dataset loading logic using a dictionary to make it more scalable and readable.
```diff
     with open(train_output_jsonl_path, "w") as f:
         for item in tqdm(train_ds, desc=f"Processing {dataset_name} dataset"):
-            row, skipped_count = proc_fn(item)
-            if row is None:
-                continue
-            total_skipped_count += skipped_count
+            if proc_fn is not None:
+                row, skipped_count = proc_fn(item)
+                if row is None:
+                    continue
+                total_skipped_count += skipped_count
+            else:
+                row = item
             f.write(json.dumps(row, ensure_ascii=False) + "\n")

     if test_ds is not None:
         test_output_jsonl_path = output_path.joinpath(f"{dataset_name}_test.jsonl")
         with open(test_output_jsonl_path, "w") as f:
             for item in tqdm(test_ds, desc=f"Processing {dataset_name} test dataset"):
-                row, skipped_count = proc_fn(item)
-                if row is None:
-                    continue
-                total_skipped_count += skipped_count
+                if proc_fn is not None:
+                    row, skipped_count = proc_fn(item)
+                    if row is None:
+                        continue
+                    total_skipped_count += skipped_count
+                else:
+                    row = item
                 f.write(json.dumps(row, ensure_ascii=False) + "\n")
```
There is significant code duplication between the processing loops for the training dataset (lines 195-203) and the test dataset (lines 208-216). This makes the code harder to maintain, as any changes would need to be applied in two places.
To improve this, you could extract the common logic into a helper function. This function would take a dataset, a file handle, and the processing function (`proc_fn`) as arguments, and would contain the loop and writing logic. This would make `process_and_save_ds` cleaner and more maintainable.
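For illustration, a minimal sketch of such a helper (the name `write_processed_items` and its exact signature are hypothetical; the loop body mirrors the logic in the diff above):

```python
import json

from tqdm import tqdm


def write_processed_items(ds, f, proc_fn, desc):
    """Write every item of ds to the open file handle f as JSON Lines.

    When proc_fn is given, it maps an item to (row, skipped_count); items whose
    row comes back as None are dropped. Returns the number of skipped items.
    """
    skipped = 0
    for item in tqdm(ds, desc=desc):
        if proc_fn is not None:
            row, skipped_count = proc_fn(item)
            if row is None:
                continue
            skipped += skipped_count
        else:
            row = item
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return skipped
```

Each call site in `process_and_save_ds` would then keep its `with open(...)` block and reduce to something like `total_skipped_count += write_processed_items(train_ds, f, proc_fn, f"Processing {dataset_name} dataset")`, with the equivalent call for the test split.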
```python
elif args.dataset == "perfectblend-llama3.1-8b-instruct":
    ds = load_dataset("frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct")[
        "train"
    ]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama3.3-70b-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama4-scout-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
elif args.dataset == "perfectblend-llama4-maverick-instruct":
    ds = load_dataset(
        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct"
    )["train"]
    ds = ds.map(add_index, with_indices=True)
    proc_fn = None
```
The `elif` chain for handling the new `perfectblend-llama*` datasets contains a lot of repetitive code. Each block loads a dataset from a specific Hugging Face repository, maps `add_index`, and sets `proc_fn` to `None`.
To make this more maintainable and easier to extend, you could use a dictionary to map the dataset names to their repository IDs. This would consolidate the logic into a single block.
Suggested change:

```diff
-elif args.dataset == "perfectblend-llama3.1-8b-instruct":
-    ds = load_dataset("frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct")[
-        "train"
-    ]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama3.3-70b-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama4-scout-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
-elif args.dataset == "perfectblend-llama4-maverick-instruct":
-    ds = load_dataset(
-        "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct"
-    )["train"]
-    ds = ds.map(add_index, with_indices=True)
-    proc_fn = None
+elif args.dataset in [
+    "perfectblend-llama3.1-8b-instruct",
+    "perfectblend-llama3.3-70b-instruct",
+    "perfectblend-llama4-scout-instruct",
+    "perfectblend-llama4-maverick-instruct",
+]:
+    dataset_map = {
+        "perfectblend-llama3.1-8b-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-3.1-8B-Instruct",
+        "perfectblend-llama3.3-70b-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-3.3-70B-Instruct",
+        "perfectblend-llama4-scout-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-4-Scout-17B-16E-Instruct",
+        "perfectblend-llama4-maverick-instruct": "frankleeeee/PerfectBlend-Regenerated-Llama-4-Maverick-17B-128E-Instruct",
+    }
+    repo_id = dataset_map[args.dataset]
+    ds = load_dataset(repo_id)["train"]
+    ds = ds.map(add_index, with_indices=True)
+    proc_fn = None
```
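As an aside, `add_index` itself is not shown in this diff. Since `Dataset.map` is called with `with_indices=True`, which makes the Hugging Face `datasets` library pass each example together with its integer index, it is presumably a small helper along these lines (the `index` field name here is only an assumption):

```python
# Hypothetical sketch of add_index, not taken from the PR: with with_indices=True
# the mapped function receives (example, idx), so it simply records the row index.
# The actual field name used by the script may differ.
def add_index(example, idx):
    example["index"] = idx
    return example
```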
Motivation
Modifications
Related Issues
Accuracy Test
Benchmark & Profiling
Checklist