not prefetching images when not needed #8676
base: develop
Conversation
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
cvat/apps/dataset_manager/task.py (1)
127-130: Enhance error handling with a more specific exception message
The error handling for StopIteration is good, but the error message could be more descriptive. Consider adding more context to the error message:
- raise models.Job.DoesNotExist()
+ raise models.Job.DoesNotExist(f"Job with id {pk} does not exist")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
cvat/apps/dataset_manager/task.py (1 hunks)
🔇 Additional comments (2)
cvat/apps/dataset_manager/task.py (2)
116-124: LGTM: Improved queryset handling to prevent memory leaks
The introduction of the is_custom_queryset flag and the modified control flow help prevent unnecessary caching of querysets, which aligns with the PR's objective of addressing memory leaks.
116-130: Verify performance impact of queryset changes
While the changes address memory leaks, we should verify that they don't introduce performance regressions in frequently accessed jobs.
Let's analyze the queryset usage patterns:
✅ Verification successful
Based on the search results, I can now make an informed assessment about the queryset changes and their performance impact. Let me generate the final response:
Queryset changes are safe and follow established patterns
The changes to job queryset handling are consistent with existing patterns across the codebase. Key observations:
- The modified code path in dataset_manager/task.py follows the same queryset optimization patterns used in other critical paths like engine/views.py and quality_control/quality_reports.py
- The caching behavior (get_cached) is used selectively only when custom querysets are provided, which is the correct approach
- The prefetch logic remains intact through add_prefetch_info(), which is crucial for JobAnnotation performance
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for patterns of job queryset usage that might be affected by the caching changes
# Look for job queryset usage patterns
rg -A 5 "Job\.objects" cvat/apps/
# Look for potential performance-critical paths
rg -A 5 "JobAnnotation" cvat/apps/
Length of output: 39203
Force-pushed from 3ca5a3f to e7d7860
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files
@@ Coverage Diff @@
## develop #8676 +/- ##
===========================================
- Coverage 74.05% 74.05% -0.01%
===========================================
Files 409 409
Lines 43783 43792 +9
Branches 3984 3984
===========================================
+ Hits 32425 32431 +6
- Misses 11358 11361 +3
@Eldies, Could you please provide the difference in memory usage and number of db queries (before/after the patch)?
Generally, it works well for me 👍
I have only a few small comments.
cvat/apps/dataset_manager/task.py (outdated)
            Prefetch('segment__task__label_set', queryset=label_qs),
            Prefetch('segment__task__project__label_set', queryset=label_qs),
        )

-    def __init__(self, pk, *, is_prefetched=False, queryset=None):
+    def __init__(self, pk, *, is_prefetched: bool = False, queryset: QuerySet = None, prefetch_images: bool = True):
- I guess it's not the desired approach to have both is_prefetched and prefetch_images options, considering they are unrelated. Additionally, the name is_prefetched doesn't accurately reflect its purpose, as it appears to create a lock for the database row.
- I wonder if it would be better to set prefetch_images=False by default and explicitly pass prefetch_images=True only when needed?
@zhiltsov-max, I'm also unsure why we lock the job row only from TaskAnnotation. For instance, why don't we lock the row when updating job annotations directly?
I'm talking about this code:
if is_prefetched:
    self.db_job: models.Job = queryset.select_related(
        'segment__task'
    ).select_for_update().get(id=pk)
else:
    self.db_job: models.Job = get_cached(queryset, pk=int(pk))
I can't say, this conditional logic was added in ba74709.
For instance, why don't we lock the row when updating job annotations directly?
I'm not sure what you mean here. Could you explain it in more detail? I can guess that the whole update was supposed to be a single request, so that no lock is needed. Maybe there was some deadlock somewhere.
set prefetch_images=False by default
As far as I understand from https://github.com/cvat-ai/cvat/pull/5160/files#diff-ed7ab63c7c54f5d87f982240b298e7830de8e01da28b819779b44bc601db6f7bR74, is_prefetched is actually related to my prefetch_images - it was meant to determine whether images should be prefetched, but somewhere along the way this behaviour was broken.
select_for_update was there earlier, from (at least) ae6a489 (in cvat/apps/engine/annotation.py), and was removed for the case when images are to be prefetched.
Since is_prefetched is only used in TaskAnnotation.init_from_db, and in all the other cases the lock was removed two years ago with no problems emerging (?), I believe that the lock is not required and is_prefetched can be removed.
Made a commit which removes it.
> Since is_prefetched is only used in TaskAnnotation.init_from_db and in all the other cases lock was removed two years ago and no problems emerged (?), I believe that the lock is not required and is_prefetched can be removed.

I do not agree with that. Maybe such problems did occur. Imagine the situation when 2 parallel PATCH requests come to update task annotations: if we just remove select_for_update, the result will be unexpected (at first glance, the same problem exists now for jobs). Probably we should not lock the database job or task row when updating annotations, but rather use a lock per annotations-update action.
I investigated a bit more, and there are only two cases when this lock happens on some changes in the db: when a task is added to a project and when a dataset is imported as a project. In all the other cases, this lock happens when no changes are applied, e.g. task export.
But the lock is required on export, according to https://github.com/cvat-ai/cvat/blob/develop/cvat/apps/dataset_manager/task.py#L1087, therefore I am returning the option, but with another, more descriptive name.
I guess we actually need to figure out when we don't need to lock. There must be at least the following clients of this class:
- job, task, project import, export
- job, task annotation changes (patch, put), honeypot reroll in a job (can remove annotations on the changed honeypot frames)
- quality computation (~export), backup (~export), autoannotation (~import)
It looks to me that in most, if not all, of the cases we do need to lock data.
- For exporting, the lock is needed to retrieve the relevant data from the DB. If there are multiple DB tables or if there is a batched load from a single table, we do need a lock. We also need image meta.
- For importing, the lock is needed because the current logic is to overwrite the existing data. This logic has been asked to be changed many times; however, this is how it is supposed to be now. Image meta is needed as well, as we need to know image sizes for some of the formats and we need to match image names from the uploaded annotations.
- For annotation changes in general, there is a writing step, for sure. But there is also a reading step, which is needed to compute the diff and record it for analytics.
We don't need to hold the lock for the whole operation though. The lifetime of a lock is bound to the transaction, and transactions are used to control it in the code. For exporting, we typically need it only to prefetch everything in the beginning. For importing, we probably need to lock 2 times: to read and to (over-)write.
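For illustration, a rough sketch of holding the lock only for the prefetch step rather than the whole export. This is not the PR's actual code; the import path and the related name in the comment are assumptions.

```python
from django.db import transaction

from cvat.apps.engine import models  # assumed import path


def load_job_for_export(pk: int):
    # select_for_update() keeps the row locked only until the enclosing
    # transaction ends, so the lock covers the reads below and nothing else.
    with transaction.atomic():
        job = (
            models.Job.objects
            .select_for_update()
            .select_related('segment__task')
            .get(id=pk)
        )
        annotations = list(job.labeledshape_set.all())  # assumed related name
    # The lock is released here; serialization can run without blocking writers.
    return job, annotations
```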
cvat/apps/dataset_manager/task.py (outdated)
@@ -93,6 +94,12 @@ def add_prefetch_info(cls, queryset):
        ])
        label_qs = JobData.add_prefetch_info(label_qs)

        task_data_queryset = models.Data.objects.select_related('video')
I guess task_data_queryset should be models.Data.objects.all() by default, and select_related('video') should also be called only when we need to obtain video data details. (In my case, the number of database queries is also reduced by 2 * len(jobs) for video tasks.)
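A minimal sketch of that suggestion (the helper name and flag are hypothetical):

```python
from cvat.apps.engine import models  # assumed import path


def make_task_data_queryset(need_video_details: bool = False):
    # Keep the base queryset lean; only join the video table when the caller
    # actually needs video details.
    queryset = models.Data.objects.all()
    if need_video_details:
        queryset = queryset.select_related('video')
    return queryset
```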
done
cvat/apps/dataset_manager/task.py (outdated)
@@ -1018,7 +1022,7 @@ def put_job_data(pk, data):
@plugin_decorator
@transaction.atomic
def patch_job_data(pk, data, action):
How about get_job_data, put_job_data? Could you please check all places where JobAnnotation is used? (not only in OSS)
checked them and set prefetch_images=True when needed
changelog.d/20241113_130658_dmitrii.lavrukhin_no_queryset_cache.md (outdated; resolved)
Force-pushed from 0927df6 to 399cae8
Force-pushed from 9e2f4f0 to 1c2b19d
Force-pushed from 1c2b19d to a091103
…e.md Co-authored-by: Maria Khrustaleva <[email protected]>
Measuring consumed memory with this memray.patch:
On develop: [memory profile screenshot]
With this PR: [memory profile screenshot]
Okay, it seems there are still some places that could be optimized (e.g., fetching the task data queryset once for the entire task and then passing it to each JobAnnotation), but that can be addressed in future PRs.
If the other guys @zhiltsov-max, @azhavoro, @SpecLad agree that we could delete select_for_update here and fix potential issues with parallel imports in a separate PR - LGTM.
@Eldies, please don't forget to check if any changes are required in private repositories.
Quality Gate passed
I returned …
- What I think should be done in the PR is that the cache utilization should be improved. If it's possible to reuse the cache for multiple jobs, it should be reused. Please check if some manual "joins" can be useful in relevant cases (like here).
- Please review the locking logic, according to not prefetching images when not needed #8676 (comment).
- Thank you for the profiling results here: not prefetching images when not needed #8676 (comment). The number of requests seems quite big to me; do you have some breakdown of what is being queried? It's approximately 100 requests per job, or ~3 per image.
@Eldies, please try to enable the silk profiler; you'll get a picture like this: [profiler screenshot]. Note that you might need to use …
These 2 might be optimized by #8275 or by a similar approach with changing …
These are the ones I'd look at more closely; it feels like some prefetching could be added.
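For reference, a minimal sketch of how django-silk is usually wired into a Django project for local profiling (CVAT's development setup may already do this differently):

```python
# settings.py (development only)
INSTALLED_APPS += ['silk']
MIDDLEWARE += ['silk.middleware.SilkyMiddleware']

# urls.py
from django.urls import include, path

urlpatterns += [path('silk/', include('silk.urls', namespace='silk'))]
```

After that, the per-request query breakdown is available under /silk/ on the development server.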
Co-authored-by: Maxim Zhiltsov <[email protected]>
For every case when JobAnnotation is called from TaskAnnotation, I passed a db_task so that all JobAnnotations could use the same, shared db_task. Also added prefetching to TaskAnnotation. But now TaskAnnotation.add_prefetch_info somewhat duplicates JobAnnotation.add_prefetch_info. Something like this could remove the duplication:
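Roughly, a hypothetical sketch of that idea (JobAnnotation and models refer to the objects already used in cvat/apps/dataset_manager/task.py; the related path and helper name are assumptions):

```python
from django.db.models import Prefetch

from cvat.apps.engine import models  # assumed import path


def add_task_prefetch_info(task_queryset):
    # Delegate the per-job prefetch setup to JobAnnotation.add_prefetch_info
    # so TaskAnnotation does not repeat it. The related path
    # 'segment_set__job_set' is an assumed route through the model graph.
    job_qs = JobAnnotation.add_prefetch_info(models.Job.objects.all())
    return task_queryset.prefetch_related(
        Prefetch('segment_set__job_set', queryset=job_qs),
    )
```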
but it will add one request, so I am not sure whether it is a good idea or not. It is 5396 db requests now, …
return queryset.select_related(
    'segment',
    'segment__task',
FYI, I don't think this impacts memory use heavily at the moment. It seems that using select_related results in different Segment and Task objects in Python, even if they are actually the same DB row. prefetch_related, however, results in the same objects with the same ids. As there are many segments using the same task, it makes sense to use prefetch_related instead in such cases, if memory use is the question. prefetch_related will result in separate requests though.
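A small illustration of that difference (assuming the Job → Segment → Task relations discussed above; import path is an assumption):

```python
from cvat.apps.engine import models  # assumed import path

# select_related joins in SQL, so every Job row materializes its own
# Segment/Task instances, even when they map to the same DB row:
jobs = list(models.Job.objects.select_related('segment__task'))
distinct_task_objects = {id(j.segment.task) for j in jobs}  # ~ one per job

# prefetch_related fetches related rows in separate queries and shares the
# resulting instances, so jobs of the same task point to the same object:
jobs = list(models.Job.objects.prefetch_related('segment__task'))
distinct_task_objects = {id(j.segment.task) for j in jobs}  # ~ one per task
```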
@@ -786,11 +799,34 @@ def import_annotations(self, src_file, importer, **options):

        self.create(job_data.data.slice(self.start_frame, self.stop_frame).serialize())


class TaskAnnotation:
Please check ProjectAnnotationAndData in project.py as well.
Looks great! However, I can still see some suspicious numbers in these lines: [profiler screenshot]
From the regex, I can guess tasks and jobs can be mixed into some other requests, but still, the numbers are big. Could you check it, please? And here: [profiler screenshot] Probably, it should be more like 100 (job updated) + 1 (task updated) + 1 (project updated), if I'm not missing something.
Actually, maybe we could call some functions from JobAnnotation in the TaskAnnotation prefetch? I think we know there that we will need jobs at some point later.
The proposed code snippet doesn't look right. We don't need all the task prefetch if we're working with just 1 job. But we know that we're going to work with jobs if we're working with TaskAnnotations. Do you have updated memory metrics for the import use case after the optimizations?
On every job update there is also a task update - in …
On every job update, its task is touched, and when the task is touched, it is read from the db - …
On each of 100+ job updates: 200 reads, because in … All the other reads are in … Now there are 4401 db queries, …
For some reason, when I try to use silk, the UI does not show me my tasks. I'll try to investigate it.
Ok, it feels like a good point to stop optimizing tasks for now. Do you have updated memory measurements for the import use case?
Memory consumption is the same as earlier, ~0.5 GB less than on the develop branch.
It's only configured for working with the development setup (VS Code debug tasks), so maybe this is the reason. Make sure you're connecting to the right DB and server. You'll need to start …
Motivation and context
While importing annotations into a task, all jobs of the task are loaded from the DB into RAM. Related data is prefetched, specifically all image models which belong to the task.
As a result, each job holds its own copy of all the image models.
If there are many jobs and many images in the task, a lot of memory can be occupied.
Images are not used on annotation import/delete, hence: do not prefetch images in these cases.
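A rough sketch of the idea (the flag name follows the PR discussion; the prefetch path and import are assumptions):

```python
from cvat.apps.engine import models  # assumed import path


def make_job_queryset(prefetch_images: bool = False):
    # Always pull the segment/task relations the annotation code needs, but
    # only prefetch the (potentially huge) image set when the caller will
    # actually read image data, e.g. on export.
    queryset = models.Job.objects.select_related('segment__task')
    if prefetch_images:
        queryset = queryset.prefetch_related('segment__task__data__images')
    return queryset
```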
How has this been tested?
Checklist
develop branch (cvat-canvas, cvat-core, cvat-data and cvat-ui)
License
Feel free to contact the maintainers if that's a concern.
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
JobAnnotation class for clearer control flow and initialization.