not prefetching images when not needed #8676

Open · wants to merge 21 commits into develop from dl/no-queryset-cache
Conversation

@Eldies (Contributor) commented Nov 11, 2024

Motivation and context

While importing annotations into a task, all jobs of the task are loaded from the DB into RAM. Related data is prefetched, specifically all image models that belong to the task.
As a result, each job holds its own copy of all the image models.

If the task has many jobs and many images, this can occupy a lot of memory. The images are not used during annotation import or delete, so this PR stops prefetching them in these cases.
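A minimal sketch of the idea (the prefetch_images flag and the relation path below are illustrative assumptions, not the exact diff):

# Sketch only: parameter name and relation path are assumptions for illustration.
from django.db.models import QuerySet

from cvat.apps.engine import models


def build_job_queryset(prefetch_images: bool = False) -> QuerySet:
    queryset = models.Job.objects.select_related('segment__task')
    if prefetch_images:
        # Only pay the memory cost of materializing every image row when the
        # caller actually needs image metadata (e.g. on export).
        queryset = queryset.prefetch_related('segment__task__data__images')
    return queryset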

How has this been tested?

Checklist

  • I submit my changes into the develop branch
  • I have created a changelog fragment
  • I have updated the documentation accordingly
  • I have added tests to cover my changes
  • I have linked related issues (see GitHub docs)
  • I have increased versions of npm packages if it is necessary
    (cvat-canvas,
    cvat-core,
    cvat-data and
    cvat-ui)

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

Summary by CodeRabbit

  • New Features

    • Enhanced job retrieval process with improved error handling.
    • Introduced a mechanism for custom querysets in job initialization.
  • Bug Fixes

    • Improved robustness in job fetching to prevent failures when jobs are not found.
  • Refactor

    • Updated logic in the JobAnnotation class for clearer control flow and initialization.

coderabbitai bot (Contributor) commented Nov 11, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The changes in the JobAnnotation class within the cvat/apps/dataset_manager/task.py file introduce a new variable, is_custom_queryset, to manage job retrieval based on the type of queryset provided. The control flow for job fetching is modified to handle different conditions, enhancing error handling with a try-except block for StopIteration. Additionally, the import_annotations method is updated to initialize job data with a new JobData instance, ensuring proper job annotation creation. The overall structure remains intact while improving robustness and clarity.
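A rough sketch of the described control flow (helper import locations and the exact lookup are assumptions, not the literal diff):

# Rough sketch of the summarized control flow; names and lookup are assumptions.
from cvat.apps.engine import models
from cvat.apps.dataset_manager.util import get_cached  # import path assumed


def fetch_job(pk, queryset=None):
    is_custom_queryset = queryset is not None
    if not is_custom_queryset:
        queryset = models.Job.objects.select_related('segment__task')

    if is_custom_queryset:
        try:
            # A caller-supplied queryset may already be evaluated, so pick the
            # job out of it instead of issuing (and caching) a new query.
            return next(job for job in queryset if job.id == int(pk))
        except StopIteration:
            raise models.Job.DoesNotExist()
    return get_cached(queryset, pk=int(pk))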

Changes

File Path | Change Summary
cvat/apps/dataset_manager/task.py | Introduced is_custom_queryset in the JobAnnotation class; modified job fetching logic and error handling; updated the import_annotations method to use JobData.

Poem

In the garden where jobs do play,
A custom path now lights the way.
With queries clear and errors caught,
Annotations bloom, as they ought.
So hop along, let changes ring,
For every job, a joyful spring! 🐇✨



@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
cvat/apps/dataset_manager/task.py (1)

127-130: Enhance error handling with more specific exception message

The error handling for StopIteration is good, but the error message could be more descriptive.

Consider adding more context to the error message:

-                raise models.Job.DoesNotExist()
+                raise models.Job.DoesNotExist(f"Job with id {pk} does not exist")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d315485 and 3ca5a3f.

📒 Files selected for processing (1)
  • cvat/apps/dataset_manager/task.py (1 hunks)
🔇 Additional comments (2)
cvat/apps/dataset_manager/task.py (2)

116-124: LGTM: Improved queryset handling to prevent memory leaks

The introduction of is_custom_queryset flag and the modified control flow helps prevent unnecessary caching of querysets, which aligns with the PR's objective of addressing memory leaks.


116-130: Verify performance impact of queryset changes

While the changes address memory leaks, we should verify that they don't introduce performance regressions in frequently accessed jobs.

Let's analyze the queryset usage patterns:

✅ Verification successful

Based on the search results, I can now make an informed assessment about the queryset changes and their performance impact. Let me generate the final response:

Queryset changes are safe and follow established patterns

The changes to job queryset handling are consistent with existing patterns across the codebase. Key observations:

  • The modified code path in dataset_manager/task.py follows the same queryset optimization patterns used in other critical paths like engine/views.py and quality_control/quality_reports.py
  • The caching behavior (get_cached) is used selectively only when custom querysets are provided, which is the correct approach
  • The prefetch logic remains intact through add_prefetch_info() which is crucial for JobAnnotation performance
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for patterns of job queryset usage that might be affected by the caching changes

# Look for job queryset usage patterns
rg -A 5 "Job\.objects" cvat/apps/

# Look for potential performance-critical paths
rg -A 5 "JobAnnotation" cvat/apps/

Length of output: 39203

@Eldies changed the title from "not using queryset cache when not needed" to "not prefetching images when not needed" on Nov 12, 2024
codecov-commenter commented Nov 12, 2024

Codecov Report

Attention: Patch coverage is 93.75000% with 2 lines in your changes missing coverage. Please review.

Project coverage is 74.05%. Comparing base (7d0205b) to head (021de6e).
Report is 1 commit behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8676      +/-   ##
===========================================
- Coverage    74.05%   74.05%   -0.01%     
===========================================
  Files          409      409              
  Lines        43783    43792       +9     
  Branches      3984     3984              
===========================================
+ Hits         32425    32431       +6     
- Misses       11358    11361       +3     
Components | Coverage Δ
cvat-ui | 78.36% <ø> (-0.01%) ⬇️
cvat-server | 70.35% <94.28%> (+<0.01%) ⬆️

@Marishka17 (Contributor) commented:
@Eldies, could you please provide the difference in memory usage and the number of DB queries (before/after the patch)?

@Marishka17 (Contributor) left a comment:

Generally, it works well for me 👍
I have only a few small comments.


Prefetch('segment__task__label_set', queryset=label_qs),
Prefetch('segment__task__project__label_set', queryset=label_qs),
)

-    def __init__(self, pk, *, is_prefetched=False, queryset=None):
+    def __init__(self, pk, *, is_prefetched: bool = False, queryset: QuerySet = None, prefetch_images: bool = True):
Contributor:

  • I guess it's not the desired approach to have both is_prefetched and prefetch_images options, considering they are unrelated. Additionally, the name is_prefetched doesn't accurately reflect its purpose, as it appears to create a lock for the database row.

  • I wonder if it would be better to set prefetch_images=False by default and explicitly pass prefetch_images=True only when needed?

@zhiltsov-max, I'm also unsure why we lock the job row only from TaskAnnotation. For instance, why don't we lock the row when updating job annotations directly?
I'm talking about this code:

if is_prefetched:
    self.db_job: models.Job = queryset.select_related(
        'segment__task'
    ).select_for_update().get(id=pk)
else:
    self.db_job: models.Job = get_cached(queryset, pk=int(pk))

Contributor:

I can't say, this conditional logic was added in ba74709.

For instance, why don't we lock the row when updating job annotations directly?

I'm not sure what you mean here. Could you explain it in more detail? I can guess that the whole update was supposed to be a single request, so that no lock is needed. Maybe there was a deadlock somewhere.

@Eldies (Contributor Author):

set prefetch_images=False by default

@Eldies (Contributor Author):

As far as I understand from https://github.com/cvat-ai/cvat/pull/5160/files#diff-ed7ab63c7c54f5d87f982240b298e7830de8e01da28b819779b44bc601db6f7bR74,
is_prefetched is actually related to my prefetch_images - it was meant to determine whether images should be prefetched. But somewhere along the way this behaviour was broken.

select_for_update was there earlier, from (at least) ae6a489 (in cvat/apps/engine/annotation.py), and was removed for the case when images are to be prefetched.

@Eldies (Contributor Author) commented Nov 18, 2024:

Since is_prefetched is only used in TaskAnnotation.init_from_db, and in all the other cases the lock was removed two years ago with no problems emerging (?), I believe that the lock is not required and is_prefetched can be removed.

Made a commit that removes it.

Contributor:

Since is_prefetched is only used in TaskAnnotation.init_from_db and in all the other cases lock was removed two years ago and no problems emerged (?), I believe that the lock is not required and is_prefetched can be removed.

I do not agree with that. Maybe such problems did exist. Imagine a situation where two parallel PATCH requests come in to update task annotations: if we just remove select_for_update, the result will be unpredictable (at first glance, the same problem now exists for jobs). Probably we should not lock the database job or task row when updating annotations, but use a lock per annotation update action.
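For illustration, a simplified sketch of how a row lock serializes such concurrent updates (not the actual CVAT handler):

# Simplified illustration: two concurrent PATCH requests serialize on the job row.
from django.db import transaction

from cvat.apps.engine import models


def patch_job_annotations(pk: int, data: dict, action: str):
    with transaction.atomic():
        # Blocks until any other transaction holding this row's lock commits,
        # so the second request sees the first request's result.
        db_job = models.Job.objects.select_for_update().get(pk=pk)
        # ... read the current annotations, apply `action`/`data`, save ...
        return db_job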

@Eldies (Contributor Author):

I investigated a bit more, and there are only two cases when this lock happens together with changes in the DB: when a task is added to a project and when a dataset is imported as a project. In all other cases, the lock happens when no changes are applied, e.g. task export.
But the lock is required on export, according to https://github.com/cvat-ai/cvat/blob/develop/cvat/apps/dataset_manager/task.py#L1087,
so I am bringing the option back, but under a different, more descriptive name.

Contributor:

I guess we actually need to figure out when we don't need to lock. There must be at least the following clients of this class:

  • job, task, project import, export
  • job, task annotation changes (patch, put), honeypot reroll in a job (can remove annotations on the changed honeypot frames)
  • quality computation (~export), backup (~export), autoannotation (~import)

It looks to me that in most, if not all, of the cases we do need to lock data.

  • For exporting, the lock is needed to retrieve the relevant data from the DB. If there are multiple DB tables or a batched load from a single table, we do need a lock. We also need image meta.
  • For importing, the lock is needed because the current logic is to overwrite the existing data. This logic has been asked to be changed many times; however, this is how it is supposed to work now. Image meta is needed as well, since we need to know image sizes for some of the formats and we need to match image names from the uploaded annotations.
  • For annotation changes in general, there is certainly a writing step. But there is also a reading step, which is needed to compute the diff and record it for analytics.

We don't need to hold the lock for the whole operation, though. The lifetime of a lock is bound to the transaction, and transactions are used to control locking in the code. For exporting, we typically need the lock only to prefetch everything at the beginning. For importing, we probably need to lock twice: to read and to (over-)write.
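A sketch of that scoping, assuming a hypothetical import flow (structure and relation names are assumptions, not CVAT's actual code):

# Sketch of scoping locks to short transactions during an import.
from django.db import transaction

from cvat.apps.engine import models


def import_job_annotations(pk: int, parsed_annotations):
    with transaction.atomic():
        # Lock #1: take the row lock only while reading the existing data.
        db_job = models.Job.objects.select_for_update().get(pk=pk)
        existing_shapes = list(db_job.labeledshape_set.all())  # relation name assumed

    # ... convert and merge `parsed_annotations` outside of any transaction ...

    with transaction.atomic():
        # Lock #2: take the lock again only for the overwrite; it is released at commit.
        db_job = models.Job.objects.select_for_update().get(pk=pk)
        db_job.labeledshape_set.all().delete()
        # ... bulk_create the new shapes here ...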

@@ -93,6 +94,12 @@ def add_prefetch_info(cls, queryset):
])
label_qs = JobData.add_prefetch_info(label_qs)

task_data_queryset = models.Data.objects.select_related('video')
Contributor:

I guess task_data_queryset should be models.Data.objects.all() by default, and select_related('video') should be called only when we need to obtain video data details. (In my case, the number of database queries is also reduced by 2 * len(jobs) for video tasks.)
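A sketch of that suggestion (need_video_details is an assumed guard name, not an actual flag):

# Sketch only: the guard name is an assumption.
from cvat.apps.engine import models


def build_task_data_queryset(need_video_details: bool = False):
    queryset = models.Data.objects.all()
    if need_video_details:
        # Join the video row only when video details are actually required.
        queryset = queryset.select_related('video')
    return queryset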

@Eldies (Contributor Author):

done

@@ -1018,7 +1022,7 @@ def put_job_data(pk, data):
@plugin_decorator
@transaction.atomic
def patch_job_data(pk, data, action):
Contributor:

How about get_job_data, put_job_data? Could you please check all places where JobAnnotation is used? (not only in OSS)

@Eldies (Contributor Author):

Checked them and set prefetch_images=True where needed.

@Eldies force-pushed the dl/no-queryset-cache branch 2 times, most recently from 9e2f4f0 to 1c2b19d, on November 17, 2024 21:12
@Eldies (Contributor Author) commented Nov 19, 2024

Measured memory consumption with this memray.patch, using the task and annotations attached to the issue (100 jobs, 4098 images):

  1. run locally (in docker)
  2. import task from backup
  3. apply patch
  4. python manage.py shell
# preparations
In [1]: import cvat
In [2]: from django.db import reset_queries, connections
In [3]: cvat.apps.dataset_manager.task.delete_task_data(<task id>)
In [4]: reset_queries()
# run import (and measure memory consumption)
In [5]: cvat.apps.dataset_manager.task.import_task_annotations(<path to annotations archive>, <task id>, 'Segmentation mask 1.1', True)
# count queries
In [6]: sum([len(conn.queries) for conn in connections.all()])

On develop:
Peak memory usage: 2.5 GiB
Number of queries: 12287

With this PR:
Peak memory usage: 2.0 GiB
Number of queries: 11287

@Marishka17 (Contributor) left a comment:

Okay, it seems there are still some places that could be optimized (e.g., fetching the task data queryset once for the entire task and then passing it to each JobAnnotation), but that can be addressed in future PRs.

If the other guys @zhiltsov-max, @azhavoro, @SpecLad agree that we can delete select_for_update here and fix potential issues with parallel imports in a separate PR - LGTM.
@Eldies, please don't forget to check whether any changes are required in private repositories.

sonarcloud bot commented Nov 21, 2024

@Eldies (Contributor Author) commented Nov 21, 2024

I brought back select_for_update because apparently it is needed on export, but I changed the argument name to a more descriptive one and removed unnecessary differences in the conditional locking logic.

@zhiltsov-max (Contributor) left a comment:

  • What I think should be done in this PR is to improve cache utilization. If it's possible to reuse the cache for multiple jobs, it should be reused. Please check whether some manual "joins" can be useful in the relevant cases (like here).
  • Please review the locking logic, according to the discussion above in #8676 (comment).
  • Thank you for the profiling results in #8676 (comment). The number of requests seems quite big to me; do you have a breakdown of what is being queried? It's approximately 100 requests per job, or ~3 per image.

@Eldies (Contributor Author) commented Nov 24, 2024

do you have some breakdown on what is being queried? It's approximately 100 requests per job or ~3 per image.

In: Counter([  # assumes: import re; from collections import Counter; from django.db import connection
    m.groups()
    for query in connection.queries
    for m in [re.search('FROM "([^"]+)"', query['sql'])]
    if m and len(m.groups()) >= 1
])
Out: 
Counter({('engine_label',): 2002,
         ('engine_job',): 1526,
         ('engine_data',): 1001,
         ('engine_image',): 1001,
         ('engine_skeleton',): 1001,
         ('engine_attributespec',): 1001,
         ('auth_user',): 1001,
         ('organizations_organization',): 1000,
         ('webhooks_webhook',): 216,
         ('engine_labeledimage',): 200,
         ('engine_labeledshape',): 200,
         ('engine_labeledtrack',): 200,
         ('engine_task',): 109,
         ('engine_trackedshapeattributeval',): 100,
         ('engine_trackedshape',): 100,
         ('engine_labeledtrackattributeval',): 100,
         ('engine_labeledshapeattributeval',): 100,
         ('engine_labeledimageattributeval',): 100,
         ('engine_validationlayout',): 1,
         ('engine_video',): 1,
         ('engine_segment',): 1})

@zhiltsov-max (Contributor) commented Nov 25, 2024

@Eldies, please try enabling the silk profiler; you'll get a picture like this:

[screenshot: silk profiler request breakdown]

Note that you might need to use @silk_profile or call the import or export function directly from some API endpoint to collect the metrics, instead of calling it in an rq job.

     ('auth_user',): 1001,
     ('organizations_organization',): 1000,

These 2 might be optimized by #8275 or by a similar approach with changing subj.obj.id to subj.obj_id.
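A generic illustration of that pattern (the assignee field below is just an example, not necessarily the field in question):

# Generic Django illustration of the subj.obj.id -> subj.obj_id change.
from cvat.apps.engine import models

job = models.Job.objects.get(pk=1)
user_id = job.assignee.id   # may load the whole User row -> one extra query
user_id = job.assignee_id   # reads the FK column stored on the job row -> no query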

('engine_label',): 2002,
('engine_job',): 1526,
('engine_data',): 1001,
('engine_image',): 1001,
('engine_skeleton',): 1001,
('engine_task',): 109,

These are the ones I'd look at more closely; it feels like some prefetching could be added.

@Eldies (Contributor Author) commented Nov 26, 2024

For every case when JobAnnotation is created from TaskAnnotation, I now pass a db_task so that all JobAnnotation instances can use the same shared db_task.

Added prefetching to TaskAnnotation. However, TaskAnnotation.add_prefetch_info now largely duplicates JobAnnotation.add_prefetch_info. Something like this could remove the duplication:

-        return queryset.select_related(
-            'segment',
-            'segment__task',
-        ).prefetch_related(
-            'segment__task__project',
-            'segment__task__owner',
-             ....
-       )
+        return queryset.select_related(
+            'segment',
+        ).prefetch_related(
+            Prefetch('segment__task', queryset=TaskAnnotation.add_prefetch_info(models.Task.objects))
+        )

but it will add one more request, so I am not sure whether it is a good idea.

It is 5396 db requests now,

In: Counter([
    m.groups()
    for query in connection.queries
    for m in [re.search('FROM "([^"]+)"', query['sql'])]
    if m and len(m.groups()) >= 1
])
Out: 
Counter({('engine_job',): 1419,
         ('webhooks_webhook',): 216,
         ('engine_labeledimage',): 200,
         ('engine_labeledshape',): 200,
         ('engine_labeledtrack',): 200,
         ('engine_task',): 109,
         ('engine_trackedshapeattributeval',): 100,
         ('engine_trackedshape',): 100,
         ('engine_labeledtrackattributeval',): 100,
         ('engine_labeledshapeattributeval',): 100,
         ('engine_labeledimageattributeval',): 100,
         ('engine_label',): 2,
         ('auth_user',): 1,
         ('engine_data',): 1,
         ('engine_image',): 1,
         ('engine_skeleton',): 1,
         ('engine_attributespec',): 1,
         ('engine_validationlayout',): 1,
         ('engine_segment',): 1,
         ('organizations_organization',): 1})

Comment on lines 103 to 105
return queryset.select_related(
'segment',
'segment__task',
@zhiltsov-max (Contributor) commented Nov 26, 2024:

FYI, I don't think this impacts memory use heavily at the moment. It seems that using select_related results in different Segment and Task objects in Python, even if they are actually the same DB row. prefetch_related, however, results in the same objects with the same ids. As there are many segments using the same task, it makes sense to use prefetch_related instead in such cases if memory use is the concern. prefetch_related will result in separate requests, though.
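A generic illustration of that identity difference (standard Django behavior, not CVAT-specific):

# select_related builds a fresh Task instance from every joined row:
from cvat.apps.engine import models

jobs = list(models.Job.objects.select_related('segment__task'))
task_objects = {id(job.segment.task) for job in jobs}      # one instance per job

# prefetch_related fetches each distinct task once and shares the instance:
jobs = list(models.Job.objects.prefetch_related('segment__task'))
task_objects = {id(job.segment.task) for job in jobs}      # one instance per distinct task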

@@ -786,11 +799,34 @@ def import_annotations(self, src_file, importer, **options):

self.create(job_data.data.slice(self.start_frame, self.stop_frame).serialize())


class TaskAnnotation:
@zhiltsov-max (Contributor) commented Nov 26, 2024:

Please check ProjectAnnotationAndData in project.py as well.

@zhiltsov-max (Contributor) commented Nov 26, 2024

It is 5396 db requests now

Looks great! However, I can still see some suspicious numbers in these lines:

('engine_task',): 109,
('engine_job',): 1419,

From the regex, I can guess tasks and jobs can be mixed into some other requests, but still, the numbers are big. Could you check it please?

And here:
('webhooks_webhook',): 216,

Probably, it should be like 100 (job updated) + 1 (task updated) + 1 (project updated), if I'm not missing something.

But now TaskAnnotation.add_prefetch_info kinda duplicates JobAnnotation.add_prefetch_info.

Actually, maybe we could call some functions from JobAnnotation in TaskAnnotation prefetch? I think we know there that we will need jobs at some point later.

but it will add one request, so I am not sure is it a good idea or not.

The proposed code snippet doesn't look right. We don't need the full task prefetch if we're working with just one job. But we do know that we're going to work with jobs if we're working with TaskAnnotation.

Do you have updated memory metrics for the import use case after optimizations?

@Eldies (Contributor Author) commented Nov 27, 2024

And here: ('webhooks_webhook',): 216, Probably, it should be like 100 (job updated)

On every job update there is also a task update - in JobAnnotation._set_updated_date both the job and its task are touched.
There are 100+ job updates - therefore, 100+ task updates.
Not exactly 100, because there are a lot of annotations and TaskAnnotation._patch_data is called several times, and some jobs are updated twice.
So, 200+ webhook reads.

('engine_task',): 109,

On every job update, its task is touched, and when the task is touched, it is read from the DB: pre_save_resource_event reads the old instance.

('engine_job',): 1419,

On each of 100+ job updates:
1 read in pre_save_resource_event
1 read in __save_job_handler
1 read in __save_job__update_quality_metrics

200 reads were because in TaskData.meta_for_task the jobs were not ordered when prefetched for segments, so each db_segment.job_set.first() caused a DB query. Fixed it (see the sketch below).

All the other reads were in JobAnnotation initialization.
I redesigned it: now TaskAnnotation loads all the jobs it needs, along with all the needed data, once, and then passes the jobs to JobAnnotation.
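A sketch of the ordering fix mentioned above (model and relation names assumed):

# With an ordered prefetch, .first() is served from the prefetch cache; with an
# unordered one, Django appends order_by('pk'), loses the cache and issues one
# extra query per segment.
from django.db.models import Prefetch

from cvat.apps.engine import models

segments = models.Segment.objects.prefetch_related(
    Prefetch('job_set', queryset=models.Job.objects.order_by('id'))
)
for db_segment in segments:
    job = db_segment.job_set.first()  # no extra query per segment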

Now there are 4401 db queries,

Counter({('engine_job',): 318,
         ('webhooks_webhook',): 216,
         ('engine_labeledimage',): 200,
         ('engine_labeledshape',): 200,
         ('engine_labeledtrack',): 200,
         ('engine_task',): 109,
         ('engine_trackedshapeattributeval',): 100,
         ('engine_trackedshape',): 100,
         ('engine_labeledtrackattributeval',): 100,
         ('engine_labeledshapeattributeval',): 100,
         ('engine_labeledimageattributeval',): 100,
         ('organizations_organization',): 100,
         ('engine_label',): 4,
         ('engine_data',): 2,
         ('engine_skeleton',): 2,
         ('engine_attributespec',): 2,
         ('auth_user',): 2,
         ('engine_image',): 1,
         ('engine_validationlayout',): 1,
         ('engine_video',): 1,
         ('engine_segment',): 1})

For some reason, when I try to use silk, the UI does not show me my tasks. I'll try to investigate it.

@zhiltsov-max (Contributor):
Ok, it feels like a good point to stop optimizing tasks for now. Do you have updated memory measurements for the import use case?

@Eldies (Contributor Author) commented Nov 27, 2024

Memory consumption is the same as earlier, ~0.5 GiB less than on the develop branch.

@zhiltsov-max (Contributor):
For some reason when I try to use silk, the ui does not show me my tasks. I'll try to investigate it

It's only configured to work with the development setup (VS Code debug tasks), so maybe that is the reason. Make sure you're connecting to the right DB and server. You'll need to start docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build cvat_opa cvat_db cvat_redis_inmem cvat_redis_ondisk cvat_server cvat_vector, then launch the server debug task in VS Code, launch the UI manually with (cd cvat-ui && yarn run start), then go to localhost:3000 in the browser.
