
Conversation

@hh-hunter
Contributor

Hey,

This PR adds the Vuln Detector Plugin for CVE-2023-1177 (see Issue #302).

To verify the vulnerability, I used two different Docker environments: one for the vulnerable version and one for the fixed version.

The first step is to clone my vulnerability-environment repository locally:

git clone https://github.com/hh-hunter/ml-CVE-2023-1177
cd ml-CVE-2023-1177

Vulnerable image:

It takes several minutes for the service to become accessible; the exposed web service port is 15000.

docker-compose -f docker-compose-vulnerability.yml up

Safe image:

It takes several minutes for the service to become accessible; the exposed web service port is 15001.

docker-compose -f docker-compose-no-vulnerability.yml up

@hh-hunter
Contributor Author

Hi @maoning, I have completed this plugin. Please review it as soon as possible.

@hh-hunter
Contributor Author

Hi @maoning, how is the review going? Are there any problems?


@maoning maoning self-assigned this Dec 5, 2023
@tooryx tooryx self-assigned this Dec 28, 2023
@tooryx
Member

tooryx commented Dec 28, 2023

Hi @hh-hunter,

I will be taking a look into this PR in the upcoming days.

~tooryx

Bump the version of Tsunami used by the plugin to the latest.
Add terminating newline
With recent changes, `NetworkServiceUtils::isWebService` is more reliable and should be enough to detect whether the service is a web service.
Recent changes to `NetworkServiceUtils.buildWebApplicationRootUrl` have made it more reliable, and it should be sufficient to build the target URL (see the sketch after this list of commits).
Remove unused import
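
A minimal sketch of how these two helpers are typically used inside a Tsunami detector; the surrounding variable names are illustrative, not the plugin's actual code:

import com.google.tsunami.common.data.NetworkServiceUtils;
import com.google.tsunami.proto.NetworkService;

// Skip anything that is not a web service before probing it.
if (!NetworkServiceUtils.isWebService(networkService)) {
  return false;
}

// Build the base URL of the target web application, e.g. "http://host:15000/".
String targetUrl = NetworkServiceUtils.buildWebApplicationRootUrl(networkService);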
Member

@tooryx tooryx left a comment

Hi again @hh-hunter,

I made some changes and added some comments. Could you please take a look? And feel free to let me know if you disagree with anything.

~tooryx

@@ -0,0 +1,17 @@
# MLflow LFI/RFI CVE-2023-1177 Detector

This detector checks for MLflow LFI/RMI vulnerability (CVE-2023-1177).
Member

@tooryx tooryx Dec 28, 2023

L3: Did you mean RFI instead of RMI?

Member

It looks like there was an issue and this is still marked as LFI/RMI.

.replace(REPLACE_FLAG, currentModelName);
logger.atInfo().log("currentModelName: %s", currentModelName);
try {
HttpResponse createModeResponse =
Member

If creating the model fails, we should not try to update it with the following request. Please consider refactoring this so that requests are only performed if the previous request has been successful.

Contributor Author

I have added the logic to clean up the model.

Member

Thank you for adding the cleanup function! Although it is currently called only if: adding the model succeeded AND updating it succeeded AND reading the file failed. I think it should happen in all cases as soon as the model creation succeeded.

Also, the current code is a bit difficult to read (nested if). Please consider restructuring to exit early if something goes wrong instead, for example (pseudo-code):

HttpResponse createModeResponse = httpClient.sendAsIs(...)
if (createModeResponse.status().code() != 200 || !createModeResponse.bodyString().get().contains(DETECTION_STRING)) {
  // the model was not created, we do not need to cleanup
  return false;
}

// from this point on, the model was created, we will need cleanup for all exit paths
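
Building on that pseudo-code, a minimal sketch of the full early-exit structure with cleanup on every path once the model exists; createModel, updateModel, readArtifact, and cleanUpModel are hypothetical helpers, not the plugin's actual methods:

private boolean isServiceVulnerable(NetworkService networkService) {
  HttpResponse createModelResponse = createModel(networkService);
  if (createModelResponse.status().code() != 200) {
    // The model was never created, so there is nothing to clean up.
    return false;
  }
  try {
    HttpResponse updateModelResponse = updateModel(networkService);
    if (updateModelResponse.status().code() != 200) {
      return false;
    }
    HttpResponse readResponse = readArtifact(networkService);
    return readResponse.bodyString()
        .map(body -> body.contains(DETECTION_STRING))
        .orElse(false);
  } finally {
    // Runs on every exit path above, including the early returns.
    cleanUpModel(networkService);
  }
}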

@hh-hunter
Contributor Author

@tooryx Please review, thanks.

Member

@tooryx tooryx left a comment

Hi @hh-hunter,

I performed a second pass review. Feel free to reach out if you disagree or if you have any questions.

~tooryx

.setPublisher("TSUNAMI_COMMUNITY")
.setValue("CVE_2023_1177"))
.setSeverity(Severity.CRITICAL)
.setTitle("CVE-2023-1177 MLflow RFI")
Member

I think you can keep both LFI/RFI here, not just RFI.
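
For instance, a one-line suggested edit (the rest of the advisory builder stays unchanged):

.setTitle("CVE-2023-1177 MLflow LFI/RFI")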


import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.net.HttpHeaders.CONTENT_TYPE;
import static com.google.common.net.HttpHeaders.USER_AGENT;
import static com.google.tsunami.common.net.http.HttpRequest.*;
Member

Please do not use wildcard imports as they make code brittle and difficult to reason about, both for programmers and for tools.
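
For example, the wildcard line could be replaced with explicit static imports for just the members the detector actually uses; the member names below (the HttpRequest factory methods) are an assumption about which ones this plugin needs:

import static com.google.tsunami.common.net.http.HttpRequest.get;
import static com.google.tsunami.common.net.http.HttpRequest.post;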


@hh-hunter
Contributor Author

@tooryx Please review, thanks.

Reorder imports into alphabetical order
@tooryx
Member

tooryx commented Jan 4, 2024

Thank you for these changes. It looks good, I will try to get the change merged next week.

~tooryx

@tooryx
Member

tooryx commented Jan 4, 2024

Also, could you please rename the root directory of the plugin from mlflow_cve-2023-1177 to mlflow_cve_2023_1177, as hyphens are not accepted in plugin directories? This requires changing the settings.gradle file as well. I do not have direct access to push to your forked branch to perform the change myself.
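
A hypothetical sketch of the matching settings.gradle change; the exact file contents depend on the plugin template used in this repository:

// settings.gradle, after the directory rename
rootProject.name = 'mlflow_cve_2023_1177'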

@hh-hunter
Contributor Author

Okay, can you check my other submitted issues? It has been a very long time since the last one.

@copybara-service copybara-service bot merged commit 10a2d03 into google:master Jan 8, 2024