CVE-2023-1177 RFI Vulnerability in Machine Learning Lifecycle Platform, MLflow #310
Conversation
@maoning hi, I have completed this plugin, please review it as soon as possible.
@maoning hi, how is the review going? Is there a problem?
Hi @hh-hunter, I will be taking a look at this PR in the upcoming days. ~tooryx
Bump the version of Tsunami used by the plugin to the latest.
Add terminating newline
With recent changes, `NetworkServiceUtils::isWebService` is more reliable and should be enough to detect whether the service is a web service.
Recent changes to `NetworkServiceUtils.buildWebApplicationRootUrl` have made it more reliable, and it should be sufficient to build the target URL.
Remove unused import
tooryx
left a comment
Hi again @hh-hunter,
I made some changes and added some comments. Could you please take a look? And feel free to let me know if you disagree with anything.
~tooryx
@@ -0,0 +1,17 @@
# MLflow LFI/RFI CVE-2023-1177 Detector

This detector checks for MLflow LFI/RMI vulnerability (CVE-2023-1177).
L3: Did you mean RFI instead of RMI?
It looks like there was an issue and this is still marked as LFI/RMI.
...src/main/java/com/google/tsunami/plugins/detectors/cves/cve20231177/Cve20231177Detector.java
.replace(REPLACE_FLAG, currentModelName);
logger.atInfo().log("currentModelName: %s", currentModelName);
try {
  HttpResponse createModeResponse =
If creating the model fails, we should not try to update it with the following request. Please consider refactoring this so that requests are only performed if the previous request has been successful.
I have added the logic to clean up the model.
Thank you for adding the cleanup function! However, it is currently called only if adding the model succeeded AND updating it succeeded AND reading the file failed. I think it should happen in all cases as soon as the model creation has succeeded.
Also, the current code is a bit difficult to read (nested if). Please consider restructuring to exit early if something goes wrong instead, for example (pseudo-code):
HttpResponse createModeResponse = httpClient.sendAsIs(...)
if (createModeResponse.status().code() != 200 || !createModeResponse.bodyString().get().contains(DETECTION_STRING)) {
// the model was not created, we do not need to cleanup
return false;
}
// from this point on, the model was created, we will need cleanup for all exit paths
change vulnerability desc
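One way to make the cleanup unconditional once creation has succeeded is a try/finally around the remaining steps. The following is a minimal, self-contained sketch of that control flow only; `detect`, `cleanupRan`, and the boolean parameters are hypothetical stand-ins for the plugin's actual HTTP requests and state, not its real API:

```java
// Sketch of the early-exit-plus-guaranteed-cleanup pattern suggested in the
// review. All names here are hypothetical stand-ins for the plugin's requests.
public class CleanupPattern {
  // Records whether cleanup ran, so the pattern can be verified.
  static boolean cleanupRan;

  static boolean detect(boolean createOk, boolean updateOk, boolean readOk) {
    cleanupRan = false;
    if (!createOk) {
      // The model was not created; nothing to clean up.
      return false;
    }
    try {
      if (!updateOk) {
        return false; // early exit, but the finally block still runs
      }
      return readOk; // vulnerable iff the file read succeeded
    } finally {
      // Runs on every exit path once creation succeeded.
      cleanupRan = true;
    }
  }

  public static void main(String[] args) {
    System.out.println(detect(false, true, true) + " " + cleanupRan); // creation failed: no cleanup
    System.out.println(detect(true, false, true) + " " + cleanupRan); // update failed: cleanup still runs
    System.out.println(detect(true, true, true) + " " + cleanupRan);  // vulnerable: cleanup runs
  }
}
```

With this shape, adding further follow-up requests inside the try block can never skip the cleanup, and each failure check stays flat instead of nested.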
change: Add cleanup logic
@tooryx Please review, thanks.
tooryx
left a comment
Hi @hh-hunter,
I performed a second pass review. Feel free to reach out if you disagree or if you have any questions.
~tooryx
.setPublisher("TSUNAMI_COMMUNITY")
.setValue("CVE_2023_1177"))
.setSeverity(Severity.CRITICAL)
.setTitle("CVE-2023-1177 MLflow RFI")
I think you can keep both LFI/RFI here, not just RFI.
import static com.google.common.collect.ImmutableList.toImmutableList;
import static com.google.common.net.HttpHeaders.CONTENT_TYPE;
import static com.google.common.net.HttpHeaders.USER_AGENT;
import static com.google.tsunami.common.net.http.HttpRequest.*;
Please do not use wildcard imports as they make code brittle and difficult to reason about, both for programmers and for tools.
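As a stdlib-only illustration of the guideline (these are not the plugin's imports), naming each static member explicitly keeps the origin of every identifier visible, whereas `import static java.lang.Math.*;` would hide it:

```java
// Explicit static imports: each imported member is named, so readers and
// tools can see exactly where `max` and `min` come from.
import static java.lang.Math.max;
import static java.lang.Math.min;

public class ExplicitImports {
  // Clamp value into [lo, hi] using the explicitly imported members.
  static int clamp(int value, int lo, int hi) {
    return max(lo, min(value, hi));
  }

  public static void main(String[] args) {
    System.out.println(clamp(5, 0, 3));  // prints 3
    System.out.println(clamp(-1, 0, 3)); // prints 0
  }
}
```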
@tooryx Please review, thanks.
Reorder import to alphabetical order
Thank you for these changes. It looks good; I will try to get the change merged next week. ~tooryx
Also, could you please rename the root directory of the plugin from
…lugin/add_cve_2023_1177
Okay, can you check my other submitted issues? It has been a very long time since the last one.
Hey,
this is the PR for the Vuln Detector Plugin for CVE-2023-1177 (see Issue #302).
To verify the vulnerability, I used two different Docker environments, one for the vulnerable version and the other for the fixed version.
The first step is to clone my vulnerability environment repository locally.
Vulnerable image:
Safe image: