Commit fc62fe3

GOMAXPROCS added to readme

mikebd committed Oct 10, 2023
1 parent 0bbc12a commit fc62fe3
Showing 2 changed files with 17 additions and 1 deletion.
README.md: 17 additions, 0 deletions
@@ -33,6 +33,7 @@ Some assumptions could be considered TODO items for future enhancement
* Search result caching would require invalidation when the file changes
* Hot searches could have dedicated caches that are eagerly refreshed when the file changes
* Testing coverage should be added for the `controller` package
* Add `GOMAXPROCS` to the Dockerfile or set it in code (see the sketch below)
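For the in-code option, a minimal sketch might look like the following; `runtime.GOMAXPROCS` and `runtime.NumCPU` are standard library, while pinning to the CPU count is only an illustrative choice. The Dockerfile route would be a one-line `ENV GOMAXPROCS=8`, since the Go runtime reads that environment variable at startup.

```go
// Minimal sketch: set GOMAXPROCS in code instead of via the environment.
// Using runtime.NumCPU() as the value is illustrative, not this repo's choice.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	prev := runtime.GOMAXPROCS(runtime.NumCPU()) // returns the previous setting
	fmt.Printf("GOMAXPROCS: %d -> %d\n", prev, runtime.NumCPU())
}
```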

## Endpoints

@@ -84,6 +85,7 @@ are naturally in `/var/log`.
## Running Locally

* `go run ./main <args>` - Run the main.go file with the given arguments
* **IMPORTANT**: Improve scalability with `GOMAXPROCS`: `GOMAXPROCS=8 go run ./main <args>`

## Run Unit Tests Locally

@@ -196,6 +198,21 @@ Percentage of the requests served within a certain time (ms)
### Large file with fixed text and regex matching
#### Non-default GOMAXPROCS=8
This will take a while to complete. I hope to update the results here in 1-2 hours (need to step away).
So far, > 2000 requests have completed, most in < 3 seconds.
```bash
❯ ab -c 6 -k -n 10000 'localhost/api/v1/logs/1GB-9million.log?q=|error|&r=\sfecig$'
Completed 1000 requests
Completed 2000 requests
...
```
#### Default GOMAXPROCS=1
This fails for `-n 10000` with a timeout, but "succeeds" (with poor performance) for `-n 50`.
There is a lot of room for optimization here if this is a use case that must be supported.
service/getLog.go: 0 additions, 1 deletion
@@ -24,7 +24,6 @@ func GetLog(directoryPath string, filename string, textMatch string, regex *rege
}

// TODO: This is the simplest possible approach. It will likely not work well for extremely large files.
// It does not scale well under concurrent load for a large file (which thrashes the filesystem page cache).
// Consider using seek() near the end of the file, iterating backwards until the desired number of lines is found.
// This will be more efficient for large files, but will be more complex to implement and maintain.
// On my machine (non-concurrent):
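The backward-seek approach described in that comment might look roughly like the sketch below; `tailLines`, the chunk size, and the error handling are all assumptions for illustration, not code from this commit.

```go
// Hypothetical sketch of the backward-seek idea: read fixed-size chunks from
// the end of the file until enough newlines are buffered, then keep the last
// n lines. The chunk size and names are illustrative, not from this repository.
package service

import (
	"bytes"
	"os"
)

// tailLines returns (up to) the last n lines of the file at path.
func tailLines(path string, n int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return nil, err
	}

	const chunkSize = 64 * 1024 // 64 KiB per backward read; tune as needed
	var buf []byte
	offset := info.Size()

	// Walk backward from EOF, prepending chunks until n+1 newlines are seen
	// (the extra one guarantees the earliest kept line is complete).
	for offset > 0 && bytes.Count(buf, []byte{'\n'}) <= n {
		readSize := int64(chunkSize)
		if offset < readSize {
			readSize = offset
		}
		offset -= readSize
		chunk := make([]byte, readSize)
		if _, err := f.ReadAt(chunk, offset); err != nil {
			return nil, err
		}
		buf = append(chunk, buf...)
	}

	// Trim a trailing newline, split, and keep only the last n complete lines.
	lines := bytes.Split(bytes.TrimRight(buf, "\n"), []byte{'\n'})
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	out := make([]string, len(lines))
	for i, line := range lines {
		out[i] = string(line)
	}
	return out, nil
}
```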
