ARM64 docker build #31
Conversation
maksimstojkovic
commented
Sep 3, 2021
- Consolidated Dockerfile into fewer layers
- Switched from Chrome to Chromium webdriver
- Updated script file execution permissions
- Basic authentication tested successfully
- Tested on AMD64 Windows PC and ARM64 Ubuntu 20.04.2 Server 64-bit RPi4
- Currently does not support ARMHF due to cryptography Rust dependencies
- Using CMD in Dockerfile as run.sh currently does not handle arguments
An image of the build has been pushed to Docker Hub.
Hey @maksimstojkovic what a fantastic contribution! 👏🎉 We can certainly get it merged in and I'm happy you figured out the way to do it. Let me address a few things first:

Apart from these questions, I just want to compliment you on tidying up and simplifying the Dockerfile. I've been meaning to do so at some point, so it's great to see such a contribution to IBeam - thank you! 👏
Hi @Voyz, Glad to hear the changes are of use. Below are answers to your questions:
One notable observation: when running the image on an ARM64 RPi 4 2GB, it would sometimes hang for 10 minutes during startup while trying to launch the auth webpage (see logs below). I'm not entirely sure what the cause of this issue is, but after the 10 minutes the image seems to resume normal operation. I haven't had much time to look into it too closely. Let me know what you think.

x64 (amd64) Logs
ARM64 (arm64) Logs
ARM64 (arm64) Logs with WebDriver Error
* Fixes SessionNotCreatedException error observed on ARM64 platform
Hi @Voyz, I've done some further investigation and testing, and believe the WebDriver error previously mentioned should now be fixed with the latest commit. I have also pushed the updated image to Docker Hub for inspection and testing.
The following was used as a reference for solving the issue:
https://stackoverflow.com/questions/48450594/selenium-timed-out-receiving-message-from-renderer
Great work @maksimstojkovic 👏 Thanks for answering these questions I had. Let me address some of your responses
Have you used it with self-generated SSL certificates though? Have a look at this doc on TLS certificates for more context. From your logs I can see that the requests are
No worries if not, I just want to understand your answer.
Damn, you're very much correct, sorry. I've extracted it as it caused some build issues, and I found that having it as a separate layer made it easier to figure out what was wrong. I'm nevertheless surprised you don't need to install
This isn't new, we've seen this behaviour before, although I thought we had managed to eliminate it. You can read about fixing this issue on the current latest image starting with the comment:

I'm happy to hear your additions have fixed the issue on the machines you're testing it on. Give me some time to test your image out, let it run for a while, and if things are in order we should be able to integrate it. Keep in touch if you run into further issues or have other observations. Once again, fantastic job on this mate 👍
Hi @Voyz, Yes, you're correct. I had not installed my own SSL certificates, but assumed it was working as intended since the logs did not display any critical SSL-related issues when using the certificates generated by the gateway.

Regarding

Will keep you posted if I encounter anything interesting.
@maksimstojkovic I've managed to successfully run your image on my development machine as well as on a cloud instance - all seems to be in order 👏 I'm going to let it run for a bit longer to give it a chance to break, but things are looking good. Will keep you posted 👍
@maksimstojkovic I've had your image running for the past several weeks and I've observed no discrepancies when compared to the current image. I've tried rebuilding the image using my source and the updated Dockerfile - could you give it a shot and see if it works for you? Also, I've changed the base of this PR to
Hi @Voyz, That's awesome! I just had a look at the image on DockerHub and it seems like it was only built for a single platform. To set up multi-platform builds for both `amd64` and `arm64`, some one-off Buildx setup is needed on the build machine first.
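A sketch of the typical setup (the exact commands aren't quoted in this thread, so treat this as an assumption — the builder name is a placeholder, and the QEMU registration step only applies to hosts that can't natively execute the target platforms):

```shell
# Register QEMU emulators so e.g. an amd64 host can build arm64 layers (assumed to be needed)
docker run --privileged --rm tonistiigi/binfmt --install all

# Create a buildx builder that supports multi-platform builds and make it the default
docker buildx create --name multiarch --use

# Verify that linux/amd64 and linux/arm64 are listed as supported platforms
docker buildx inspect --bootstrap
```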
Note: This only needs to be executed once on the machine running the builds. To build the image for testing on your local machine, run:
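Something along these lines (a sketch, since the exact command isn't preserved here; the image name and tag are placeholders):

```shell
# Build for the local machine's platform only and load the result into the local image store
docker buildx build -t <repo-username>/ibeam:<tag> --load .
```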
To build a multi-platform image and push to DockerHub, run:
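For reference, this appears to be the command quoted in the CONTRIBUTING.md diff reviewed further below:

```shell
# Build for both platforms and push the resulting multi-platform manifest to the repository
docker buildx build --platform linux/amd64,linux/arm64 -t <repo-username>/ibeam:<tag> --push .
```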
Note: This command will not work when trying to load the image onto a local machine, and will only work when pushing to a repository.
Snap, my bad @maksimstojkovic thanks for catching it! 😅 Thanks for outlining the process in so much detail, that's very useful. I'll rebuild and update the image 👍
hey @maksimstojkovic I've just built and pushed a new image that should support arm:
Hey @maksimstojkovic - would this build also work on ARMv7 Pis? I think the standard Pi OS from the Pi foundation is ARMv7, not ARM64.
Hi @Voyz, I managed to pull the image on a RPi 4 running 64-bit Ubuntu 20.04 Server, and it runs without any exec format errors. However, I'm getting the logs below, which show the gateway repeatedly failing to recognise that authentication has occurred, and I am unsure of what the cause is. The same problem occurs when running the

If I recall correctly, a similar/the same problem has been previously observed in an issue, but a reliable solution has not yet been found, right?

Note: The logs below are from the ARM64 (arm64)

IBeam Version 0.4.0-rc2 Logs
Hey @codegamc, From what I've tested, there seem to be dependency issues with the

There may be a workaround which you could look into, but I'm unsure how long it would remain viable given how processor technology is evolving.
Hey @Voyz, I just tried running it again and it seems to be working as expected. Still not sure of the cause, but it appears to be intermittent and unrelated to IBeam. Otherwise the multi-platform image appears to be working as expected. 👍
hey @maksimstojkovic fantastic, thanks for trying it out - super happy to hear that it's working for you as expected! The 'gateway repeatedly failing to recognise that authentication has occurred' is the common problem that users are experiencing with the Gateway, something which the new logout/restart logic is trying to overcome - you can see it in your logs. It's been reported that doing so can help with the issue, so we're trying it out. You can read more about it here #19

In either case, I'd say that the multi-platform build you proposed, as well as the Dockerfile changes, are working and ready to be merged. Fantastic work man, thank you for taking the initiative and proposing this contribution, as I'm sure it will benefit a lot of users who use arm.

Just before we go ahead with merging this PR, could I ask you to outline the buildx process in CONTRIBUTING.md? I can copy-paste the guideline you gave me in this PR, but I thought that as you clearly know more about this than I do, you may be the better person to write it up. Probably this segment will need to be expanded upon:

Thanks once again! 😊
* Removed unused `invoke` build requirements
Hey @Voyz, The CONTRIBUTING.md guidelines have now been updated as requested. I removed the unused `invoke` build requirements. Let me know if any changes are required.
Hey @maksimstojkovic sorry for taking so long to review this - crunch time at work. Fantastic job on editing the CONTRIBUTING.md 😊👏 I've left a few suggestions, let me know what you think 👍
To build a multi-platform image and push to a Docker repository, run:

```shell
docker buildx build --platform linux/amd64,linux/arm64 -t <repo-username>/ibeam:<tag> --push .
```
Great stuff 👍 I'd just drop the `--push` flag from this command. Firstly, I don't think others would have permissions to push into `voyz/ibeam`, and secondly we'd want the process to involve some testing between building and pushing the image.
I agree that they won't be pushing to `voyz/ibeam`, but without this command, the file doesn't really explain how to create a multi-platform build which can be used for testing. Hence, the format uses a generic `<repo-username>`.

The `--load` flag only builds the image for the building machine's own platform, and will cause exec format errors on other platforms.
Also, wouldn't testing come after the build, possibly in a different section?
Hey yeah, so I think we agree here, right? Build, then test, then push. So that would be:

`docker buildx build --platform linux/amd64,linux/arm64 -t <repo-username>/ibeam:<tag> .`

Would calling it without `--push` not work?
I believe we agree.

My understanding is that with neither `--load` nor `--push`, the image is left in the build cache and is not visible, regardless of whether it is multi-platform or not.

When `--load` is used, the image for the building platform is loaded and visible when running `docker images`. Note that this is only compatible with the platform of the building machine.

When `--push` is used, the multi-platform image is pushed to a repository, and any compatible platform will automatically pull the correct version using `docker pull <image>:<tag>`.
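In other words (a sketch with placeholder names):

```shell
# After a --load build, the single-platform image is visible locally:
docker images | grep ibeam

# After a --push build, nothing is loaded locally; a compatible machine pulls the matching variant:
docker pull <repo-username>/ibeam:<tag>
```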
Ah, I understand what you mean now. Damn, is there any way around it? Can't it be pushed later?
Hmm, maybe there's been some miscommunication.

If you wanted a build, test, push workflow (and this is referring to the general user who intends to push to their own repo), you would build using `--load`, run any tests required (assuming the tests are run on the building machine), then build using `--push` to deploy to an image repository.

If the tests had to be run on a different platform (e.g. build on `amd64` and test on `arm64`), the order would be: build with `--push`, pull the image from the repository on the testing machine, and then run the tests. Alternatively, they could build directly on the testing platform, as described above. From testing, this is a lot slower on things like a Raspberry Pi 4, but may be less of an issue on a Mac.
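Put together, both workflows would look roughly like this (a sketch — the image name, tag, and smoke test are placeholders):

```shell
# Workflow A: build, test on the building machine, then push
docker buildx build -t <repo-username>/ibeam:<tag> --load .
docker run --rm <repo-username>/ibeam:<tag> which chromedriver   # or any other quick sanity check
docker buildx build --platform linux/amd64,linux/arm64 -t <repo-username>/ibeam:<tag> --push .

# Workflow B: build and push, then test on another platform (e.g. build on amd64, test on arm64)
docker buildx build --platform linux/amd64,linux/arm64 -t <repo-username>/ibeam:<tag> --push .
# ...then on the arm64 testing machine:
docker pull <repo-username>/ibeam:<tag>
docker run --rm <repo-username>/ibeam:<tag> which chromedriver   # or any other quick sanity check
```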
Right, that makes sense, thanks for explaining it in more detail. As such, I'd be reluctant to suggest build -> push -> pull -> test as the default development cycle.

Maybe we can outline two workflows: a first one for development, where we recommend using `--load`, and then later, when ready to push, one with `--push`?
hey @maksimstojkovic what do you reckon about the above idea?
@Voyz I've separated the build workflows in my latest commit, please have a look.
@@ -147,29 +147,61 @@ from the main (upstream) repository:
git pull --ff upstream master
```

## <a name="building-docker"></a>Building a Docker image
In order to build a Docker image of IBeam, you need to first ensure both Chrome Driver and CPW Gateway are available in the `./copy_cache` directory under:
I'd probably opt for keeping this information as it was - although happy to discuss it if you feel otherwise - as it is essential to completing a build, and may not be straightforward to derive without having it pointed out specifically, since we never mention the `/copy_cache` folder and its purpose elsewhere. Also, you reason that:

> I removed the invoke requirements from the repo as the client gateway is already available in the copy_cache directory, and chromium is now installed during Docker builds.

While it's true that the client gateway is included in the repo and that chromium is installed during builds, the chrome driver is still provided through `copy_cache`, and its copying can be automated by running the invoke as a pre-build task. Thoughts?
Currently the build does not copy any chrome-driver files from the `copy_cache` (and I don't think it's feasible to have it set up that way due to possible incompatibilities during multi-platform builds). However, replacing the `clientportal.gw` files in the `copy_cache` may be of use, so I guess adding a comment explaining how it is copied during builds may be a good idea. What do you think?
You're right! Dang, I missed it in my review of the new Dockerfile, thanks for pointing it out. Where does the chromedriver come from then? Or does IBeam not use it anymore?
DEBIAN_FRONTEND=noninteractive apt-get install -y default-jre dbus-x11 xfonts-base xfonts-100dpi
xfonts-75dpi xfonts-cyrillic xfonts-scalable xorg xvfb gtk2-engines-pixbuf nano curl iputils-ping
chromium chromium-driver build-essential && \
It is installed using the `apt-get` package manager on line 28.
right thanks! And don't we need to point to it with env variables? I can see this in the doc for `WebDriver`:

> executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH

I guess we'd need to remove `driver_path` from this instantiation then, so that it goes with the default:

`driver = webdriver.Chrome(driver_path, options=options)`

If you follow that, we could deprecate `driver_path` in all functions and logic stack that call `new_chrome_driver`, as well as deprecate the `CHROME_DRIVER_PATH = os.environ.get('IBEAM_CHROME_DRIVER_PATH')`. Right?
If you run the following, you can see it's available in the `$PATH`, because you can call it directly without needing to specify the absolute path:

`docker run --rm voyz/ibeam:<tag> which chromedriver`

EDIT: Make sure you use the tag for the build.
If I'm following correctly, then yes, it should be possible to deprecate those variables, because calling `chromedriver` directly from `bash` inside the image runs the installed `chromium-driver`.
Right, that's fantastic, this will simplify the code even further. Great job putting it in 👏 Would you like to remove the `driver_path` from the code and the `CHROME_DRIVER_PATH` env var? I'm equally happy to do it myself after we merge this in, as I know it may not appear straightforward to you at a glance.
I think it's probably best that you handle this removal, as I'm only really familiar with the Docker build procedures. I would recommend giving the image a test after doing so. Given that multi-platform builds work and all the dependencies get installed correctly, if it works on `amd64`, it should work the same on `arm64`.
Ok, great 👍 I'll take care of it and test it again later. Nice one 👏
Great changes @maksimstojkovic 👍 I think we've got this one pretty much nailed down, happy to go ahead with merging this in 👏

Just a future note - I noticed you've added the `docker-compose.yml` instructions to the README, which we'll also need to add into the Wiki page once `v0.4.0` is released.
Thanks for your hard work Maksim and appreciate all the patience in getting this PR reviewed 👍
My pleasure 😊