Tim Retout
https://retout.co.uk/
Recent content on Tim Retout
seL4 Microkit Tutorial
https://retout.co.uk/2024/04/29/sel4-microkit-tutorial/
Mon, 29 Apr 2024 22:02:25 +0100
<p>Recently I revisited <a href="https://retout.co.uk/2020/05/11/sel4-on-rpi3-aarch64/">my previous
interest</a> in
<a href="https://sel4.systems/">seL4</a> - a fast, highly assured operating
system microkernel for building secure systems.</p>
<p>The <a href="https://trustworthy.systems/projects/microkit/tutorial/">seL4 Microkit
tutorial</a>
uses a simple Wordle game example to teach the basics of seL4 Microkit
(formerly known as the seL4 Core Platform), which is a framework for
creating static embedded systems on top of the seL4 microkernel.
Microkit is also at the core of <a href="https://lionsos.org/">LionsOS</a>, a new
project to make seL4 accessible to a wider audience.</p>
<p>The tutorial is easy to follow, needing few prerequisites beyond a
QEMU emulator and an AArch64 cross-compiler toolchain (Microkit being
limited to 64-bit ARM systems currently). Use of an emulator makes
for a quick test-debug cycle with a couple of Makefile targets, so
time is spent focusing on walking through the Microkit concepts rather
than on tooling issues.</p>
<p>This is an unusually good learning experience, probably because of the
academic origins of the project itself. The
<a href="https://diataxis.fr/">Diátaxis</a> documentation framework would class
this as truly a “tutorial” rather than a “how-to guide” - you do
learn a lot by implementing the exercises.</p>
Prevent DOM-XSS with Trusted Types - a smarter DevSecOps approach
https://retout.co.uk/2024/01/01/trusted-types/
Mon, 01 Jan 2024 12:46:22 +0000
<p>It can be incredibly easy for a frontend developer to accidentally
write a client-side cross-site-scripting (DOM-XSS) security issue, and
yet these are hard for security teams to detect. Vulnerability
scanners are slow, and suffer from false positives. Can smarter
collaboration between development, operations and security teams
provide a way to eliminate these problems altogether?</p>
<p>Google claims that <a href="https://www.w3.org/TR/trusted-types/">Trusted
Types</a> has all but eliminated
DOM-XSS exploits on those of their sites which have implemented
it. Let’s find out how this can work!</p>
<h2 id="dom-xss-vulnerabilities-are-easy-to-write-but-hard-for-security-teams-to-catch">DOM-XSS vulnerabilities are easy to write, but hard for security teams to catch</h2>
<p>It is very easy to accidentally introduce a client-side XSS problem.
As an example of what not to do, suppose you are setting an element’s
text to the current URL, on the client side:</p>
<pre><code>// Don't do this
para.innerHTML = location.href;
</code></pre><p>Unfortunately, an attacker can now manipulate the URL (and e.g. send
this link in a phishing email), and any HTML tags they add will be
interpreted by the user’s browser. This could potentially be used by
the attacker to send private data to a different server.</p>
<p>Detecting DOM-XSS using vulnerability scanning tools is challenging -
typically this requires crawling each page of the website and
attempting to detect problems such as the one above, but there is a
significant risk of false positives, especially as the complexity of
the logic increases.</p>
<p>There are already ways to avoid these exploits - developers
should validate untrusted input before making use of it. There are
libraries such as <a href="https://github.com/cure53/DOMPurify">DOMPurify</a>
which can help with sanitization.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>However, if you are part of a security team with responsibility for
preventing these issues, it can be complex to understand whether you
are at risk. Different developer teams may be using different
techniques and tools. It may be impossible for you to work closely
with every developer - so how can you know that the frontend
team have used these libraries correctly?</p>
<h2 id="trusted-types-closes-the-devsecops-feedback-loop-for-dom-xss-by-allowing-ops-and-security-to-verify-good-developer-practices">Trusted Types closes the DevSecOps feedback loop for DOM-XSS, by allowing Ops and Security to verify good Developer practices</h2>
<p>Trusted Types enforces sanitization in the browser<sup id="fnref:2"><a href="#fn:2" class="footnote-ref" role="doc-noteref">2</a></sup>, by requiring
the web developer to assign a particular kind of JavaScript object
rather than a native string to <code>.innerHTML</code> and other dangerous
properties. Provided these special <em>types</em> are created in an
appropriate way, then they can be <em>trusted</em> not to expose XSS
problems.</p>
<p>This approach will work with whichever tools the frontend developers have
chosen to use, and detection of issues can be rolled out by
infrastructure engineers without requiring frontend code changes.</p>
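<p>As a rough illustration of that operations-side rollout, here is a minimal Go sketch of a middleware that adds a report-only CSP header asking browsers to require Trusted Types for scripts. The <code>/csp</code> reporting endpoint and all the names here are placeholders, not a recommended production setup.</p>
<pre><code>// Hypothetical sketch: wrap an existing handler so that every response
// asks the browser to require Trusted Types for scripts, in report-only
// mode, sending violation reports to a placeholder /csp endpoint.
package main

import (
	"log"
	"net/http"
)

func withTrustedTypesReporting(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy-Report-Only",
			"require-trusted-types-for 'script'; report-uri /csp")
		next.ServeHTTP(w, r)
	})
}

func main() {
	app := http.FileServer(http.Dir("./public")) // stand-in for the real frontend
	http.Handle("/", withTrustedTypesReporting(app))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
</code></pre>
<p>Once the violation reports dry up, sending the same directive in a <code>Content-Security-Policy</code> header switches the browser to enforcement.</p>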
<h3 id="content-security-policy-allows-enforcement-of-security-policies-in-the-browser-itself">Content Security Policy allows enforcement of security policies in the browser itself</h3>
<p>Because enforcing this safer approach in the browser for all websites
would break backwards-compatibility, each website must opt-in through
Content Security Policy headers.</p>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP">Content Security Policy</a> (CSP) is a mechanism that allows web pages
to restrict what actions a browser should execute on their page, and a
way for the site to receive reports if the policy is violated.</p>
<p><img loading="lazy" src="https://retout.co.uk/2024/01/01/trusted-types/communication.svg" type="" alt="Diagram showing a browser communicating with a web server. Content-Security-Policy headers are returned by the URL &ldquo;/&rdquo;, and the browser reports any security violations to &ldquo;/csp&rdquo;." />
<em>Figure 1: Content-Security-Policy browser communication</em></p>
<p>This is revolutionary, because it allows servers to receive feedback
in real time on errors that may be appearing in the browser’s console.</p>
<h3 id="trusted-types-can-be-rolled-out-incrementally-with-continuous-feedback">Trusted Types can be rolled out incrementally, with continuous feedback</h3>
<p><a href="https://web.dev/articles/trusted-types">Web.dev’s article on Trusted Types</a>
explains how to safely roll out Trusted Types using the features of CSP itself:
<ul>
<li>Deploy a CSP collector if you haven’t already</li>
<li>Switch on CSP reports without enforcement (via <code>Content-Security-Policy-Report-Only</code> headers)</li>
<li>Iteratively review and fix the violations</li>
<li>Switch to enforcing mode when the rate of reports is low enough</li>
</ul>
<p>Static analysis in a continuous integration pipeline is also sensible
- you want to prevent regressions shipping in new releases
before they trigger a flood of CSP reports. This will also give you a
chance of finding any low-traffic vulnerable pages.</p>
<h2 id="smart-security-teams-will-use-techniques-like-trusted-types-to-eliminate-entire-classes-of-bugs-at-a-time">Smart security teams will use techniques like Trusted Types to eliminate entire classes of bugs at a time</h2>
<p>Rather than playing whack-a-mole with unreliable vulnerability
scanning or bug bounties, techniques such as Trusted Types are truly
in the spirit of ‘Secure by Design’ - build high quality in from
the start of the engineering process, and do this in a way which
closes the DevSecOps feedback loop between your Developer, Operations
and Security teams.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>Sanitization libraries are especially needed when the examples
become more complex, e.g. if the application must manipulate the
input. DOMPurify version 1.0.9 also added Trusted Types support, so
can still be used to help developers adopt this feature. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Trusted Types has existed in Chrome and Edge since 2020, and
should <a href="https://www.theregister.com/2023/12/21/mozilla_decides_trusted_types_is/">soon be coming to
Firefox</a>
as well. However, it’s not necessary to wait for Firefox or Safari to
add support, because the large market share of Chrome and Edge will
let you identify and fix your site’s DOM-XSS issues, even if you do
not set enforcing mode, and users of all browsers will benefit. Even
so, it is great that Mozilla is now on board. <a href="#fnref:2" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Nostalgia for my attention span
https://retout.co.uk/2023/12/06/nostalgia-for-my-attention-span/
Wed, 06 Dec 2023 18:38:45 +0000
<p><em>This post was possibly inspired by my daughter’s homework assignment
to interview an old person about technology change. Guess who’s old
now?</em></p>
<p>Sometimes I look back at how life used to be, and remember what it was
like. There are two key nostalgia points for me: before internet, and
before smartphones.</p>
<h2 id="before-the-internet">Before the internet</h2>
<p>Before the internet, there were computers. There were always computers
in my life; they were just less fancy, with mainly text and fewer
graphics at first. These days we adapt our user interfaces to look
more like DOS and call it ‘retro’, while claiming it’s more
efficient that way. Sometimes it is. The written word is a
fundamental mode of human communication - it resonates.</p>
<p>Some of my earliest programming memories were typing lines of BASIC
code into some sort of BBC Micro, usually not successfully. Most of
my computer learning was largely theoretical, through <a href="https://usborne.com/gb/books/computer-and-coding-books">Usborne
books</a> (now
available online! Thanks internet for reducing the marginal cost of
publishing to approximately zero) but without the benefit of a
computer of my own.</p>
<p>I went for a walk this morning, with a slight bite to the air, and
bought a newspaper, like the old person I am (I’m leaning in).
Imagine that - a physical <em>paper</em> full of news. When I was an
inquisitive sixth-former, ‘<em>The</em> <strong>Guardian</strong>’ cost 50 pence -
today it was £2.80. This increase far outstrips the rate of
inflation, which would bring it to around 90 pence. No, this must
surely reflect the drop in circulation of the physical edition, and
the rise in advertisement-filled web content.</p>
<h2 id="before-smartphones">Before smartphones</h2>
<p>As the internet became mainstream, I remember dial-up, and
<a href="https://en.wikipedia.org/wiki/Freeserve">Freeserve</a>, and even the
tail end of USENET, and IRC.</p>
<p>This was still a time when email was a thing you had to be at your
desk to check. Feature phones were very useful, but it was still
possible to get lost. My university halls had a shared land-line
phone on each floor.</p>
<p>And I wrote on the internet. I remember commenting on blogs, and I
think my writing was better back then; more free, less inhibited.</p>
<p>I do find myself giving less attention to news squeezed into the small
form-factor of a mobile phone. Chat messages and notification pings.
I’m not convinced this is great progress for humanity.</p>
Data Diodes
https://retout.co.uk/2023/04/18/data-diodes/
Tue, 18 Apr 2023 23:05:36 +0100
<p>At ArgoCon today, Thomas Fricke gave a nice talk on <a href="https://colocatedeventseu2023.sched.com/event/1JoAJ">Cloud Native
Deployments in Air Gapped
Environments</a>
describing container vulnerability scanning in the German energy
sector… and since he <em>didn’t</em> mention data diodes, and since some of
my colleagues at <a href="https://oakdoor.io">Oakdoor</a>/PA Consulting make data
diodes for a living, I thought this might be interesting to write
about!</p>
<p>It’s one thing to have an air-gapped system, but eventually in order
to be useful you’re going to have to move data into it, and this is
going to need something better than just plugging a USB stick into
your critical system. <a href="https://en.wikipedia.org/wiki/Stuxnet">Just ask Iran how well this
goes.</a></p>
<p>Eight years after Stuxnet, the UK National Cyber Security Centre
published the <a href="https://www.ncsc.gov.uk/guidance/pattern-safely-importing-data">NCSC Safely Importing Data
Pattern</a> -
but I found this a bit cryptic on first reading, because it’s not
clear what type of systems the pattern applies to, and it deliberately uses
technology-neutral language. Also, this was published around the same
time GDPR was being implemented, mentions “sensitive or personal
data”, and claims to be aimed at “small to medium organisations” - but
I don’t know how many small businesses implement a
<a href="https://en.wikipedia.org/wiki/Multiple_Independent_Levels_of_Security">MILS</a>
security architecture. So without picking up on the mention of “data
diode”, you can be left scratching your head about how to actually
implement the pattern.</p>
<p>One answer using Oakdoor components:</p>
<ul>
<li><a href="https://pypi.org/project/pysisl/">PySISL</a>, a Python library which you use to transform the data into a very simple format called SISL</li>
<li>an Oakdoor™ Import Diode, which can verify the syntax of SISL in hardware, and prevent any data moving back the other way</li>
<li>then some more PySISL code to validate the semantics of SISL on the high side and reconstruct the original format</li>
</ul>
<p>The Oakdoor diodes themselves are quite interesting - they’re
electrical rather than optical like most data diodes. The other thing
I’d always wondered is how on earth you could even establish a TCP
handshake across one - the answer is, you can’t, so you use a
UDP-based protocol like TFTP for file transfer.</p>
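<p>To make the handshake point concrete, here is a toy Go snippet - standard library only, nothing to do with the real Oakdoor or PySISL code - that pushes a datagram one way. UDP needs no reply from the far side, which is exactly the property a diode preserves and a TCP handshake cannot survive; the address and payload are placeholders.</p>
<pre><code>// Toy illustration only. UDP datagrams can be sent without any return
// traffic, which is why a UDP-based protocol suits a one-way link,
// whereas a TCP connection could never complete its handshake.
package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("udp", "10.0.0.2:69") // placeholder high-side address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	if _, err := conn.Write([]byte("one chunk of a wrapped file")); err != nil {
		log.Fatal(err)
	}
}
</code></pre>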
<p>In this way, you build the transform/verify and protocol break that
the NCSC pattern requires.</p>
<p>Congratulations, you can now import your documents to your otherwise
air-gapped system without also importing malicious code, and without
risking data exfiltration.</p>
<p>Note carefully that the Safely Importing Data pattern makes no
guarantees about the <em>integrity</em> of your documents - they could be
severely modified going through this process. For the same reason, I
anticipate challenges applying this pattern to software binaries.</p>
AlmaLinux and SBOMs
https://retout.co.uk/2023/02/04/almalinux-and-sboms/
Sat, 04 Feb 2023 16:37:32 +0000
<p>At CentOS Connect yesterday, Jack Aboutboul and Javier Hernandez
presented a <a href="https://connect.centos.org/#alma">talk about AlmaLinux and SBOMs</a>
[<a href="https://www.youtube.com/live/GLD2Ivu0lyo?feature=share&t=2843">video</a>],
where they are exploring a novel supply-chain security effort in the
RHEL ecosystem.</p>
<p>Now, I have unfortunately ignored the Red Hat ecosystem for a long
time, so if you are in a similar position to me: CentOS <em>used</em> to
produce debranded rebuilds of RHEL; but Red Hat changed the project
round so that CentOS Stream now sits in between Fedora Rawhide and
RHEL releases, allowing the wider community to try out/contribute to
RHEL builds before their release. This is credited with making early
RHEL point releases more stable, but left a gap in the market for
debranded rebuilds of RHEL; AlmaLinux and Rocky Linux are two
distributions that aim to fill that gap.</p>
<p>Alma are generating and publishing Software Bill of Material (SBOM)
files for every package; these are becoming a requirement for all
software sold to the US federal government. What’s more, they are
sending these SBOMs to a third party (CodeNotary) who store them in
<a href="https://archive.fosdem.org/2022/schedule/event/safety_dont_trust_us_trust_the_math_behind_immudb/attachments/slides/5117/export/events/attachments/safety_dont_trust_us_trust_the_math_behind_immudb/slides/5117/FOSDEM_2022_Immudb.pdf">some sort of Merkle tree system</a>
to make it difficult for people to tamper with later. This should
theoretically allow end users of the distribution to verify the supply
chain of the packages they have installed?</p>
<p>I am currently unclear on the differences between CodeNotary/ImmuDB
vs. Sigstore/Rekor, but there’s an SBOM devroom at FOSDEM tomorrow so
maybe I’ll soon be learning that. This also makes me wonder if a
Sigstore-based approach would be more likely to be adopted by
Fedora/CentOS/RHEL, and whether someone should start a CentOS Software
Supply Chain Security SIG to figure this out, or whether such an
effort would need to live with the build system team to be properly
integrated. It would be nice to understand the supply-chain story for
CentOS and RHEL.</p>
<p>As I write this, I’m also reflecting that perhaps it would be helpful
to explain what happens next in the SBOM consumption process; i.e. can
this effort demonstrate tangible end user value, like enabling
AlmaLinux to integrate with a vendor-neutral approach to vulnerability
management? Aside from the value of being able to sell it to the US
government!</p>
Git internals and SHA-1
https://retout.co.uk/2022/06/29/git-internals-and-sha1/
Wed, 29 Jun 2022 18:54:15 +0100
<p>LWN reminds us that <a href="https://lwn.net/Articles/898522/">Git still uses SHA-1 by default</a>.
Commit or tag signing is not a mitigation, and to understand why you need to
know a little about Git’s internal structure.</p>
<p>Git internally looks rather like a content-addressable filesystem, with four
object types: tags, commits, trees and blobs.</p>
<p>Content-addressable means changing the <em>content</em> of an object changes the way
you <em>address</em> or reference it, and this is achieved using a cryptographic hash
function. Here is an illustration of the internal structure of an example
repository I created, containing two files (./foo.txt and ./bar/bar.txt)
committed separately, and then tagged:</p>
<p><img loading="lazy" src="https://retout.co.uk/2022/git-internals.svg" type="" alt="Graphic showing an example Git internal structure featuring tags, commits, trees and blobs, and how these relate to each other." /></p>
<p>You can see how ‘trees’ represent directories, ‘blobs’ represent files, and so
on. Git can avoid internal duplication of files or directories which remain
identical. The hash function allows very efficient lookup of each object
within git’s on-disk storage.</p>
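<p>Content-addressing is easy to demonstrate. The sketch below (Go standard library only, not part of Git itself) reproduces how Git names a blob object; the same idea, applied recursively, is what gives trees, commits and tags their identifiers.</p>
<pre><code>// Sketch of Git's content-addressing: a blob's name is the SHA-1 of a
// short header (the string "blob ", the content length and a NUL byte)
// followed by the content itself, so changing the content changes the
// identifier. Mirrors what `git hash-object --stdin` prints for the same bytes.
package main

import (
	"crypto/sha1"
	"fmt"
)

func blobID(content []byte) string {
	h := sha1.New()
	fmt.Fprintf(h, "blob %d\x00", len(content))
	h.Write(content)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	fmt.Println(blobID([]byte("hello\n")))
}
</code></pre>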
<p>Tag and commit signatures do not directly sign the files in the repository;
that is, the input to the signature function is the content of the tag/commit
object, rather than the files themselves. This is analogous to the way that
GPG signatures actually sign a cryptographic hash of your email, and there
was a time when this too defaulted to SHA-1. An attacker who can break that
hash function can bypass the guarantees of the signature function.</p>
<p>A motivated attacker <em>might</em> be able to replace a blob, commit or tree in a git
repository using a SHA-1 collision. Replacing a blob seems easier to me than a
commit or tree, because there is no requirement that the content of the files
must conform to any particular format.</p>
<p>There is one key technical mitigation to this in Git, which is the <a href="https://github.com/cr-marcstevens/sha1collisiondetection">SHA-1DC algorithm</a>;
this aims to detect and prevent known collision attacks. However, I will have
to leave the cryptanalysis of this to the cryptographers!</p>
<p>So, is this in your threat model? Do we need to lobby GitHub for SHA-256
support? Either way, I look forward to the future operational challenge of
migrating the entire world’s git repositories across to SHA-256.</p>
Exploring StackRox
https://retout.co.uk/2022/04/26/exploring-stackrox/
Tue, 26 Apr 2022 21:07:05 +0100
<p>At the end of March, <a href="https://www.stackrox.io/blog/open-source-stackrox-is-now-available/">the source code to StackRox was released</a>,
following the 2021 acquisition by Red Hat. StackRox is a Kubernetes security
tool which is now badged as <a href="https://cloud.redhat.com/products/kubernetes-security">Red Hat Advanced Cluster Security (RHACS)</a>,
offering features such as vulnerability management, validating cluster
configurations against CIS benchmarks, and some runtime behaviour analysis. In
fact, it’s such a diverse range of features that I have trouble getting my head
round it from the product page or even the <a href="https://docs.openshift.com/acs/3.69/welcome/index.html">documentation</a>.</p>
<p>Source code is available via the <a href="https://github.com/stackrox/">StackRox organisation on GitHub</a>,
and the most obviously interesting repositories seem to be:</p>
<ul>
<li><a href="https://github.com/stackrox/stackrox">stackrox/stackrox</a>, containing the main
application, written in Go</li>
<li><a href="https://github.com/stackrox/scanner">stackrox/scanner</a>, the vulnerability
scanner, also in Go. From a first glance at the go.mod file, it does not seem
to share much code with <a href="https://github.com/quay/clair">Clair</a>, which is
interesting.</li>
<li><a href="https://github.com/stackrox/collector">stackrox/collector</a>, the runtime
analysis component, in C++ but also with hooks into the kernel.</li>
</ul>
<p>My initial curiosity has been around the ‘collector’, to better understand what
runtime behaviour the tool can actually pick up. I was intrigued to find that
the actual kernel component is <a href="https://github.com/stackrox/falcosecurity-libs">a patched version of Falco’s kernel module/eBPF probes</a>;
a few features are disabled compared to Falco, e.g. page faults and signal events.</p>
<p>There’s a list of supported syscalls in
<a href="https://github.com/stackrox/falcosecurity-libs/blob/d7ff3666fabff6d54d5dcbae87c68bdc20d18c5f/driver/syscall_table.c">driver/syscall_table.c</a>,
which seems to have drifted slightly or be slightly behind the upstream Falco
version? In particular I note the absence of io_uring, but given RHACS is
mainly deployed on Linux 4.18 at the moment (RHEL 8) this is probably a
non-issue. (But relevant if anyone were to run it on newer kernels.)</p>
<p>That’s as far as I’ve got for now. Red Hat are making great efforts to reach
out to the community; there’s a <a href="https://cloud-native.slack.com/archives/C01TDE3GK0E">Slack channel</a>,
and <a href="https://www.youtube.com/playlist?list=PL_uKap2n977tpvunWtj05ddKywKx2CfKh">office hours recordings</a>,
and a <a href="https://www.stackrox.io/">community hub</a> to explore further. It’s
great to see new free software projects created through acquisition in this
way - I’m not sure I remember seeing a comparable example.</p>
Reflections on OSSF London 2021
https://retout.co.uk/2021/10/07/ossf-london-2021/
Thu, 07 Oct 2021 08:55:06 +0100
<p>On Tuesday I attended the <a href="https://events.linuxfoundation.org/open-source-strategy-forum-london/">Open Source Strategy
Forum</a>
in London, which is a meeting of the <a href="https://www.finos.org/">Fintech Open Source
Foundation</a> (FinOS), part of the Linux
Foundation. (There is a <a href="https://events.linuxfoundation.org/open-source-strategy-forum-new-york/">New York
version</a>
coming up in November for those across the pond.)</p>
<p>The morning keynotes included Gabriele Columbro <a href="https://static.sched.com/hosted_files/ossf2021/b0/OSSF%20London%20-%20Keynote%20-%20State%20of%20the%20Community%20-%20Gabriele%20Columbro.pdf">introducing the
day</a>,
then Russell Green highlighting the progress FinOS has made; Liz Rice
of CNCF fame with an inspiring talk about <a href="https://static.sched.com/hosted_files/ossf2021/00/Liz%20Rice%20Contributing%20to%20Open%20Source%20for%20Business.pdf">contributing back to
upstream</a>;
an interesting conversation between Nick Cook and Jane Gavronsky about
innovations in financial regulation, and finally a presentation from
Andrew Agerbak of BCG about <a href="https://static.sched.com/hosted_files/ossf2021/55/OSSF%20FI%20-%20Agerbak%20BCG%20-%20Public%20Cloud%20KSFs%20-%2005OCT2021%20v2.pdf">how open source can help banks move to
public
cloud</a>. (I
disagreed with some of Andrew’s presentation; I would weight the
regulatory requirements more strongly, but agree with the point that
open source can help with cloud portability.)</p>
<p>In the afternoon there were two talks which stood out for me but
were very sparsely attended:</p>
<p>Rob Knight from SUSE presented <a href="https://static.sched.com/hosted_files/ossf2021/10/Containing%20the%20Chaos%20While%20Embracing%20Kubernetes%20Based%20Technology%20in%20Finance%20by%20Rob%20Knight%20-%20SUSE.pdf">a short session on
Rancher</a>,
which offers an open source, vendor-neutral way to manage Kubernetes
clusters across multiple providers. I disagree with the view that
modern Kubernetes demands smaller clusters, perhaps one per workload -
to me this reduces utilization unnecessarily as discussed in <a href="https://research.google/pubs/pub43438/">the
original Borg paper</a>. But I’m
excited about an approach to multi-cloud Kubernetes that does not rely
on a commercial relationship with a single vendor.</p>
<p>Axel Simon from Red Hat gave <a href="https://static.sched.com/hosted_files/ossf2021/5d/OSSF%20London%20-%20Policy%20compliance%20with%20sigstore.pdf">a short talk about
Sigstore</a>,
a new approach to open source supply chain security. Essentially this
brings together several other tools (tbd, rekor, fulcio) and offers a
free-to-use non-profit Let’s-Encrypt-but-for-software-signatures
CA. This seems quite promising.</p>
<p>To me these talks both cut to the heart of the big strategic
challenges facing large financial services organisations today: how do
you outsource to the cloud in a portable way, and how do you keep it
all secure when you’re working at this scale?</p>
<p>Gabriele’s opening keynote included this quote:</p>
<blockquote>
<p>…the financial services industry has
pioneered things that later became
popular in open source, like efficient
messaging protocols, microservices,
and event streaming, but the
technologies were kept proprietary in
banks and only became mainstream
when other companies open sourced
them. It would be beneficial to
contribute more…</p>
<p>CTO, Investment Bank</p>
</blockquote>
<p>It feels like there’s a real appetite within financial services to
reclaim that spirit of innovation through open source that had been
ceded in recent years to big tech. I came away from the day excited
about the future of open source in financial services.</p>
<p>(Shout out to James McLeod of FinOS for inviting me along!)</p>
GCP - Planning for the Worst
https://retout.co.uk/2021/10/04/gcp-planning-for-the-worst/
Mon, 04 Oct 2021 10:41:28 +0100
<p>Last month, Google Cloud published <a href="https://services.google.com/fh/files/misc/planning-for-the-worst-whitepaper.pdf">Planning for the Worst:
Reliability, Resilience, Exit and Stressed Exit in Financial
Services</a>.
This happens to be a topic I have previously worked on, so I was very
interested to hear the perspective that GCP would bring.</p>
<p>The wider industry context here is that regulators are very interested
in potential risks to the financial system arising from the wholesale
migration to cloud computing; in March 2021 the Prudential Regulation
Authority in the UK published two supervisory statements closely
related to the topic, including <a href="https://www.bankofengland.co.uk/prudential-regulation/publication/2021/march/outsourcing-and-third-party-risk-management-ss">Outsourcing and third party risk
management</a>,
which introduces the concept of a “stressed exit”. That is, if a
Cloud Service Provider were to become insolvent, suffer a catastrophic
technical failure, or (perhaps more likely) get banned from doing
business in a particular geographical region… as a bank, what would
you do if you have outsourced all your computing services to that provider?</p>
<p>I would rate the Google Cloud paper as “average” - it treads familiar
ground and provides very little insight that could not be gathered
from an afternoon of reading the regulations (which, however, are
helpfully listed in the introduction). The <a href="https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWBkvx">equivalent Azure
paper</a>
contains much more detailed advice on <em>how</em> to develop an exit plan
and was published more than a year earlier. Still, there are a few
interesting nuggets here:</p>
<p>Firstly, Google lists its commitment to Open Source as an advantage for exit
planning; many of its products and services are available in open
source versions. Two examples which come to mind are Kubernetes and
Tensorflow; here GCP has adopted a strategy of creating new software
categories that the other CSPs have embraced, which does make it
easier to avoid vendor lock-in. Dataflow is available as Apache Beam.
DataProc is really a managed Hadoop.</p>
<p>However, an obvious counterexample is listed slightly further down:
BigQuery’s capabilities are substantially unique (I predict migrating
large workloads to AWS Athena may be challenging, for example). Is
there a globally-consistent competitor to Cloud Spanner yet?</p>
<p>The exit planning benefit of GCP’s open source commitments therefore
largely depends on whether the workloads have been designed with exit
requirements in mind.</p>
<p>Secondly, common standards for hosting applications in virtual machines or
containers are also listed as an advantage; the benefit of these
similarly depends on whether the workload is built against these
interfaces.</p>
<p>Finally, Anthos (GCP’s multi-cloud management tool) is mentioned under
both exit planning and stressed exit planning. It is true that this
facilitates management of Kubernetes clusters in other clouds or
on-premise; but if you are unexpectedly ending your commercial
relationship with Google, how easy is it going to be to migrate those
clusters to an alternative “single pane of glass”? If you have
configured <a href="https://cloud.google.com/anthos/clusters/docs/aws/how-to/workload-identity-gcp">workload identity for your GKE-on-AWS clusters</a>, and GCP
disappears, how much trouble are you in? I have had a brief look, but
have not yet found a white paper on this!</p>
<p>When discussing critically-important financial workloads which may
well be essential to the stability of the entire financial system, I
think the question asked by the regulators is legitimate: what new
risks are you introducing when you add a technology outsourcing
relationship to the mix? If this relationship suddenly breaks down,
what happens?</p>
<p>But this train of thought needs to be taken to the logical conclusion:
if one of the hyperscale cloud providers genuinely did collapse, or
even if a bank just decided to end a commercial relationship, most of
the exit plans in existence today would fail. When developing the
plans, each workload is considered individually, and assumes the
luxury of several months to execute rapid custom development of
infrastructure on a new cloud. In reality, <em>all workloads in the bank
would be affected at once</em> - there would be a massive shortage of
cloud engineers within the organisation (and possibly across the
industry depending on the scenario), and these would be some of the
most rushed and risky projects of all time. This is <em>concentration
risk</em> that is touched on only briefly at the end of the paper.</p>
<p>And in this regard the paper does hint at the right answers - the most
critical types of workload need to be built against industry standard
interfaces (e.g. containers are essentially an extension of the stable
Linux kernel-userspace API boundary); avoid over-reliance on
CSP-specific services such as BigQuery; use open source that you can
run elsewhere (e.g. Kubernetes, complex as it is, is at least provided
by multiple vendors) and replicate data across multiple suppliers to
enable sufficiently swift recovery from disasters.</p>
<p>These engineering constraints are necessary because society cannot
afford for these services to be locked-in to a single vendor; not for
mere anti-competition reasons, but to ensure continuity of service.</p>
Maglev Load Balancers
https://retout.co.uk/2021/09/28/maglev-load-balancers/
Tue, 28 Sep 2021 20:11:01 +0100
<p><a href="https://research.google/pubs/pub44824/">Maglev</a> is the codename of
Google’s Layer 4 network load balancer, which is referred to in GCP as
<a href="https://cloud.google.com/load-balancing/docs/choosing-load-balancer">External TCP/UDP Network Load
Balancing</a>. I
read the 2016 Maglev paper to better understand various implementation
details of Maglev with an emphasis on security (in particular as it
affects availability).</p>
<p>Maglev uses a scale-out approach, implemented within clusters built
from commodity hardware achieving <em>n+1 redundancy</em>, providing greater
tolerance to failure compared with traditional hardware load balancers
deployed in pairs (only <em>1+1 redundancy</em>). The collection of Maglev machines is in an <em>active-active</em> setup, with
the router balancing across them via <a href="https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing">Equal Cost Multipath (ECMP) routing</a>. This permits greater hardware utilization compared to an
<em>active-passive</em> approach.</p>
<p>The paper discusses in detail the various techniques used to increase
performance (userspace networking to avoid kernel switching, use of
5-tuple hashing to avoid needing to share data across threads, and
pinning the threads to dedicated CPUs).</p>
<p>A local connection tracking table is used to ensure packets in the
same connection are routed to the same backend endpoint
consistently. However, two failure cases are discussed: during
upgrades/failures the set of Maglev machines might change, or the
connection table might simply run out of space.</p>
<p>The impact of these cases is reduced using Maglev hashing, a novel
approach to connection tracking with consistent hashing that optimizes
for balancing the load rather than guaranteeing connections will not
shift to a different backend endpoint. The paper includes results from
experiments to show that the typical impact of failure cases is low.</p>
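<p>For the curious, here is a compact Go sketch of how the paper’s lookup table gets populated. It is illustrative only: FNV-1a stands in for the unspecified hash functions, and the table size is simply a small prime rather than anything production-sized.</p>
<pre><code>// Illustrative sketch of Maglev hashing's table population. FNV-1a
// stands in here for the paper's unspecified hash functions.
package main

import (
	"fmt"
	"hash/fnv"
)

const tableSize = 65537 // must be prime

func hashName(name string, seed byte) uint64 {
	h := fnv.New64a()
	h.Write([]byte{seed})
	h.Write([]byte(name))
	return h.Sum64()
}

// populate builds the lookup table: each backend derives a permutation of
// table slots from two hashes of its name, and backends take turns
// claiming their next preferred slot until every slot is owned.
func populate(backends []string) []int {
	offset := make([]uint64, len(backends))
	skip := make([]uint64, len(backends))
	next := make([]uint64, len(backends))
	for i, b := range backends {
		offset[i] = hashName(b, 0) % tableSize
		skip[i] = hashName(b, 1)%(tableSize-1) + 1
	}

	entry := make([]int, tableSize)
	for j := range entry {
		entry[j] = -1
	}
	filled := 0
	for filled != tableSize {
		for i := range backends {
			c := (offset[i] + next[i]*skip[i]) % tableSize
			for entry[c] != -1 { // slot already taken, try this backend's next preference
				next[i]++
				c = (offset[i] + next[i]*skip[i]) % tableSize
			}
			entry[c] = i
			next[i]++
			filled++
			if filled == tableSize {
				break
			}
		}
	}
	return entry
}

func main() {
	table := populate([]string{"backend-a", "backend-b", "backend-c"})
	// A packet's 5-tuple hash modulo tableSize then picks the backend.
	fmt.Println("slot 0 is served by backend", table[0])
}
</code></pre>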
<p>This was an interesting read, giving an insight into how modern
highly-available load balancing works. The approach from this paper
has subsequently been adopted by other implementations (e.g. is used
in the open source project Cilium which is part of <a href="https://cloud.google.com/blog/products/containers-kubernetes/bringing-ebpf-and-cilium-to-google-kubernetes-engine">GKE Dataplane v2</a>).</p>
Google Workspace Super Admins
https://retout.co.uk/2021/09/19/google-workspace-super-admins/
Sun, 19 Sep 2021 16:44:36 +0100
<p>I recently had cause to remind myself of <a href="https://support.google.com/a/answer/9011373?hl=en">Google Workspace administrator account best practices</a>. Briefly:</p>
<ol>
<li>
<p>Set up separate admin accounts, e.g. <code>[email protected]</code> to
exist side-by-side with <code>[email protected]</code>. Keep accounts
individually identifiable, and ideally ensure there are multiple
Super Admins in your organization.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
</li>
<li>
<p>Avoid using <code>[email protected]</code> for day-to-day use.</p>
</li>
<li>
<p>One of these Super Admin accounts must be set as the primary
account contact, but (due to the previous point) you’re unlikely to
be checking the emails very often. Set up a “Secondary email” for
the organization to receive alerts and updates.</p>
</li>
<li>
<p>Enrol the admin account in Advanced Protection, which enforces 2SV
with two physical security keys. Avoid losing the keys.</p>
</li>
</ol>
<p>Interestingly the Super Admin will then have a personal email address
and a personal phone linked to the account - I guess there’s some risk
that those could be used as a vector for taking over the account, but
presumably Advanced Protection makes this more challenging.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>There is <a href="https://cloud.google.com/resource-manager/docs/super-admin-best-practices#create_a_super_admin_email_address">GCP guidance</a> on this topic which contradicts the idea of keeping Super Admin accounts individually identifiable - i.e. “not specific to a particular user”. I suspect it’s outdated and have sent feedback on that page. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
Go Trie Benchmarks
https://retout.co.uk/2021/01/10/go-trie-benchmarks/
Sun, 10 Jan 2021 22:02:15 +0000
<p>After <a href="https://retout.co.uk/2020/12/31/golang-trie/">writing a trie</a> I wanted to better understand
its performance, so I <a href="https://github.com/timretout/go-trie-benchmarks">wrote some benchmarks</a>
against various other Go implementations for storing UK postcodes.</p>
<p>At some point since the new year I entirely replaced the implementation from my
last post with one that more closely matches the “pure” trie described at the
start of TAOCP 6.3; i.e. a table of nodes, consisting of a list of entries,
where each node entry can be either a link to another node, or a key (that is,
an entire string stored in the trie).</p>
<p>Perhaps my chosen use case is unusual (postcodes take up less than a 64-bit
pointer!) but I was pleased with the performance; admittedly the current
implementation achieves this by reducing the alphabet size, but I think this
could generalize fairly easy with the alphabet passed in at Trie construction
time. There is room for optimization of the entry lookup, which currently just
indexes into the alphabet string; a hash function would be faster, but I am
unsure whether I want to think about hash collisions.</p>
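<p>For illustration, the entry lookup that indexes into the alphabet string is roughly the following; the alphabet here (letters, digits and a space, enough for postcodes) and the function name are assumptions rather than the package’s actual API.</p>
<pre><code>// Sketch of the "index into the alphabet string" entry lookup; the
// alphabet and names are assumptions, not the published package's API.
package main

import (
	"fmt"
	"strings"
)

const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "

// slot maps a rune to its column in a node's entry table, or -1 when the
// rune falls outside the trie's alphabet.
func slot(r rune) int {
	return strings.IndexRune(alphabet, r)
}

func main() {
	fmt.Println(slot('M'), slot('1'), slot('?')) // 12 27 -1
}
</code></pre>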
<p>The biggest weakness of these benchmarks so far is that they do not look at
prefix search - only testing whether a string exists in the trie. I did
benchmark my trie against a plain Go map, and the trie manages to achieve faster
access if you are looking for keys in insertion order, which I was quite
impressed with - I believe this will be because the right nodes will be in the
L1/L2 caches, whereas a map cannot make use of this. Random access is of course
much faster with a map.</p>
<p>I enjoy this type of algorithm exploration, though I don’t have a serious use
for a trie in the short term. Maybe something will come up!</p>
Golang Trie
https://retout.co.uk/2020/12/31/golang-trie/
Thu, 31 Dec 2020 09:47:47 +0000
<p>A <a href="https://en.wikipedia.org/wiki/Trie">trie</a> (pronounced either “tree” or “try”)
is a data structure typically used to store a set of strings in a way that
allows looking up by prefix efficiently - i.e. unlike a hashmap where the keys
are randomly ordered - this makes it a reasonable choice for an autocompletion
system. A possible advantage over binary trees is that the keys are not stored
in full in each node - so if you have a large number of strings which often have
overlapping prefixes (e.g. “cat”, “cats”, “catastrophe”) then you may be able to
save memory.</p>
<p>I looked around the internet for examples of tries written in Golang, but many
make use of Go “maps” within the nodes; maps are a complex data structure, and
this feels like cheating! More precisely, it is harder to understand the
size/speed of the map and whether this outweighs the space savings of using a
trie in the first place. Similarly with Go slices - <a href="https://blog.golang.org/slices-intro">slices require a pointer and two ints</a>,
so while using them might be idiomatic, each one costs 16 extra bytes
(on 64 bit systems), which seems like it might not be necessary.</p>
<p>Therefore, I wrote <a href="https://github.com/timretout/trie">my own Golang trie</a> to
understand more of the details.</p>
<h3 id="de-la-briandais-tries">De la Briandais tries</h3>
<p>Grabbing my copy of TAOCP vol 3 from the dusty shelf, I looked up tries and
implemented the linked-list version, which apparently is known as a “de la
Briandais” trie.</p>
<p>I made an assumption that we would break strings into UTF-8 runes. My original
node definition was as follows:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-golang" data-lang="golang"><span style="color:#66d9ef">type</span> <span style="color:#a6e22e">node</span> <span style="color:#66d9ef">struct</span> {
<span style="color:#a6e22e">next</span> <span style="color:#f92672">*</span><span style="color:#a6e22e">node</span> <span style="color:#75715e">// 8 bytes (on 64-bit systems)
</span><span style="color:#75715e"></span> <span style="color:#a6e22e">children</span> <span style="color:#f92672">*</span><span style="color:#a6e22e">node</span> <span style="color:#75715e">// 8 bytes
</span><span style="color:#75715e"></span> <span style="color:#a6e22e">character</span> <span style="color:#66d9ef">rune</span> <span style="color:#75715e">// 4 bytes
</span><span style="color:#75715e"></span>}
</code></pre></div><p>Initially I used a separate node with a sentinel value <code>rune(-1)</code> to indicate
that a node ended a key, but later I added a “value” field and used the zero
value. This allowed mapping keys to integers, but means that you cannot store
the zero value against a key - a boolean field can be added to avoid this
limitation.</p>
<h3 id="lookup-performance">Lookup performance</h3>
<p>I made no attempt to keep the linked-list in any particular order.</p>
<p>If there are a large number of keys starting with the same prefix (e.g. if we
are storing random binary bytes, or a large number of Chinese words with an
“alphabet” of around 3000 characters) then the lookup may need to scan through a
large number of nodes and this may not be an effective data structure. However
for small alphabets like English text or UK postcodes, there can be only a
relatively small number of nodes at each level.</p>
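<p>To make that scan concrete, a lookup might look like the sketch below; the <code>Trie</code> wrapper, the sentinel root node and the <code>isKey</code> flag are illustrative assumptions rather than the exact shape of my implementation.</p>
<pre><code>// Sketch of a lookup over the structure above. The Trie wrapper, the
// sentinel root node and the isKey flag are illustrative assumptions.
package trie

type node struct {
	next      *node // next sibling in the linked list
	children  *node // first child
	character rune
	isKey     bool // the optional "this node ends a key" marker discussed above
}

type Trie struct {
	root *node // sentinel whose children are the first-level nodes
}

// Contains walks one linked list of siblings per rune of key.
func (t *Trie) Contains(key string) bool {
	cur := t.root
	for _, r := range key { // iterate UTF-8 runes
		cur = findChild(cur, r)
		if cur == nil {
			return false
		}
	}
	return cur != nil && cur.isKey
}

// findChild scans parent's children - a de la Briandais linked list - for r.
func findChild(parent *node, r rune) *node {
	if parent == nil {
		return nil
	}
	for c := parent.children; c != nil; c = c.next {
		if c.character == r {
			return c
		}
	}
	return nil
}
</code></pre>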
<p>My arbitrary benchmarks on random alphanumeric strings suggest this trie
outperforms some of the competitors in both lookup time and memory usage, but it
is worth testing on your data set to know for sure.</p>
<h3 id="memory-usage">Memory usage</h3>
<p>When benchmarking, I was surprised to find that each node seemed to allocate 32
bytes of memory, rather than 20 bytes. I ruled out <a href="https://dave.cheney.net/2015/10/09/padding-is-hard">struct alignment issues</a>, and realised
instead that <a href="https://medium.com/a-journey-with-go/go-memory-management-and-allocation-a7396d430f44">Golang allocates small objects in fixed size classes</a>,
so the memory usage gets rounded up to a power of two.</p>
<p>I think it might be possible to reduce this by storing all nodes in a single
slice (which would be 8 byte aligned, so each node would use 24 bytes), but I
have not yet tried this.</p>
<p>Expanding strings to runes means that each character takes four bytes - for
English text, often each character requires only one byte when stored within a
string (ignoring the 16 byte string header). It is quite possible that a map or
binary tree or another data structure will use less memory than this trie on
your data set (but changing the node definition to use bytes would have no
effect due to the allocation issue above).</p>
<h3 id="conclusion">Conclusion</h3>
<p>Tries are a relatively basic data structure, but many implementations found in
the wild have unexpected properties when you look under the hood! Writing my
own trie has already taught me more about how Go handles memory, and I may
experiment some more with different implementations to explore the trade-offs.</p>
<p>Assume nothing - always benchmark, and ensure your choice of data structure is
appropriate for your specific use case.</p>
Focusing on Ingenuity
https://retout.co.uk/2020/12/28/focusing-on-ingenuity/
Mon, 28 Dec 2020 08:13:29 +0000
<p>I am led to believe that New Year’s Resolutions seldom work, and that a more
effective approach to goal setting is to choose a <a href="https://gretchenrubin.com/2015/12/new-years-theme-2016">theme for the year</a>. This makes sense to me, as in hindsight 2020 threw up a few surprises.</p>
<p>Last January, buoyed by my success in gaining various technology-related
certifications, I <a href="https://retout.co.uk/2020/01/02/2020/">wrote</a> that I would focus on learning; at
that time I intended to engage in more formal study, but subsequently
rediscovered how little I enjoy essay writing! However, in the end I did learn
a good deal this year, so perhaps I achieved my aim after all.</p>
<p>This year I am taking “ingenuity” as a theme; I want to recapture that joy in
creation. I heard another idea recently - that we are made in the image of God,
and therefore <em>we love to create</em>. I love this, as it legitimizes that feeling
of play for me - and I think matches the best of the original hacker ethos. I
am reevaluating my actions and commitments through this lens, and looking to
spend more time being ingenious (or trying to be)!</p>
Bin Calendar
https://retout.co.uk/2020/12/28/bin-calendar/
Mon, 28 Dec 2020 00:24:37 +0000
<p>Around this time each year it is especially useful to know when the rubbish is
due to be collected by the local council, since the schedule is inevitably
disrupted by the holidays until well into January. In fact where I live we have
fortnightly collections, with different types of bin collected on alternate
weeks, so I never find it easy to remember which bin is due to be put out.</p>
<p>This time last year I wrote a <a href="https://twitter.com/BrickhillBins">bin collections Twitter
bot</a>, mainly while exploring the Twitter API -
but twelve months later, I have grown to accept that Twitter is not the most
appropriate user experience for discovering important notifications.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup></p>
<p>The correct user experience for looking up scheduled events is of course a
<em>calendar</em>. The council do publish <a href="https://bbcdevwebfiles.blob.core.windows.net/webfiles/Bin%20Collection%20Calendars/collection-calendar-A.pdf">a PDF
calendar</a>,
but it requires some deciphering, as they choose to combine everyone’s calendar
on one page and then let you look up the bin colour like an old train timetable,
given that you know your collection day. Fortunately they have also created a
web service which tells you the exact collections due in the next few weeks,
tied (I believe) directly into their backend systems. This API is used by the
main web page which allows everyone to check their bin day, and so it was even
updated correctly when these were disrupted early in the pandemic.</p>
<p>I have adapted my Twitter bot code (which reads this API) to instead publish an
ics file; I have then subscribed to this in my favoured calendaring application,
and enabled notifications at 5pm on the day before the bins are due. This
should work.</p>
<p>A word on the implementation (since I apparently said nothing on my blog about
this a year ago): I wrote it as a Google Cloud Function in Golang, triggered by
Cloud Scheduler via Pub/Sub. The ics file is output to a Cloud Storage bucket.
This requires only a few seconds of CPU time per day, and fits well within the
smallest 128MB RAM quota - so is extremely inexpensive, with no significant attack
surface to worry about.</p>
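<p>For anyone curious about the shape of such a function, here is a stripped-down sketch. The package, bucket and object names are placeholders and the council API call is stubbed out; only the background-function signature and the Cloud Storage write correspond to the setup described above.</p>
<pre><code>// Sketch of the function's shape; everything specific is a placeholder.
package bincalendar

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

// PubSubMessage is the payload delivered by the Pub/Sub trigger.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

// PublishCalendar runs on each Cloud Scheduler tick and rewrites the ics file.
func PublishCalendar(ctx context.Context, _ PubSubMessage) error {
	ics := buildICS() // stub: would call the council API and render the events

	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage client: %w", err)
	}
	defer client.Close()

	w := client.Bucket("example-bin-calendar").Object("bins.ics").NewWriter(ctx)
	w.ContentType = "text/calendar"
	if _, err := w.Write([]byte(ics)); err != nil {
		return fmt.Errorf("write ics: %w", err)
	}
	return w.Close()
}

func buildICS() string {
	return "BEGIN:VCALENDAR\r\nEND:VCALENDAR\r\n"
}
</code></pre>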
<p>Although in an ideal world the council would publish and maintain these
calendars themselves, I find some joy in creating hacks like this.</p>
<section class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1" role="doc-endnote">
<p>For one thing, I hardly ever look at Twitter now - in fact my bin bot is
probably my most frequented Twitter feed - but even when I used Twitter more
frequently, it was easy to lose important posts in a sea of news, jokes and
arguments. I adopted a policy of following only people I knew in real life,
which altered the feel of the network quite substantially; and of course
ensuring a chronological feed rather than relying on the algorithm. But it
didn’t make it feel <em>right</em>, and I’m happy to have social media fade out of my
life. But I digress. <a href="#fnref:1" class="footnote-backref" role="doc-backlink">↩︎</a></p>
</li>
</ol>
</section>
seL4 on Raspberry Pi 3 in AArch64 mode
https://retout.co.uk/2020/05/11/sel4-on-rpi3-aarch64/
Mon, 11 May 2020 23:06:48 +0100
<p><a href="https://research.csiro.au/tsblog/sel4-raspberry-pi-3/">Several references
exist</a> that
document how to <a href="https://docs.sel4.systems/Hardware/Rpi3.html">run seL4 on the Raspberry Pi 3 in 32-bit
mode</a>. One annoying
paper cut encountered when getting this working is the need for a
custom u-boot - either a binary distributed by the authors of seL4, or
reverting a particular commit in u-boot.</p>
<p>A recent release of seL4 mentioned AArch64 support for RPi3. I’ve got
it running, and it appears to avoid the need for a custom
u-boot. (N.B. you still need a GPIO serial cable!)</p>
<p>I started with a stock Raspbian Lite image to get a known-good
filesystem (with start.elf and bootcode.bin), then added u-boot and
seL4 on top.</p>
<p>After installing the appropriate toolchain, seL4test can be configured
and built almost as normal:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">mkdir sel4test <span style="color:#f92672">&&</span> cd sel4test
repo init -u https://github.com/seL4/sel4test-manifest.git
repo sync
mkdir build-rpi3 <span style="color:#f92672">&&</span> cd build-rpi3
<span style="color:#75715e"># Note AARCH64 here</span>
./init-build.sh -DPLATFORM<span style="color:#f92672">=</span>rpi3 -DAARCH64<span style="color:#f92672">=</span><span style="color:#ae81ff">1</span>
ninja
</code></pre></div><p>Copy <code>build-rpi3/images/sel4test-driver-image-arm-bcm2837</code> to the SD
card.</p>
<p>U-Boot was compiled approximately as per <a href="https://a-delacruz.github.io/ubuntu/rpi3-setup-64bit-uboot.html">https://a-delacruz.github.io/ubuntu/rpi3-setup-64bit-uboot.html</a> - I chose to use the latest stable release tag, <code>v2020.04</code>.</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">make CROSS_COMPILE<span style="color:#f92672">=</span>aarch64-linux-gnu- rpi_3_defconfig
make -j -s CROSS_COMPILE<span style="color:#f92672">=</span>aarch64-linux-gnu-
</code></pre></div><p>Copy <code>u-boot.bin</code> to the SD card.</p>
<p>Edit <code>config.txt</code> to add:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-ini" data-lang="ini"><span style="color:#a6e22e">enable_uart</span><span style="color:#f92672">=</span><span style="color:#e6db74">1</span>
<span style="color:#a6e22e">kernel</span><span style="color:#f92672">=</span><span style="color:#e6db74">u-boot.bin</span>
<span style="color:#75715e"># Enable 64 bit mode</span>
<span style="color:#a6e22e">arm_control</span><span style="color:#f92672">=</span><span style="color:#e6db74">0x200</span>
</code></pre></div><p>Additionally I add a <code>boot.scr</code> file that automates loading and
booting the image. First create <code>boot.txt</code> with the commands from <a href="https://docs.sel4.systems/Hardware/Rpi3.html">the seL4 RPi3 support page</a>:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">fatload mmc <span style="color:#ae81ff">0</span> 0x10000000 sel4test-driver-image-arm-bcm2837
bootelf 0x10000000
</code></pre></div><p>Then use mkimage from the u-boot source tree:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">mkimage -A arm -O linux -T script -C none -n boot.scr -d boot.txt boot.scr
</code></pre></div><p>Copy <code>boot.scr</code> to the SD card.</p>
<p>If all is well, your serial output should show the test suite running
to a reassuring conclusion:</p>
<div class="highlight"><pre style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">U-Boot 2020.04 <span style="color:#f92672">(</span>May <span style="color:#ae81ff">11</span> <span style="color:#ae81ff">2020</span> - 22:37:52 +0100<span style="color:#f92672">)</span>
DRAM: <span style="color:#ae81ff">948</span> MiB
RPI <span style="color:#ae81ff">3</span> Model B <span style="color:#f92672">(</span>0xa02082<span style="color:#f92672">)</span>
MMC: mmc@7e202000: 0, sdhci@7e300000: <span style="color:#ae81ff">1</span>
Loading Environment from FAT... *** Warning - bad CRC, using default environment
In: serial
Out: vidconsole
Err: vidconsole
Net: No ethernet found.
starting USB...
Bus usb@7e980000: scanning bus usb@7e980000 <span style="color:#66d9ef">for</span> devices... <span style="color:#ae81ff">3</span> USB Device<span style="color:#f92672">(</span>s<span style="color:#f92672">)</span> found
scanning usb <span style="color:#66d9ef">for</span> storage devices... <span style="color:#ae81ff">0</span> Storage Device<span style="color:#f92672">(</span>s<span style="color:#f92672">)</span> found
Hit any key to stop autoboot: <span style="color:#ae81ff">0</span>
switch to partitions <span style="color:#75715e">#0, OK </span>
mmc0 is current device
Scanning mmc 0:1...
Found U-Boot script /boot.scr
<span style="color:#ae81ff">151</span> bytes read in <span style="color:#ae81ff">5</span> ms <span style="color:#f92672">(</span>29.3 KiB/s<span style="color:#f92672">)</span>
<span style="color:#75715e">## Executing script at 02400000 </span>
<span style="color:#ae81ff">3199560</span> bytes read in <span style="color:#ae81ff">141</span> ms <span style="color:#f92672">(</span>21.6 MiB/s<span style="color:#f92672">)</span>
<span style="color:#75715e">## Starting application at 0x00901000 ... </span>
ELF-loader started on CPU: ARM Ltd. Cortex-A53 r0p4
paddr<span style="color:#f92672">=[</span>901000..c090f7<span style="color:#f92672">]</span>
No DTB passed in from boot loader.
Looking <span style="color:#66d9ef">for</span> DTB in CPIO archive...found at a235c0.
Loaded DTB from a235c0.
paddr<span style="color:#f92672">=[</span>238000..23bfff<span style="color:#f92672">]</span>
ELF-loading image <span style="color:#e6db74">'kernel'</span>
paddr<span style="color:#f92672">=[</span>0..237fff<span style="color:#f92672">]</span>
vaddr<span style="color:#f92672">=[</span>ffffff8000000000..ffffff8000237fff<span style="color:#f92672">]</span>
virt_entry<span style="color:#f92672">=</span>ffffff8000000000
ELF-loading image <span style="color:#e6db74">'sel4test-driver'</span>
paddr<span style="color:#f92672">=[</span>23c000..502fff<span style="color:#f92672">]</span>
vaddr<span style="color:#f92672">=[</span>400000..6c6fff<span style="color:#f92672">]</span>
virt_entry<span style="color:#f92672">=</span>40ecf8
Enabling MMU and paging
Jumping to kernel-image entry point...
<span style="color:#f92672">[</span>...snip test output...<span style="color:#f92672">]</span>
Test suite passed. <span style="color:#ae81ff">128</span> tests passed. <span style="color:#ae81ff">41</span> tests disabled.
All is well in the universe
</code></pre></div>
Lockdown
https://retout.co.uk/2020/05/06/lockdown/
Wed, 06 May 2020 23:23:44 +0100
<p>It seems right that I should mention the pandemic on my personal blog;
I doubt very much that I will say anything original or interesting,
but it would seem strange to look back on my writings from 2020 and
see nothing about COVID-19.</p>
<p>I had to check this, but I have been staying at home since Monday 15th
March, save for occasional walks around the park. (UK schools closed
on 20th March, and we officially announced the lockdown on 23rd March,
but our house started a bit early due to having a mild cough.) That’s
52 days at home for me so far, or 7 weeks and 3 days.</p>
<p>Yet, due to the miracle of modern technology, I have been working
full-time over this period. This has not been stress-free.</p>
<p>We do not suddenly have copious free time; but, still, there have been
some highlights. Recently I dug out a couple of Raspberry Pi boards
and tackled some challenging (for me) projects - I got seL4 to run,
and now I’m learning ARM assembler to debug it (but have concluded I
will need a JTAG interface to make progress). At the weekend I
programmed a set of red/amber/green LED traffic lights using Scratch
while my daughter ignored my feeble attempts at home education.</p>
<p>This pleasant bread-making, home-schooling, online shopping, remote
working life contrasts starkly with the reality of what has been
happening in our hospitals and care homes. This is the paradox of
lockdown - we are isolated, so don’t see the horror. The imagery of
war has been overused already; but this crisis can too easily feel
like one of those distant wars, in a far-off land that happens to look
like our local hospital.</p>
2020
https://retout.co.uk/2020/01/02/2020/
Thu, 02 Jan 2020 12:59:07 +0000
<p>Life comes at you fast. Since <a href="https://retout.co.uk/2019/09/04/pa-consulting/">changing
jobs</a> three months ago, I have earned
several cloud-related certifications, and started an assignment as a
cloud security architect with a large financial services client.</p>
<p>Consulting is quite different to ordinary employment; a great deal of
emphasis is placed on making connections, and building a personal
brand. I’m enjoying the variety of work, and the opportunity to
develop my skills.</p>
<p>My focus this year is on learning; I intend to spend more time reading
and writing, especially as a way to distill and clarify my ideas. Who
knows, I might even post more on this blog?</p>
PA Consulting
https://retout.co.uk/2019/09/04/pa-consulting/
Wed, 04 Sep 2019 17:37:46 +0100
<p>In early October, I will be saying goodbye to my colleagues at CV-Library
after 7.5 years, and joining PA Consulting in London as a Principal
Consultant.</p>
<p>Over the course of my time at CV-Library I have got married, had a
child, and moved from Southampton to Bedford. I am happy to have
played a part in the growth of CV-Library as a leading recruitment
brand in the UK, especially helping to make the site more reliable - I
can tell more than a few war stories.</p>
<p>Most of all I will remember the people. I still have much to learn
about management, but working with such an excellent team, the
years passed very quickly. I am grateful to everyone, and wish them
all every future success.</p>
My Free Software Activities for Jan/Feb 2019
https://retout.co.uk/2019/02/28/free-software-activities/
Thu, 28 Feb 2019 21:55:44 +0000
<p>I have done a small amount of free software work! However, I’m going
to cheat and list it since the start of the year.</p>
<h2 id="social-groups">Social groups</h2>
<p>First, the fun stuff:</p>
<ul>
<li>I organised the first two meetings of the <a href="https://www.meetup.com/Bedford-Linux-User-Group/">Bedford Linux User
Group</a>. Fire
engines were observed on both occasions, but this was pure
coincidence.</li>
<li>I sent pull requests adding a fancy map to the new <a href="https://lug.org.uk/">lug.org.uk
site</a>. I need to follow up to make that
mobile-friendly…</li>
</ul>
<h2 id="apt-security">apt security</h2>
<p>I sent PRs to
<a href="https://whydoesaptnotusehttps.com">whydoesaptnotusehttps.com</a>
adjusting the summary and providing instructions on using HTTPS.</p>
<p>I’m planning to extend this with a threat model for apt.</p>
<h2 id="firefox-app-mode">Firefox app mode</h2>
<p>I wrote a patch for a bug I’ve been subscribed to for a while,
requesting an “app” mode similar to Chrome:</p>
<ul>
<li><a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1283670">https://bugzilla.mozilla.org/show_bug.cgi?id=1283670</a></li>
</ul>
<p>I don’t have much hope that this will actually get reviewed or
applied, because it conflicts with a new “earlyBlankFirstPaint”
feature. Still, I had fun writing it, and if anyone can tell me how
Mozilla development works, I’d be most grateful.</p>
<h2 id="libreoffice-bugs">Libreoffice bugs</h2>
<p>I filed a fun Writer bug:</p>
<ul>
<li><a href="https://bugs.documentfoundation.org/show_bug.cgi?id=122862">https://bugs.documentfoundation.org/show_bug.cgi?id=122862</a></li>
</ul>
<p>If you insert a tall enough image into a page header in LibreOffice
Writer, you can get it to add pages indefinitely. Impressively, the
application remains quite responsive while it tries to flow the
text forever…</p>
<p>…no idea how to fix this one though!</p>