To delete git tags on the remote (e.g. GitHub or GitLab) you need to have the tags locally, as the push uses your local tag list.
Ensure you have the tags locally by running `git fetch origin`; you can then run `git tag` to confirm the tags are there.
Removing the tags from the remote can then be done with:
```sh
git push origin --delete $(git tag -l)
```
This passes the output of `git tag -l` to `git push origin --delete`, removing every local tag from the remote.
To delete locally, you can run:
```sh
git tag -d $(git tag)
```
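If you only need to remove a single tag, the same two commands take an individual tag name instead (`v1.0.0` is a placeholder):

```sh
git push origin --delete v1.0.0
git tag -d v1.0.0
```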
When you read about Playwright, a lot of the examples show testing static sites or JavaScript-powered applications and isolating components within them.
However, Playwright has many more applications beyond React-based websites and can help test monolithic or traditional LAMP-based websites (think WordPress or TYPO3).
I've written before about testing the front-end of a TYPO3 project; however, those tests required an accessible application with PHP and a database available.
What if you wanted to test parts of your application as part of a CI without spinning up a whole server?
The general principle behind our CI tests is isolating the HTML while using the application's full bundled JavaScript.
The first step is to isolate the HTML from your application which is specific to the bit of JavaScript you wish to test. Although we include the full JS bundle, we only want to test specific functionality.
We then hard-code the expected HTML in our test. A second benefit to doing it this way is that we have a record of what the expected HTML is - if the markup changes (via the CMS or a developer) and the functionality fails in the real world, we have a record as to why.
Create a new test and use the `setContent` function (setContent in the Playwright docs) on the `page` object to create a page element.
Tip: Browsers will add a `<head>` and `<body>` element if they don't exist so, unless your JavaScript explicitly requires these, you can omit them from your HTML.
```js
import { test, expect } from '@playwright/test';

test('Checks the site selector correctly updates & navigates when isolated', async ({ page }) => {
	// Set the HTML
	await page.setContent(`<div class="alert"></div>`);
});
```
The next thing we do is load the JavaScript. We do this using the `addScriptTag` function (addScriptTag in docs), specifically using the `path` option.
This takes a JavaScript file and loads it into the page itself - the JS file doesn't need to be "accessible" on a URL, which helps keep the test contained.
```js
import { test, expect } from '@playwright/test';

test('Checks the site selector correctly updates & navigates when isolated', async ({ page }) => {
	// Set the HTML
	await page.setContent(`<div class="alert"></div>`);

	// Load the JS
	await page.addScriptTag({
		path: 'app/sites/site_package/Resources/Public/JavaScript/core.js',
	});

	await expect(page.locator('.alert')).toHaveClass('alert-dismissed');
});
```
The `path` is relative to your `playwright.config.ts` (generally the root of your project).
From there you can run the normal `expect()` function to test your JS.
Our convention is to group similar tests with a `test.describe` - one test using isolated HTML like the above, and a second testing on the website itself.
The isolated test has an additional tag of `@ci` - this allows us to run only the tagged tests in our pipeline with the following:
```sh
npx playwright test --grep @ci
```
Our two tests would look something like this:
```js
import { test, expect } from '@playwright/test';

test.describe('Alert test', () => {
	test('Test in isolation', { tag: ['@ci'] }, async ({ page }) => {
		// Set the HTML
		await page.setContent(`<div class="alert"></div>`);

		// Load the JS
		await page.addScriptTag({
			path: 'app/sites/site_package/Resources/Public/JavaScript/core.js',
		});

		await expect(page.locator('.alert')).toHaveClass('alert-dismissed');
	});

	test('Test on the site', async ({ page }) => {
		await page.goto('https://www.mikestreety.co.uk/');
		await expect(page.locator('.alert')).toHaveClass('alert-dismissed');
	});
});
```
If the assertions for the on-site and isolated tests were exactly the same, we could extract them to a function and run it in both instances.
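As a sketch of that extraction (it reuses the imports from the examples above; `alertIsDismissed` is a helper name I've made up):

```js
// Shared assertion used by both the isolated and on-site tests
const alertIsDismissed = async (page) => {
	await expect(page.locator('.alert')).toHaveClass('alert-dismissed');
};

test('Test in isolation', { tag: ['@ci'] }, async ({ page }) => {
	await page.setContent(`<div class="alert"></div>`);
	await page.addScriptTag({
		path: 'app/sites/site_package/Resources/Public/JavaScript/core.js',
	});
	await alertIsDismissed(page);
});

test('Test on the site', async ({ page }) => {
	await page.goto('https://www.mikestreety.co.uk/');
	await alertIsDismissed(page);
});
```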
Using Renovate to update your dependencies is a great way of automating upgrades. However, using the Docker image can quickly fill up your CI server or machine.
With their rapid release schedule, a day can see several new versions appearing. We have Renovate running every 2 hours which, as Renovate updates itself, could see 6 new Docker images downloaded a day (Renovate raises its own version upgrade in one run and then merges it the next).
As we have NPM, Composer, Docker and GitLab CI dependencies updated by Renovate, we find ourselves using the `-full` image which, uncompressed, is over 6GB.
Because of that, we now have the following command running weekly:
```sh
docker rmi `docker image ls | egrep "^renovate/" | awk '{print$3}'`
```
This finds all the images whose repository name starts with `renovate/` and deletes them. When Renovate next runs, it will pull down the image it needs.
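As a sketch, the weekly run could be a crontab entry along these lines (the 3am Sunday schedule is an arbitrary choice):

```sh
# Every Sunday at 3am, remove all local renovate/* images
0 3 * * 0 docker rmi $(docker image ls | grep "^renovate/" | awk '{print $3}')
```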
I don't set up Xdebug regularly enough to remember all the steps and processes involved. These steps are documented really well in the DDEV documentation; however, as those docs need to cater to everyone, I sometimes get waylaid or confused finding the steps relevant to me.
Before starting, make sure you have installed the PHP Debug VS Code extension.
```sh
code .vscode/launch.json .vscode/tasks.json
```
- In the `launch.json` file, paste in the contents of the launch.json file (see below)
- In the `tasks.json` file, paste in the contents of the tasks.json file (see below)

Using the `tasks.json`, this should start Xdebug in the DDEV container (`ddev xdebug on`) and you should be able to start debugging.
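If you want to confirm Xdebug is actually loaded, one way (assuming the standard DDEV web container) is:

```sh
# Should print "xdebug" once `ddev xdebug on` has been run
ddev exec php -m | grep -i xdebug
```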
For any further configuration or documentation, check out the DDEV docs.
File contents copied here for ease/speed
`launch.json`
```json
{
	"version": "0.2.0",
	"configurations": [
		{
			"name": "Listen for Xdebug",
			"type": "php",
			"request": "launch",
			"hostname": "0.0.0.0",
			"port": 9003,
			"pathMappings": {
				"/var/www/html": "${workspaceFolder}"
			},
			"preLaunchTask": "DDEV: Enable Xdebug",
			"postDebugTask": "DDEV: Disable Xdebug"
		}
	]
}
```
`tasks.json`
```json
{
	"version": "2.0.0",
	"tasks": [
		{
			"label": "DDEV: Enable Xdebug",
			"type": "shell",
			"command": "ddev xdebug on"
		},
		{
			"label": "DDEV: Disable Xdebug",
			"type": "shell",
			"command": "ddev xdebug off"
		}
	]
}
```
Despite happily using MinIO to store my runner caches for a few years now, I've been looking for a way to store my global GitLab CI runner caches on the local filesystem.
My reasons for this are twofold: one is infrastructure cost, meaning we only need to pay for and maintain one VPS (as opposed to one for MinIO and one for GitLab CI); the other is speed - this is just a hunch, but storing caches locally is probably quicker than uploading and downloading to a different server.
I did consider using AWS for my GitLab runners & runner cache; however, the big unknown is the cost. I have no clue what my runners and storage would come to, and you hear so many horror stories that I have steered clear.
Instead, I have a VPS on Hetzner, where the costs are fixed and known upfront.
I decided to include an extra mounted volume to store the cache to allow a bit more flexibility and isolate the caching.
Local caching isn't really covered in the GitLab docs - it is referenced but never explicitly laid out - so there was a lot of guesswork as to how to do it and what goes where.
The below talks about the Docker executors and runners, but I assume it could work for the other ones too.
There are two caches when using a runner: the `runner` cache and the `runner.docker` cache. From what I can gather, the `runner` cache is for the actual filesystem, while the path used in the `runner.docker` section is where assets from the `cache:` section of your `.gitlab-ci.yml` get stored.
I had a lot of frustration setting this up, but got there in the end. The system I came up with:

- `/mnt/HC_Volume_1234` (my mounted drive) - I then made a `cache` folder inside it, however GitLab CI likes it being `/cache`
- Inside the `cache` folder, I made `runner` and `docker` sub-folders to help separate the caches
- As GitLab CI expects `/cache`, symlink your cache folder to be `/cache` in the root of your server - `ln -s /path/to/folder /cache`
- Update the `cache_dir` settings in the runner's `config.toml` (see below)
From the standard runner registration, these were the things I had to add/change:
```toml
[[runners]]
  cache_dir = "/cache/runner"

  [runners.docker]
    privileged = true
    volumes = ["/var/run/docker.sock", "/cache:/cache"]
    cache_dir = "/cache/docker"
```
The thing that caught me out is that you need to specify `"/cache:/cache"` in the volumes for the Docker runner; although the GitLab docs say you can just do `"/cache"`, this didn't seem to work for me.
Note the two different `cache_dir` locations for the two different types.
GitLab CI also expects/plays nicer if the folder is `/cache` in the Docker runner - again, I tried setting this as my mounted drive (or other folders) but it just wasn't playing ball.
With that in place - and added to as many runners as you want - they can all access the same cache from your local drive.
If your drive does start to fill up, you can nuke the `runner` and/or `docker` cache folders - it might be worth having this on a scheduled task once a week or similar.
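As a sketch, that scheduled task could be a crontab entry like the following (the schedule is arbitrary and the paths assume the layout above):

```sh
# Empty both runner cache folders at 4am every Monday
0 4 * * 1 rm -rf /cache/runner/* /cache/docker/*
```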
On the face of it, when reflecting back on 2024, it seemed a bit of an uneventful year. No children were born, no major world events happened and I didn't even buy a new bike.
However, when I looked back through my photos and had a proper think about what happened, it turned out to be a pretty good year. Plenty of adventures with the family, I went on a plane for the first time in a long time, and I did buy a bike-related thing.
2024 saw a couple of holidays, plenty of days out to musicals and theme parks (it helps that we're Merlin annual pass holders) and I also rediscovered going to gigs (and hope to go to a few more in 2025).
As a family we went to Peppa Pig World, Drusillas, Legoland (twice, once at Halloween and the other at Christmas), Chessington and the Sea Life Centre (Alfie got a bonus trip to Sea Life Centre twice and another to Chessington in the school holidays).
My wife and I went and saw Hamilton and Stranger Things - The First Shadow. We also took Alfie to his first show by taking him to see the Frozen musical in London.
There were only a couple of gigs I went to at the end of the year. Embrace were doing a 20th anniversary tour of their "Out of Nothing" at Chalk in Brighton. My obsession with Bastille this year was realised and I managed to get tickets to their "&" tour at the Shepherds Bush Empire, along with seeing Dan perform some songs at Resident, a small record shop in Brighton.
Holidays this year took us to Wales and to Cornwall. In Wales we visited Bluestone, which is like Center Parcs. The bonus is you get to drive golf carts to get around the site. Cornwall was a family holiday with the 4 of us. It had its ups and downs (and British holiday mentality with beach visits when it was cloudy) but ultimately was a great 7 days.
At the beginning of the year we had to deal with a snail infestation in our fish tank. From one seemingly innocent snail, we ended up with hundreds (if not thousands) of snails all over the tank. We tried all different techniques, but the key was a Snailcatcher paired with an assassin snail. The assassin snail is still around (and we think it feeds on some of the shrimp), but we haven't seen another pest snail since.
With the start of rest days at work (more on that in a minute) I found myself with more time at home without the family, which let me get on with some DIY bits which weren't pressing. I moved and fenced off the compost bin (so I didn't have to look at it) and replaced the rotting wooden shed with a smaller, plastic one.
After endless leaks, our fibreglass kitchen roof was replaced with an asphalt one by the original builders (who had been coming back to patch the leaks). It was no fault of theirs, the material just didn't bond well, nor did it like the English South-Coast weather.
We swapped the kids' bedrooms round this year. We bought Alfie a cabin bed, which means he has more space in his room to play - something he noticed his friends having. We also took the kids to IKEA for the first time, which was an experience!
Work was a roller-coaster in 2024 and ended with a bit of a squeeze. There were some highlights, however, as we introduced the 9-day fortnight alongside a health cash plan for employees. The 9-day fortnight was well received, with employees getting to select a day over a 2-week period which they can have off as a "rest day". There are some rules and boundaries around it, but it seems to be respected and thoroughly enjoyed.
We had a little bit of staff turnover this year (not as much as 2023, mind) with one of our senior backend developers, Zaq, leaving. It was amicable, but he is missed. Autumn saw us hire two developers in his place - a Junior and a Mid.
I did manage to get a "work trip" squeezed in over the summer. In August I flew to Germany for my first TYPO3 Developer Days conference. It was great to meet some of the TYPO3 developer community, along with having a few days away (and getting on a plane for the first time in a long time).
2024 saw a good cadence of blog posts (I judge more than one a month to be healthy) but only one post from 2024 made it into the top 10. That post about migrating GitLab has been in the top spot since its creation and I don't see it moving anytime soon.
Beer reviews, once again, took a little step down. This is slightly fuelled by health (drinking slightly less), partly by money (craft beers are expensive), as well as starting to find a groove of beers I like and drinking fewer new ones.
With cycling, I've finally divided out eBike rides from non-electric rides to help see where the bulk of the stats are coming from - I was surprised to see my non-eBike rides coming out higher than my eBike ones. The number of rides took a dip but the distance stayed steady.
I purchased a turbo trainer at the end of the year, so I am expecting my distance to be significantly higher next year - I might have to see if I can separate out the "real" miles from the Zwifting ones.
Walking took an increase as I got back into Geocaching. At the beginning of the year I managed a 17 day streak and also discovered Adventure Labs - virtual geocaches where you visit real-world places to answer questions.
A generally active year and I hope it continues.
Every year I run a quiz for our friends on Christmas Eve and thought I would share it. If you wish to run this quiz, you will need:
The quiz can be played in teams or individually - I'll leave it to you to work it out.
This year there does need to be a quiz master due to the music round, however you can omit this if you all want to play.
The slides are on Google, however if you need them in a different format, let me know.
This quiz is 5 rounds with 7 questions in each round.
When running the quiz I ask that phones are put away - more for politeness than fear of cheating. I also make it clear that the answers in the quiz are always right - even if they are not. This way it is fair and should hopefully avoid arguments.
If you have played the "Herd Mentality" game then you understand this round. It is about guessing what everyone else will answer with.
You will be asked to name something and you are trying to guess what the majority will write. The first slide is an example question to practice:
"What is the best chocolate bar?"
You write down what you think everyone else will (Double Decker, right?) and, when ready, all reveal your answers. If there is a majority of the same answer those people (or teams) score a point. If there is a tie, no-one gets a point.
If you need more assistance, read the Herd Mentality rules.
This is about remembering exactly what the song titles are. You are given the artist and the song is played (feel free to play it all).
The players must write the exact title (including punctuation) to get the point.
Marketed as a "Taylor Swift" round, this round is actually about all things swift - the bird, car and even the caravan.
Multiple choice, 1 point per correct answer
I got my 6 year-old to draw things from around the house - what are they?
7 questions about what happened in 2024.
Let me know if you use this quiz and how you get on - was it too easy? Too hard? Too complicated?
Monday morning I tried to upload a screenshot on my Mac Mini and it was taking a lifetime - a quick internet speed test and my heart sank.
My first thought was my Internet Service Provider - I pay for 900 Mbps up and down (which I get close to with a wired connection) but my WiFi devices tend to get 300-500 Mbps. Seeing that upload speed (as a web developer) nearly brought me to tears.
Checking other devices (and phoning the ISP), I realised that my router was in fact getting the full 900, with my Android phone and my wife's Windows laptop getting the expected upload and download speeds. It seemed that both my personal laptop and work Mac Mini were struggling with uploads.
I went on a debugging frenzy - searching holes of Reddit I didn't really want to be in. The oddness of which devices were affected really stumped me.
While I was searching, I came across different fixes for every person which "seemed to work". If you have stumbled across this post because you too are experiencing poor connectivity, these are the things I tried and the things that were suggested.
This is by no means an exhaustive list, nor does it tell you how to solve it, but it is worth skimming through for some pointers.
Is it WiFi?
Plug an ethernet cable into your computer and see if it is just a WiFi issue or if it is your computer interacting with the network.
A specific access point?
Have you got a mesh network? If so, can you connect to another access point to see if it is that causing the issue?
Check the network
If you can, connect to a different network (maybe a friends' or work) to see if it is the device or the network itself.
Review channels & channel width
Check the channels and channel width options for your WiFi networks - can they be optimised? (My Unifi router has the option to optimise these)
Are there other WiFi networks?
Do you have multiple WiFi networks (e.g. a guest or IoT one)? Can you turn off the ones you are not connected to?
Check the frequency band
Can you disable the 2.4Ghz or 5Ghz independently to see if they are interfering with one another?
Local interference?
Is there something which has recently been plugged in or moved near your device which could be interfering?
Mesh power mismatch
If you have multiple access points, are you connecting to the closest one or is another one much further away from you getting in the way?
Check other speed test servers
When running a speed test, does it still happen when you change servers?
Check other devices
Are other devices on your same network experiencing the same issue?
Change the router
If you can, switch the physical router for something else, along with any access points or switches along the path
Reboot everything
Your router, your WiFi access points, your computer
Update everything
Your router, your WiFi access points, your computer
Check cabling
Check all the cables going to and from your router
VPN
Are you currently connected to a VPN?
Network
Are you definitely on the right WiFi?
Disconnect and reconnect
Forget the network and reconnect
DNS
Do you have any custom DNS servers set on your device or router?
Turn off "Low data mode"
System Preferences -> Network -> WiFi -> Details (next to the WiFi name) -> Low Data Mode
Turn off "Limit IP address tracking"
System Preferences -> Network -> WiFi -> Details (next to the WiFi name) -> Limit IP address tracking
Turn off "Private WiFi address"
System Preferences -> Network -> WiFi -> Details (next to the WiFi name) -> Private WiFi address
Lower your MTU (Spoiler, this is what did it for me)
System Preferences -> Network -> WiFi -> Details (next to the WiFi name) -> Hardware
For me, 1436 was the magic number - going any higher than this and the upload dropped again.
What the MTU (Maximum Transmission Unit) actually is, I don't really understand; however, there were a few blog posts that helped me work out what my MTU should be.
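The rough technique those posts describe is a "don't fragment" ping: keep shrinking the packet size until it gets through, then add the 28 bytes of IP/ICMP headers to get the MTU (the flags below are the macOS ones):

```sh
# 1464 bytes of payload + 28 bytes of headers = an MTU of 1492
# If this fails to get through, lower -s and try again
ping -D -s 1464 -c 3 example.com
```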
TL;DR: If presented with a `NET::ERR_CERT_INVALID` Chrome error, focus the Chrome window and type the letters `thisisunsafe` - the window should refresh with the website.
During a website prelaunch, you may wish to preview the new website on an existing domain. To do this, you can update your hosts file, flush the DNS cache and open your browser.
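For illustration, a hosts entry pointing the live domain at the new server might look like this (both the IP and the domain are placeholders):

```
203.0.113.10    www.example.com
```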
Side-note: If you are on a Mac, you can do this with:

- `/etc/hosts` for the host entry
- `sudo killall -HUP mDNSResponder` to flush the DNS cache
- `chrome://net-internals/#dns` to flush Chrome's DNS

However, if the new server is using a self-signed SSL certificate then you are often faced with something that looks like this:
Self-signed certificates often appear when Let's Encrypt is used as the SSL provider. The most common way for Let's Encrypt to issue a certificate is by accessing the website on the "live" domain name - which, if you are previewing a new environment, it won't be able to do. In that case, a self-signed certificate is often used in the meantime.
Chrome, by default, prevents you from accessing sites with a self-signed or invalid SSL certificate (blah blah, security) and instead displays the page shown above with no obvious way to bypass it.
There is a way around this, however, by typing `thisisunsafe`.
It goes without saying (despite me now saying it), you should only do this if you trust the website and server. Using a Chrome extension like Website IP allows you to see which IP you are visiting to ensure it is trustworthy.
To use the "thisisunsafe" workaround, make sure the Chrome error page is focused and type `thisisunsafe` on your keyboard - the page should refresh and load the website.
Reference link: Chrome: Bypass NET::ERR_CERT_INVALID for development
There is often a time, during a website content creation phase, where people have time and resources to spend writing and adapting content, but the new website is not yet set up. During this phase, we opt for writing content in Google Docs, as this prevents anyone being blocked - the clients can continue with content while we configure the CMS. It also means there is content readily available for designers and developers alike.
Using the method below, we create documents for each page of the website. This is generated from a Google Sheet (which is usually generated from a website sitemap/scraping tool).
The script has the ability to "mail merge": any column titles surrounded by `<<` and `>>` will be replaced with the cell contents. It also has the ability to retroactively update variables/placeholders.
Some noteworthy features and/or differences to the original:

- Variables/placeholders are wrapped in `<< >>` (e.g. the Description column becomes `<<description>>`)
)/*
* This overall script is designed to bulk create Google Docs from data within a Google Sheet.
*/
/**
* The main script
*/
function createDocsFromSpreadsheet()
{
// Log starting of the script
logEvent('Script has started');
const spreadsheet = getCurrentSpreadsheet(),
// Get current folder
folder = DriveApp.getFileById(spreadsheet.getId()).getParents().next(),
// Get Data sheet (first sheet)
dataSheet = spreadsheet.getSheets()[0];
let files,
template;
// Assign via destructuring
[files, template] = getOtherFilesFromFolder(folder, spreadsheet);
// Fire the create function
createDocuments(dataSheet, folder, files, template);
logEvent('Script has ended');
}
/**
* Get the currently active spreadsheet
*/
function getCurrentSpreadsheet()
{
var spreadsheet = SpreadsheetApp;
return spreadsheet.getActiveSpreadsheet();
}
/**
* Find all the files (except itself)
*/
function getOtherFilesFromFolder(folder, spreadsheet)
{
// Set up variables
let list = [],
template = false,
files = folder.getFiles();
// Loop through the variables
while (files.hasNext()){
const file = files.next();
// Exclude ourselves
if(file.getId() === spreadsheet.getId()) {
continue;
}
// Create a object with data
let f = {
name: file.getName(),
slug: slugify(file.getName()),
id: file.getId()
};
// Exclude the template
if(f.slug === 'template') {
template = f;
continue;
}
// Keep the rest
list.push(f);
}
return [list, template];
}
/**
* Create the documents
*/
function createDocuments(dataSheet, folder, existingFiles, template) {
// Log starting createDocs Function
logEvent('Starting createDocuments Function');
// Get the formatted spreadsheet data
let headers,
data;
[headers, data] = formatRows(dataSheet.getDataRange().getValues())
if(!data.length) {
return;
}
for(let page of data) {
// Create a file name
let fileName = page.title ? page.title : 'Row: ' + page.row;
// Find or create a new file (maybe from the template)
logEvent('Looking for: ' + fileName);
let file = getOrMakeFile(fileName, existingFiles, template, folder)
if(!file) {
continue;
}
// Populate any variables - even if it's an existing file
let fileId = file.getId();
populateTemplateVariables(fileId, page);
// Get the column with a title of "Link"
let linkColumn = (headers.map(a => a.slug)).indexOf('link');
if(linkColumn >= 0) {
// If it exists, add the URL
dataSheet.getRange(page.row, (linkColumn + 1)).setFormula('=HYPERLINK("' + file.getUrl() + '","' + fileName + '")');
}
// Refresh the spreadsheet so links appear as soon as they are added
SpreadsheetApp.flush();
}
}
/**
* Find an existing file or make a new one
*
* If a "Template" file exists, use that
*/
function getOrMakeFile(fileName, existingFiles, template, folder)
{
let file = false;
let matchingFileList = existingFiles.filter(f => f.name === fileName),
existingFile = matchingFileList.length ? matchingFileList[0] : false;
if(existingFile) {
logEvent('Already exists: ' + fileName);
file = DriveApp.getFileById(existingFile.id)
} else if(template && template.id) {
logEvent('Creating from template: ' + fileName);
try {
file = DriveApp.getFileById(template.id).makeCopy(fileName, folder);
}
catch(e) {
// if failed set variable as false and Log
logEvent('Failed to copy the template: ' + e);
}
} else {
logEvent('Creating empty file: ' + fileName);
try {
file = DocumentApp.create(fileName)
file = DriveApp.getFileById(file.getId())
file.moveTo(folder);
}
catch(e) {
// if failed set variable as false and Log
logEvent('Failed to make a new file: ' + e);
}
}
return file;
}
function populateTemplateVariables(fileId, page) {
let fileContents = false;
try {
fileContents = DocumentApp.openById(fileId).getBody();
}
catch(e) {
// if failed set variable as false and Log
logEvent('Failed to open file: ' + e);
}
if(!fileContents) {
return;
}
for(let key in page) {
fileContents.replaceText('<<' + key + '>>', page[key]);
}
}
function formatRows(rows)
{
let headers = [];
for(let h of rows.shift()) {
headers.push({
title: h,
slug: slugify(h)
})
}
let data = [];
// Start at 2 so it matches with the rows in the sheet
let rowCount = 2;
for(let row of rows) {
let d = {
row: rowCount
};
for (var col = 0; col < row.length; col++) {
d[headers[col].slug] = row[col];
}
data.push(d)
rowCount++;
}
return [headers, data];
}
/**
* Add the menu Item
*/
function onOpen() {
SpreadsheetApp.getUi()
.createMenu('Scripts')
.addItem('Create Docs from this Spreadsheet', 'createDocsFromSpreadsheet')
.addToUi();
}
/**
* Log event (if there is a sheet called Log)
*/
function logEvent(action) {
// Use the scripts logger
Logger.log(action);
// get the user running the script
var theUser = Session.getActiveUser().getEmail();
// get the relevant spreadsheet to output log details
var ss = SpreadsheetApp.getActiveSpreadsheet();
var logSheet = ss.getSheetByName('Log');
if(!logSheet) {
return;
}
// create and format a timestamp
var dateTime = new Date();
var timeZone = ss.getSpreadsheetTimeZone();
var niceDateTime = Utilities.formatDate(dateTime, timeZone, "dd/MM/yy @ HH:mm:ss");
// create array of data for pasting into log sheet
var logData = [niceDateTime, theUser, action];
// append details into next row of log sheet
logSheet.appendRow(logData);
}
/**
* Convert a string into a slug
*/
function slugify(str) {
str = str.replace(/^\s+|\s+$/g, ''); // trim leading/trailing white space
str = str.toLowerCase(); // convert string to lowercase
str = str.replace(/[^a-z0-9 -]/g, '') // remove any non-alphanumeric characters
.replace(/\s+/g, '-') // replace spaces with hyphens
.replace(/-+/g, '-'); // remove consecutive hyphens
return str;
}
```
Problem: I was getting a failing layer push to a Docker registry, even when the image was built without a cache.
Solution: It ended up being a layer which was too big and timing out - I identified the problem layer with Dive and split out my `RUN` instructions.
Recently, on a project which deploys via Docker, I started getting a Docker layer which wouldn't push. Nothing significant had changed with the Dockerfile, image or the base files in the repository.
It manifested itself in the logs (both locally and via Gitlab CI) with a forever looping retry:
```
ec7ca1533d1c: Retrying in 5 seconds
ec7ca1533d1c: Retrying in 4 seconds
ec7ca1533d1c: Retrying in 3 seconds
ec7ca1533d1c: Retrying in 2 seconds
ec7ca1533d1c: Retrying in 1 second
ec7ca1533d1c: Retrying in 10 seconds
ec7ca1533d1c: Retrying in 9 seconds
ec7ca1533d1c: Retrying in 8 seconds
ec7ca1533d1c: Retrying in 7 seconds
ec7ca1533d1c: Retrying in 6 seconds
ec7ca1533d1c: Retrying in 5 seconds
ec7ca1533d1c: Retrying in 4 seconds
ec7ca1533d1c: Retrying in 3 seconds
ec7ca1533d1c: Retrying in 2 seconds
ec7ca1533d1c: Retrying in 1 second
```
Each time it got down to 1 second and failed, the starting number would increase. Eventually, the whole process failed.
I originally thought it would be a caching issue, so I cleared CI caches, removed remote containers & built my image with the `--no-cache` option on the CLI. However, none of this seemed to make a difference.
Speaking to a colleague, he mentioned it was often layer size which timed out and prevented pushing. Doing some research, I found a tool called Dive, which allows you to inspect each layer (and the filesystem differences) of a Docker image: see Dive on GitHub.
It took me a while to identify the problem layer, as the order the layers appeared in the logs was not necessarily the order they were built. I also hadn't clocked that the first 12 characters in the logs were the first 12 characters of each layer's sha256 hash. It seems obvious now but, when you're in the rage haze, it is easy to overlook.
Dive lists out the "Digest" (sha256) of each layer, and I found that if you put the first 12 characters into the terminal search, the problem layer is immediately highlighted when you reach it.
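Running Dive itself is a single command against a locally built image (the tag below is a placeholder for your own):

```sh
# Build the image, then open an interactive view of its layers
docker build -t my-app:latest .
dive my-app:latest
```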
I identified the problem layer as the one where I update and install dependencies in our base Docker image:
```dockerfile
### Install required packages
RUN DEBIAN_FRONTEND=noninteractive apt-get update -y && \
DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
apg \
brotli \
bzip2 \
catdoc \
cron \
curl \
default-mysql-client \
exim4 \
gawk \
gifsicle \
git \
htop \
imagemagick \
jpegoptim \
less \
locales \
nfs-common \
ntp \
php7.4-common \
php7.4-curl \
php7.4-fpm \
php7.4-gd \
php7.4-intl \
php7.4-json \
php7.4-mbstring \
php7.4-mysql \
php7.4-simplexml \
php7.4-xml \
php7.4-zip \
pngcrush \
poppler-utils \
rsync \
snmp \
sudo \
supervisor \
sysstat \
vim \
webp \
wget \
zopfli
```
My thinking behind combining the update & installs in one, at the time, was to reduce the number of layers Docker created. I never considered the size of each layer being an issue.
With some trial and error splitting up the commands, I eventually landed on something like the following. I didn't identify exactly which package was causing the issue (it was late at night), but splitting it into 4 sections created layers small enough that they could be pushed.
It's worth noting we are looking to deprecate this Docker image due to performance, so I didn't want to sink too much time into something which will be replaced soon.
```dockerfile
RUN DEBIAN_FRONTEND=noninteractive apt-get update -y
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
apg \
brotli \
bzip2 \
catdoc \
cron \
curl \
default-mysql-client
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y \
exim4 \
gawk \
gifsicle \
git \
htop \
imagemagick \
jpegoptim \
less \
locales \
nfs-common \
ntp
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y \
php7.4-common \
php7.4-curl \
php7.4-fpm \
php7.4-gd \
php7.4-intl \
php7.4-json \
php7.4-mbstring \
php7.4-mysql \
php7.4-simplexml \
php7.4-xml \
php7.4-zip
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y \
pngcrush \
poppler-utils \
rsync \
snmp \
sudo \
supervisor \
sysstat \
vim \
webp \
wget \
zopfli
```
The main reason for this post is to (hopefully) help someone. I spent a few hours hunting around the internet for similar issues and no-one mentioned it could be the size of your Docker layers.
Just over a week ago, I returned from Germany after attending the multi-day, multi-tracked TYPO3 conference: TYPO3 Developer Days. This conference marked several firsts for me: my first TYPO3-specific conference, my first visit to Germany, my first time traveling alone, and my first time drinking Pilsner for four consecutive evenings (for those who don't know, I review beer in my spare time).
Side note: I found the unfiltered Pilsner had a much more complex and tasty flavor compared to the straight filtered beer.
The conference took place in Karlsruhe, in the south-west of Germany. Although this event brought me to Germany, I didn't see much of the country. My journey consisted of a 6:15am 3-hour National Express ride to Heathrow airport, a wait, a flight and another wait followed by a taxi and train to reach the hotel. I did, however, manage to briefly explore the woodland opposite the hotel, searching for (and finding) a couple of Geocaches after breakfast one morning: Oberwaldtradi - OWT #02 🌲🌳 and Oberwaldtradi - OWT #04 🌲 🌳.
We stayed at the Genohotel, which offered great spaces for socializing and drinking the aforementioned Pilsner, along with a lovely outdoor area that prevented feeling cooped up inside. The food was amazing; I enjoyed every meal and experienced plenty of new dishes, though I couldn't tell you what most of them were called as all the labels were in German.
The conference itself was nothing short of inspirational. Most talks, as expected, focused on TYPO3 itself, its capabilities, and upcoming features. While you do come away feeling like you've been "drinking the Kool-aid" a little, even after the honeymoon phase, I still find myself excited about many cool features coming in future releases.
Two TYPO3-centric talks that particularly excited me were Benjamin Franzke's presentation about upcoming Site Sets and Simon Praetorius' talk on the Vite extension and plugin.
With Site Sets, I've already encountered a couple of scenarios in the last week where they would have been beneficial, mainly around multi-site setups and sharing configuration and plugins.
Vite is something we've been considering at Liquid Light, as we've been using Gulp for over 10 years (Advanced Gulp File is a blog post I wrote when we'd just migrated). Despite Gulp 5 recently being released, we've been looking to optimize our front-end builds. The TYPO3 extension and accompanying NPM package Simon presented are essential for making this switch.
As for non-TYPO3 focused talks, Zack Lott gave a great overview of security scanners and bravely conducted a flawless live demonstration. It provided much food for thought, and I've added Semgrep and Trivy to my to-do list.
Christian Heilmann delivered a fantastic, thought-provoking talk on making the web simpler and more accessible.
This isn't to belittle the other talks, of course - every single one I attended had great takeaways. I furiously scribbled notes during each talk, which I wrote up at the end of each day. They might not mean much to anyone else, but they serve as a nudge to remind me what I learned. If you're interested, you can find them here:
The talks are only half of what makes a conference great. The people truly make conferences special. Everyone I met was kind and welcoming, and I hugely appreciate the effort everyone made to speak in English so I could join in the conversations. I was one of six who had travelled from the UK (a team from Prater Raines, TYPO3 Tom, and Zack), and between us, we couldn't speak much German beyond the basics. I was bowled over by how perfectly everyone spoke English.
I got to meet many incredible people, and it was great that the speakers stayed and mingled so we could pick their brains over a few evenings. There were nearly as many (undocumented) takeaways and learnings from the evenings as there were during the day.
All in all, it was a fantastic, educational, and welcoming atmosphere. As long as time and circumstances allow, I will certainly be back for future TYPO3 Developer Days.
TYPO3 Developer Days Day 3 notes.
See other days:
T3DD Schedule Link / Slides / Video (tbc)
T3DD Schedule Link / Slides / Video (tbc)
- `npm install --save-dev vite vite-plugin-typo3` (the plugin uses `composer.json` to find extensions)
- `Configuration/ViteEntrypoints.json` for paths
- `composer req praetorius/vite-asset-collector`
- A `manifest.json` file is generated by Vite, which TYPO3 consumes
- `glob` to find all the files (set `{eager: true}`)
- `npm add -D sass`
- `@site_package`
)T3DD Schedule Link / Slides (tbc) / Video (tbc)
srcset
- multiple image sizes and resolutions, browser picks. Screen ratio & device pixel ratio are taken into account
has sizes
attribute - tells the browser how much space the image will usepicture
and source
tags - for when you need art direction
srcset
inside source tagpicture
and source
can't be styled, you style the img
tag insteadsrcset
it will tell you the current sourceT3DD Schedule Link / Slides / Video (tbc)
object-fit
with videodefer
on script tags<details>
will auto expand of the text is insidecolour-scheme: light dark;
(MDN link)T3DD Schedule Link / Slides (tbc) / Video (tbc)
My notes, links and useful points from the second day of TYPO3 Developer Days.
See other days:
T3DD Schedule Link / Slides (tbc) / Video (tbc)
T3DD Schedule Link / Slides (tbc) / Video (tbc)
Because this was a case study, it was demonstrating what was achieved so there weren't too many notes.
T3DD Schedule Link / Slides (tbc) / Video (tbc)
l10n_parent
field is used everywhere for localisation config, except tt_content
which uses i18n_parent
uid
. The UID of the translated page is put into _OVERLAY_UID
t3ver_wsid
- What workspace is this?t3ver_oid
- The ID of the original/online/live paget3ver_state
- Is this a change/deletion/additionsys_language_uid
wouldn't be neededPageRepository
has uses the Context
API
cObj
uses PageRepository
PageRepository
has plenty of PSR-14 events to useBackendUtility
for getting Workspace and Language overlays in BEPageRepository
in the BERelationHandler
to read & write related DBsDataHandler
for writingT3DD Schedule Link / Slides / Video (tbc)
`brew install semgrep`
`brew install trivy`
T3DD Schedule Link / Slides (tbc) / Video (tbc)
ctrl
and columns
array keys'type' => 'select'
- even in a TCA overrideT3DD Schedule Link / Slides (tbc) / Video (tbc)
GET
Param
htmlspecialchars
json_encode
<f:format.raw>
and <f:format.htmlentitiesDecode>
do not sanitise<f:format.html>
or <f:sanitize.html>
insteadlib.parseFunc
)sqlmap
- Runs common SQL injection commandsstrict
cookies where possiblelax
is still a bit stricter than none
T3DD Schedule Link / Slides / Video (tbc)
git pull
on live server)Read time: 6 mins
Tags:
TYPO3 Developer Days is taking place in Germany over the next few days. As I attend each talk, I've been writing bullet points in my notebook of noteworthy things, things I agree with, things to remember or things to look up later.
The following post is those bullet points in a digital format. They probably won't make sense to anyone, but serve as a nudge for future me and stop them from living and dying in my notebook. They are also my twist and interpretation of what was said - some of it is verbatim, but other notes are what I took from it.
See other days:
T3DD Schedule Link / Slides (tbc) / Video (tbc)
T3DD Schedule Link / Slides / Video (tbc)
$
replace with documentQuerySelector(All)
attr
with getAttribute
/setAttribute
.data('*')
- use regex to replace with `.dataset.$1`
closest
exists in native JS toonew RegularEvent('change', function() {}).delegateTo
(ref)AjaxResponse
class to use instead of $.ajax
T3DD Schedule Link / Slides / Video (tbc)
Configuration/Sets/[SET NAME]
Configuration/Sets/*/config.yaml
name
and label
settings.yaml
is available in v12 inside the config/sites
folder
constants.typoscript
getSettings()
)setup.typoscript
and page.tsconfig
inside the site set folder will be loaded automaticallyEXT:form/Configuration/TypoScript/setup
) and, instead, import the site set
sys_template
)constants.typoscript
(although still compatible)T3DD Schedule Link / Slides (tbc) / Video (tbc)
T3DD Schedule Link / Slides / Video (tbc)
This was a more practical talk that went a little over my head, hence the small notes
createStub
@template
) to better testing - also this helps with IDE hintingT3DD Schedule Link / Slides (tbc) / Video (tbc)