Note: This post is meant as a reference for myself. I only published it in case someone else might find it interesting. I did not spend much time on it.
I host some services on a few rented VPS servers and on my "home lab", which is just an old desktop running in the basement. When I got into self-hosting, I decided I would host everything exclusively in Docker, which has served me pretty well over the last few years. Recently I have learned a lot about Kubernetes and am strongly considering switching my "simple" hosting setup for a more complex Kubernetes cluster. Before I do that, I want to write down what my current setup looks like.
As mentioned, everything is hosted in Docker containers. Generally, I try to keep everything in docker-compose, since this allows me to specify the settings of a container once and easily modify them later. To have multiple services available on ports 80 and 443, I use the Traefik reverse proxy. I use Traefik without a config file, meaning it pulls the routes and rules directly from the labels of the running containers on the VPS. This makes it easy to launch a new service and keep its reverse proxy config directly in the docker-compose file.
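To give an idea of what this looks like, here is a minimal, hypothetical docker-compose service with Traefik v2 labels. The hostname, router name and certificate resolver are placeholders, not my actual config:

services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"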
Since many services use a database, and Postgres seems to be supported by most open-source projects, I decided to run a central Postgres instance in a Docker container. This allows me to back everything up with a simple cron job in a single place. If a service does not support Postgres, I define its database as a separate container directly in its docker-compose file.
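The backup itself is nothing fancy; conceptually it is a single cron entry along these lines (a sketch with a placeholder container name and backup path, not my exact job):

0 3 * * * docker exec postgres pg_dumpall -U postgres | gzip > /backups/postgres-$(date +\%F).sql.gz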
Almost all services need disk access for config files, local data, or similar. I keep a single docker folder that is the root of all locally stored files.
When I first started, I configured everything by hand and documented how, why, and what I did. However, I was not happy with this: I could not test it, and it was prone to errors. Therefore, I decided to use Ansible to set up the server and install all dependencies. This worked so well that I decided Ansible was good enough to operate the entire pipeline, even to automate the deployment of the services.
I have an Ansible role per service, with its configuration (mostly) as Ansible YAML files, and the docker-compose files and other config files as Ansible templates. This works great: with a single ansible-playbook command I can make sure everything is running and has the right config. For most services, I even built logic to make sure that when the docker-compose file or a config file changes, the container is restarted.
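The restart logic boils down to a template task notifying a handler. A simplified sketch (the role name, paths and the docker_root variable are placeholders, not my actual roles) looks like this:

# roles/myservice/tasks/main.yml
- name: Template the docker-compose file
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ docker_root }}/myservice/docker-compose.yml"
  notify: Restart myservice

# roles/myservice/handlers/main.yml
- name: Restart myservice
  ansible.builtin.command:
    cmd: docker compose up -d --force-recreate
    chdir: "{{ docker_root }}/myservice"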
I am quite happy with this system in general. Everything runs stably, backups are easy and automated, and deployments for services that are already configured are a breeze. I can keep the whole "description" of what is running in a single Git repo and make changes by editing config files. This is a huge step up from manually deploying and keeping track of which Docker commands to use for which service.
Recently, however, I have noticed some pain points. In general, I seem to have built a worse subset of Kubernetes myself, just without the robustness that makes Kubernetes so interesting.
I am planning to replace Docker with Kubernetes, specifically K3s, a very lightweight and mostly "batteries included" Kubernetes distribution. Ansible will stay, but only as a tool to set up and configure the OS, install dependencies, and install and run K3s. Deployment of services I want to do either directly using the kubectl command line tool, or more likely using ArgoCD, a project that pulls Kubernetes manifests from a Git repository and automatically deploys them.
For the configuration, I will take a look at Helm.
I recently had to find a way to delete a folder using Ansible that had been created by Docker. The folder had a path like ~/docker/myservice. Since Docker had created it as part of a volume, the folder did not belong to the current user, so deleting it using normal permissions failed.
Deleting with elevated permissions on the command line is easy: the command sudo rm -rf ~/docker/myservice performs the rm operation as the root user. In bash, this will delete the docker/myservice folder in the user's home directory, but when doing the equivalent in Ansible, this won't work!
# This does not work!
- name: Delete the folder using root permissions
  become: true
  ansible.builtin.file:
    path: "~/docker/myservice"
    state: "absent"
This code will try to delete the folder /root/docker/myservice, which is not what we wanted.
The bash version works because the shell resolves the tilde in the argument to the current user's home directory before calling the sudo command. In Ansible, we first switch to the root user and only then is the tilde resolved, this time to the home directory of the root user.
To circumvent this, we can manually resolve the path to an absolute path. Unfortunately, I have not found a straightforward way to do this in Ansible, but the bash command readlink -f <path> does exactly this. To use it in Ansible, we can use the following configuration:
- name: Get absolute folder path
  ansible.builtin.command:
    cmd: "readlink -f ~/docker/myservice"
  register: folder_abs
  changed_when: false

- name: Debug
  ansible.builtin.debug:
    msg: "{{ folder_abs.stdout }}" # prints /home/tim/docker/myservice

- name: Delete the folder using root permissions
  become: true
  ansible.builtin.file:
    path: "{{ folder_abs.stdout }}"
    state: "absent"
With this Ansible script, we manually resolve the absolute path and use it to delete the folder with root permissions. If you know of an easier way to resolve a path to an absolute one, please let me know!
My first real programming experience was with a scripting language called AutoHotkey. This was before I was fluent enough in English to join the English-speaking community around the language, but luckily there was an official German forum. It was really active, with not only newcomers to the language but also veterans. When I joined this forum in my teens, I quickly went from just asking beginner questions to enjoying helping other beginners who asked the same questions I once had. I got better at the language, learned new programming concepts through reading posts, helped others, and shared my projects on this forum. I got excited when I saw a post from other users that I recognized. When AutoHotkey got forked and the new interpreter introduced classes and object-oriented programming, I felt in way over my head. Since I was not alone in this, one person took the time to write an incredibly detailed guide as a forum post. I recently found this post printed on paper: I had printed it right before going on vacation, since I desperately wanted to learn but knew I was not going to have access to the internet for a while. Unfortunately, the German forum has since been discontinued, but some of the pages are still up on the Wayback Machine.
Another community I used to be really active in was for a small indie roleplaying game called Illarion. Again, the community relied heavily on a forum for communication. This time it was used for players to engage in "out of character" communication, as well as a way to simulate a metaphorical bulletin board in the game's town square where characters could leave notes for each other. Since the game was closely inspired by TTRPGs like D&D, the role-playing part was more important than the in-game mechanics. The forum allowed characters whose players were not online at the same time to interact with each other. Again, I got really invested in this community, even going so far as joining other guild-specific forums.
I eventually moved on from both of those amazing communities because my interests changed. I left the AutoHotkey community because I started to get more involved with other programming languages, and I left the Illarion community because I (with the support of my parents) was looking for a less time-intensive game. Unfortunately, I never found another online community like those two again...
Sometime later I joined Reddit and was amazed. It felt like a place where all communities came together on a single site. No need to check multiple websites for new posts; everything was neatly together on a single website, accessible via a single (third-party) app. I remember wondering why people were still using forums when Reddit was so much simpler.
Jumping to the present, I realize that I was wrong. Even though I am subscribed to a bunch of communities on Reddit, I barely comment on any posts and post even less. While I am a community member on record, I do not feel like one. The wealth of communities, as well as the incentive to go to the front page to see the most popular posts of the whole site, made me want to open Reddit, but it did not give me a feeling of belonging. I felt more like a spectator who from time to time gathers the courage to shout his own ideas into the ether.
Side note: Discord comes much closer to the feeling of community. However, the nature of chat makes the interactions fleeting: being in a chat room with a few hundred other people, where every message is just a few sentences at most, does not lead to the same connections. No one expects their message to be read again after a few days.
Now the company behind Reddit has started to lose the goodwill of its users. While I don't think Reddit will die anytime soon, I think there are a lot of people looking for alternatives. And the best alternative to the website that killed forums is... forums.
While forums largely still work the same as they did 15 years ago, there have been developments that might make them a better fit for our desire to have everything accessible on a single site or in a single app. The last time a social media company, Twitter, annoyed its user base, the fediverse, and more specifically Mastodon, started to go more mainstream. This time I hope other projects will benefit as well. I have heard people mention the projects Kbin and Lemmy, both forum-like platforms that implement the ActivityPub specification. As with Mastodon, this means users are able to interact with users on other instances. Going even further, this should also allow users of any federated social network, such as Mastodon, to post and comment on any federated forum. Even established forum software such as Flarum and NodeBB is considering adding federation support.
I really hope that forums make a comeback, not only because of the nostalgia but also because to me they feel like a more sustainable way to build a community. And now, with the possibility to federate via the fediverse, a forum doesn't have to be a walled garden of members anymore. Most importantly, I hope people are still finding communities they can be as passionate about as I was, without corporate overlords trying to keep their eyeballs on ads for as long as possible.
Planning is the process of finding a path in a planning task from the initial state to a goal state. Multiple algorithms have been implemented to solve such planning tasks, one of them being the Property-Directed Reachability algorithm. Property-Directed Reachability utilizes a series of propositional formulas called layers to represent a superset of the states with a goal distance of at most the layer index. The algorithm iteratively improves the layers such that they represent a minimal number of states. This happens by strengthening the layer formulas and thereby excluding states with a goal distance higher than the layer index. The goal of this thesis is to implement a pre-processing step that seeds the layers with a formula that already excludes as many states as possible, to potentially improve run-time performance. We use the pattern database heuristic and its associated pattern generators to exploit the structure of the planning task for the seeding algorithm. We found that seeding does not consistently improve the performance of the Property-Directed Reachability algorithm. Although we observed a significant reduction in planning time for some tasks, it significantly increased for others.
@phdthesis{bachmann2023,
author = {Bachmann, Tim},
year = {2023},
month = {05},
title = {Automated Planning using Property-Directed Reachability with Seed Heuristics},
doi = {10.13140/RG.2.2.11456.30727},
type = {Master's Thesis},
school = {University of Basel}
}
In one of my last blog posts I set up WeeChat in Docker, which has worked pretty well for me so far. However, it started to bug me that I felt the need to regularly check IRC in case I missed someone tagging or private-messaging me. While looking around for how I could be notified of mentions and private messages, I found the trigger plugin, a powerful plugin that comes pre-installed with WeeChat. It lets the user specify a WeeChat command that will be executed when a specific event occurs. This plugin is probably powerful enough to build a small IRC bot directly in WeeChat.
I also recently found the web service ntfy.sh. It sends push notifications whenever you send an HTTP POST request to a certain URL. I already have ntfy.sh installed on my Android phone, and I also found a minimal and lightweight desktop client.
I managed to set up a WeeChat trigger that fires every time I get mentioned (highlighted in WeeChat terminology), and a trigger that fires every time I get a private message. Both of those triggers execute the /exec command, which runs an arbitrary shell command. The exec command runs the wget program to send a POST request to the ntfy.sh server, which in turn sends a notification to all apps subscribed to the same topic the POST request was sent to. I would usually use curl for this instead of wget, but the default WeeChat Docker image doesn't have curl installed.
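If you want to test the ntfy side on its own first, the equivalent plain shell command looks roughly like this (the topic name is a placeholder):

wget -O- --post-data "test message" --header="Title: Test notification" https://ntfy.sh/my_ntfy_topic_1234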
Here you can see the two /trigger commands:
trigger on mention
/trigger addreplace notify_highlight print '' '${tg_highlight}' '/.*/${weechat.look.nick_prefix}${tg_prefix_nocolor}${weechat.look.nick_suffix} ${tg_message_nocolor}/' '/exec -norc -nosw -bg wget -O- --post-data "${tg_message}" "--header=Title: New highlight: ${buffer.full_name}" https://ntfy.sh/my_ntfy_topic_1234'
trigger on private message
/trigger addreplace notify_privmsg print '' '${tg_tag_notify} == private && ${buffer.notify} > 0' '/.*/${weechat.look.nick_prefix}${tg_prefix_nocolor}${weechat.look.nick_suffix} ${tg_message_nocolor}/' '/exec -norc -nosw -bg wget -O- --post-data "${tg_message}" "--header=Title: New private message: ${buffer.full_name}" https://ntfy.sh/my_ntfy_topic_1234'
In case you don't just want to copy and paste some random command from the internet into your WeeChat (which you shouldn't do anyway), I will try to explain the trigger command that fires when you get mentioned in a message:
Let's first look at the trigger command itself:
/trigger addreplace <name> <hook> <argument> <condition> <variable-replace> <command>
We call the /trigger command with the addreplace subcommand. This subcommand will either register a new trigger or replace it if one with the same name already exists.
- name - This argument is self-explanatory: the name of the trigger. In our case I called it notify_highlight, but you could call it whatever you want.
- hook - This argument specifies which hook or event the trigger should listen for. WeeChat is built as an event-driven platform, so pretty much anything from mouse movements to IRC messages is handled via events. In this case, we want to trigger on the print event, which is fired every time a new message is received from IRC.
- argument - The argument is needed for some hooks, but not for the print hook, so we are going to ignore it for now and just set it to an empty string ''.
- condition - The condition must evaluate to true for the trigger to fire. This is helpful because the print trigger fires for every new message, but we only want to be notified when the new message mentions our nick. The condition for this is ${tg_highlight}. You can find the list of variables you can access with the command /trigger monitor, which prints all variables for every trigger that gets executed.
- variable-replace - This took me a while to understand. This argument is used to manipulate data and save it to a variable. The syntax is inspired by the sed command. Explaining it fully is out of the scope of this blog post, but you can take a look at the docs. In our example, we replace the whole content of the variable tg_message with the format string ${weechat.look.nick_prefix}${tg_prefix_nocolor}${weechat.look.nick_suffix} ${tg_message_nocolor}, which results in a string like <tiim> Hello world!.
- command - The last argument is the command that gets executed whenever this trigger fires. In our case, we use the /exec command, which starts wget, which in turn sends a POST request to ntfy.sh. Make sure you set the ntfy topic (the part after https://ntfy.sh/) to something private and long enough that nobody else will guess it by accident.

Don't forget to subscribe to the ntfy topic on your phone or whatever device you want to receive the notification on.
The possibilities with the trigger plugin are endless. I hope this inspires you to build your own customizations using WeeChat.
I recently ran into the problem that when the Cisco AnyConnect VPN is connected, the network connectivity inside of WSL2 stops working. I found a bunch of solutions online for it: most just focus on the fact that the VPN DNS settings are not applied inside WSL2 and therefore no domain names can be resolved. I additionally had the issue that the WSL2 network interface somehow gets disconnected when the VPN starts.
I will show you how I fixed this problem and explain what the commands I used actually do. This post is mostly for my own reference, but I hope it helps someone else as well.
Let's check first if we have internet access inside WSL2. For this run the ping command with an IP address as a destination:
ping 8.8.8.8
If you get something like the following output, your internet connection is fine and it's just the DNS nameserver addresses that are misconfigured; you can jump forward to Solution 2.
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=108 time=4.53 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=108 time=3.94 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=108 time=3.97 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=108 time=3.78 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=108 time=3.77 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=108 time=3.76 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=108 time=3.81 ms
If you don't get any responses from the ping (i.e. no more output after the PING 8.8.8.8 (8.8.8.8) ... line), you need to configure the WSL and VPN network adapter metrics. Go to Solution 1.
To check if the DNS is working, we can again use the ping command, this time with a domain name:
ping google.com
If you get responses, the DNS and your internet connection are working! If not, go to Solution 2.
Run the following two commands in PowerShell as administrator:
Get-NetAdapter | Where-Object {$_.InterfaceDescription -Match "Cisco AnyConnect"} | Set-NetIPInterface -InterfaceMetric 4000
Get-NetIPInterface -InterfaceAlias "vEthernet (WSL)" | Set-NetIPInterface -InterfaceMetric 1
Let me explain what those two commands do. Both follow the same pattern of listing all network adapters, selecting a specific adapter from the list and setting its "metric".
You can imagine an adapter as a virtual network port on the back of your PC or laptop. But instead of sending packets through a wire, the driver for a specific port can do whatever it wants with those packets; in the case of a VPN, the packets get encrypted and forwarded to the internet via another adapter.
The interface metric is a value associated with each adapter that determines the order of those adapters. This allows Windows to determine which adapter to prefer over another.
By setting the interface metric of the Cisco adapter to 4000 and the metric of the WSL adapter to 1, we allow the traffic from WSL to flow through the Cisco adapter. To be honest, I do not exactly understand why this works, but it does.
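If you want to verify the change, something like the following should list all interfaces sorted by their metric:

Get-NetIPInterface | Sort-Object InterfaceMetric | Format-Table InterfaceAlias, AddressFamily, InterfaceMetric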
Setting the DNS servers is, unfortunately, a little more involved than just running two commands: we need to edit the files /etc/wsl.conf and /etc/resolv.conf, and restart WSL in between. Let's get to it:
Edit the file /etc/wsl.conf inside of WSL2 using a text editor. I suggest doing this through the terminal since you need root permissions to do it:
sudo nano /etc/wsl.conf
# feel free to use another editor such as vim or emacs
Most likely this file does not exist yet; otherwise, I suggest you create a backup of the original file to preserve the settings.
Add the following config settings into the file:
[network]
generateResolvConf = false
This will instruct WSL not to override the /etc/resolv.conf file on every start-up. Save the file and restart WSL with the following command so that the changed config takes effect:
wsl.exe --shutdown
Now open a PowerShell terminal and list all network adapters with the following command:
ipconfig /all
Find the Cisco AnyConnect adapter and copy the IP addresses from its DNS Servers field. We will need those IPs in the next step.
Start WSL again and edit the /etc/resolv.conf file:
sudo nano /etc/resolv.conf
Most likely there is already something in this file; you can discard it. If you ever undo these changes, WSL will regenerate this file automatically, so you don't need to back it up.
Delete all the contents and enter the IP addresses you noted down in the last step in the following format:
nameserver xxx.xxx.xxx.xxx
Put each address on a new line, preceded by the string nameserver.
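For example, with two (placeholder) DNS server addresses the file would look like this:

nameserver 10.0.0.1
nameserver 10.0.0.2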
Save the file and restart WSL with the same command as above:
wsl.exe --shutdown
Now open up WSL one more time and set the immutable flag on the /etc/resolv.conf file:
sudo chattr +i /etc/resolv.conf
And for the last time shut down WSL. Your DNS should now be working fine!
I did not have a need to undo the steps from Solution 1, and I'm pretty sure the metrics reset after each system reboot anyway, so there is not much to do.
To get DNS working again when not connected to the VPN run the following commands:
sudo chattr -i /etc/resolv.conf
sudo rm /etc/resolv.conf
sudo rm /etc/wsl.conf
wsl.exe --shutdown
This will first clear the immutable flag from /etc/resolv.conf and delete the file. Next, it will delete /etc/wsl.conf; if you have a backup of a previous wsl.conf file, you can restore it instead. Finally, we shut down WSL again for the changes to take effect.
Unfortunately, this is quite a procedure to get a VPN to work with WSL2, but I'm hopeful that this will soon no longer be necessary.
Today I ran into an error trying to deploy my Go app in Docker, where the container refused to start with the extremely helpful message exec /app/indiego: no such file or directory. I had removed the CGO_ENABLED=0 variable from the Dockerfile because I needed to enable cgo for a library. What I found out was that when cgo is enabled, the resulting binary is no longer statically linked and now depends on libc or musl. Since the scratch image contains literally nothing, the binary can't find the libraries and crashes with the aforementioned error.
To include libc in the container, I simply changed the base image from scratch to alpine, which includes libc. This makes the image slightly larger, but it seemed way easier than trying to include libc directly.
As a bonus I got to delete the /usr/share/zoneinfo and ca-certificates.crt files, and rely on those provided by alpine.
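For reference, the resulting Dockerfile follows the usual multi-stage pattern. This is a simplified sketch rather than the exact file from the commit; the Go version and paths are placeholders:

# build stage: cgo enabled, so the binary links against a C library
FROM golang:1.19-alpine AS build
RUN apk add --no-cache build-base   # C toolchain needed for cgo
WORKDIR /src
COPY . .
RUN CGO_ENABLED=1 go build -o /indiego .

# run stage: alpine instead of scratch, so the dynamically linked binary finds its libc
FROM alpine
COPY --from=build /indiego /app/indiego
ENTRYPOINT ["/app/indiego"]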
You can see the commit to IndieGo here.
I have recently gotten interested in IRC for some reason and have been looking for a client that I like. I have used HexChat in the past, but I don't really fancy having yet another communications program running on my PC next to Discord, Zoom, Telegram and Thunderbird. I have been trying to use the IRC feature of Thunderbird, but even though it works, it feels very much like an afterthought.
The one client I have seen mentioned a lot is WeeChat (not to be confused with WeChat, the Chinese instant messenger). WeeChat runs in the terminal as a TUI, and after a while of getting used to it (and after enabling 'mouse mode') it seems intuitive enough.
The nice thing about WeeChat not being a graphical application is that it can run on a server and be accessed from anywhere over SSH.
Info: Except on mobile devices, but WeeChat has mobile apps that can connect to it directly.
Since I pretty much host all my self-hosted software in Docker on a VPS, I looked around to see if someone had already published a Docker image for WeeChat. There are a bunch of them, but only weechat/weechat (the official image) is still updated regularly. The Docker Hub page does not have any documentation, but I managed to find it in the weechat/weechat-container GitHub repo.
As it says in the readme on GitHub, you can start the container with
docker run -it weechat/weechat
which will run WeeChat directly in the foreground.
Info: Don't skip the -it command line flags. The -i or --interactive flag keeps stdin open, which is required to send input to WeeChat; WeeChat also closes immediately if stdin gets closed, which took me a while to figure out. The -t or --tty flag provides a pseudo-terminal to the container; without it you won't see the user interface of WeeChat.
Running in the foreground is not really that helpful if we want to run WeeChat on a server, so we need to detach from the container (let it run in the background) with the -d or --detach flag. It also helps to specify a name for the container with the --name <name> argument, so we can quickly find the container again later. The docker command now looks like this:
docker run -it -d --name weechat weechat/weechat
When we run this command, we will notice that WeeChat is running in the background. To access it we can run docker attach weechat. To detach from WeeChat without exiting the container, we can press CTRL-p CTRL-q, as described in the docker attach reference.
I noticed that there are two versions of the WeeChat image: a Debian version and an Alpine Linux version. Generally the Alpine Linux versions of containers are smaller than the Debian versions, so I decided to use the Alpine version: weechat/weechat:latest-alpine.
With this we are practically done, but if we ever remove and recreate the container, all of the chat logs and customisations to WeeChat will be gone. To prevent this we need to put the config and log files in a volume.
I generally use the folder ~/docker/(service) to point my Docker volumes to, so I have a convenient place to inspect, modify and back up the data.
Let's create the folders and add the volumes to the Docker container. I also added the --restart unless-stopped flag to make sure the container gets restarted if it either exits for some reason or if Docker restarts.
mkdir -p ~/docker/weechat/data
mkdir -p ~/docker/weechat/config
docker run -it -d --restart unless-stopped \
-v "~/docker/weechat/data:/home/user/.weechat" \
-v "~/docker/weechat/config:/home/user/.config/weechat" \
--name weechat weechat/weechat:latest-alpine`
Running this command on the server is all we need to have WeeChat running in Docker.
But how do I quickly connect to WeeChat? Do I always have to SSH into the server first and then run docker attach?
Yes, but as almost always, we can simplify this with a bash script:
#!/usr/bin/env bash
HOST=<ssh host>
ssh -t "${HOST}" docker attach weechat
This bash script starts ssh with the -t flag, which tells ssh to allocate a pseudo-terminal so the interactive TUI works over the connection.
Copy this script into your ~/.local/bin folder and make it executable.
nano ~/.local/bin/weechat.sh
chmod +x ~/.local/bin/weechat.sh
And that's it! Running weechat.sh will open an SSH session to your server and attach to the WeeChat container. Happy chatting!
If you liked this post, consider subscribing to my blog via RSS or on social media. If you have any questions, feel free to contact me. I also usually hang out in ##tiim on irc.libera.chat. My name on IRC is tiim.
Update 2022-01-18: I have found that at the beginning of a session, the input to WeeChat doesn't always seem to work. Sometimes WeeChat refuses to let me type anything and/or doesn't recognize mouse events. After a while of spamming keys and Alt-m (toggle mouse mode), it seems to fix itself most of the time. I have no idea if that's a problem with WeeChat, with Docker or with SSH, and so far I have not found a solution for it. If you have the same problem or even know how to fix it, feel free to reach out.
Update May 2024: Storj has quietly removed their free plan and seems to hold all images on my website for ransom until I pay for the premium plan. They did not notify me about this happening.
If you pay for the premium version, Storj might still work for you, but after this I personally won't trust them with my data again!
For a while now I have been looking for a way to put images on my website. At first I just embedded them in the website's GitHub repository, but this just doesn't feel right. Putting one or two image assets in a codebase is one thing; putting an ever-growing list of images in there feels icky to me. For this reason I put the last few cover images of my blog posts on the Imgur platform. This is slightly cleaner from a Git standpoint, but now I have to trust Imgur to keep serving these images. Additionally, as I recently discovered, this seems to be against Imgur's TOS:
[...] Also, don't use Imgur to host image libraries you link to from elsewhere, content for your website, advertising, avatars, or anything else that turns us into your content delivery network.
Finally, when I started indie-webifying my website and implementing the Micropub protocol (which I will blog about at a later time), I decided that it was time to host the images on a platform that was actually meant for that. I looked at a few storage providers such as Cloudinary and S3-based object storage and landed on Storj.io, mostly because of the generous free tier, which should suffice for this little blog for quite a while.
One thing that bothered me slightly was that all the storage providers I looked at charge for traffic. It's not the fact that it's an additional expense (if you're not in the free tier anymore) that bothers me, but the fact that I don't have any control over how much it will cost. In all likelihood this will never cost me anything since this blog doesn't get much traffic, but if a post were to go viral (one can dream...), it could result in a surprise bill at the end of the month.
To help with the traffic costs I decided to try to use the free CDN functionality of Cloudflare to reduce the traffic to Storj. In this blog post I will describe how I did that.
If you are in a similar situation as me, and just want to have somewhere to host your images for a personal website or to share images or screenshots as links while still having control over all your data, this could be a good solution.
If you want to build a robust image pipeline with resizing and image optimization, or you are building an enterprise website, this is probably not the right way. You should take a look at Cloudinary or one of the big cloud providers.
To use Cloudflare as a CDN, you need to have Cloudflare set up as the DNS host for the domain you want to serve the images from. Even if you just want to use a subdomain like media.example.com, the whole example.com domain needs to be on Cloudflare. For me this was not much of an issue: I followed the instructions from Cloudflare and pointed the nameservers of my domain to Cloudflare. I did have an issue during the migration, which resulted in my website being down for two hours, but I'm pretty sure this was caused by my previous nameserver provider.
I assume you already have an account at storj.io. The next step is creating a bucket for your images. A bucket is just a place for your files and folders to live in Storj, just like in any other S3-compatible storage provider. (Actually there are no folders in Storj and other S3 services; the folders are just prefixes of the filenames.) When creating a bucket, make sure you save the passphrase securely, for example in your password manager. Whenever Storj asks you for the passphrase, make sure you don't let Storj generate a new one! Every new passphrase will create access to a new bucket.
The next step is installing the uplink CLI. Follow the quick start tutorial to get an access grant. Remember to use the same passphrase from above. Now follow the next quickstart tutorial to add the bucket to the uplink CLI. The file accessgrant.txt in the tutorial only contains the access-grant string that you got in the last step.
Finally we want to share the bucket so the images can be accessed from the web. For this you can run the following command:
uplink share --dns <domain> sj://<bucket>/<prefix> --not-after=none
Replace <domain> with the domain you want to serve the images from. In my case I use media.tiim.ch. Then replace <bucket> with the name of your bucket and <prefix> with the prefix.
As mentioned above, you can think of a prefix as a folder. If you use for example media-site1 as a prefix, then every file in the "folder" media-site1 will be shared. This means you can use multiple prefixes to serve files for multiple websites in the same bucket.
You will get the following output:
[...]
=========== DNS INFO =====================================================================
Remember to update the $ORIGIN with your domain name. You may also change the $TTL.
$ORIGIN example.com.
$TTL 3600
media.example.com IN CNAME link.storjshare.io.
txt-media.example.com IN TXT storj-root:mybucket/myprefix
txt-media.example.com IN TXT storj-access:totallyrandomstringofnonsens
Create the DNS entries in Cloudflare with the values printed in the last three lines. Make sure you enable the proxy setting when entering the CNAME entry to enable Cloudflare's CDN service.
And that's it. All files you put in the bucket with the correct prefix are now available under your domain! :)
If this blog post helped you, or you have some issues or thoughts on this, leave a comment via the comment box below or via webmention.
A few weeks ago, I stumbled on one of Jamie Tanna's blog posts about microformats2 by accident. That is when I first learned about the wonderful world of the IndieWeb. It took me a while to read through some of the concepts of the IndieWeb like webmentions, IndieAuth, microformats and all the other standards, but the more I found out about it, the more I wanted to play around with it. And what better place to try out new technology than on a personal website?
I will start with a brief introduction for the uninitiated. If you have already heard about the IndieWeb, feel free to skip to the next section.
The IndieWeb is a collection of standards intended to make the web social without the user giving up ownership of their data. While on social media platforms (or, in IndieWeb terms, silos) you can easily communicate with others, you are always subject to the whims of those platforms.
The IndieWeb wants to solve this by defining standards that, once implemented in a website, allow it to communicate with other websites that are also part of the IndieWeb.
The most important concept of the IndieWeb is, you have control over your data. All of your shared data lives on a domain you control.
Some of the standards in the IndieWeb include microformats2, Webmention, IndieAuth and Micropub, several of which I touch on below.
As explained in my earlier post First Go Project: A Jam-stack Commenting API, my website is a statically built SvelteKit app hosted on GitHub Pages. This means the most important part of the IndieWeb is already implemented: I own this domain and post my content here.
As mentioned above, the microformats2 standard allows websites to encode data about a page in a machine-readable format. This is accomplished by annotating HTML elements with some predefined class names. For example, the microformat for a blog post, note and other content is called h-entry. By adding the h-entry class to a div, its content is marked as belonging to that post. Children of this div can in turn have other microformat elements such as p-name, p-author or dt-published.
While these CSS classes make the data machine-interpretable, the same data is still visible to the user. There is no duplication like there is with, for example, the OpenGraph meta tags.
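As a rough illustration (hypothetical content, not my exact markup), a post annotated with microformats2 classes can look like this:

<article class="h-entry">
  <h1 class="p-name">Hello IndieWeb</h1>
  <a class="p-author h-card" href="https://tiim.ch">Tim</a>
  <time class="dt-published" datetime="2022-10-30">30 Oct 2022</time>
  <div class="e-content">The content of the post goes here.</div>
</article>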
Since my page is a custom SvelteKit app, it was easy enough to add the CSS classes to the right places. I even took the opportunity to add some more information to the pages, like the author card you see if you scroll to the bottom of this post.
The standard I wanted to play around with the most is webmentions. A webmention is a sort of notification sent from one website A to another website B, telling B that A has a page linking to it.
In the IndieWeb all types of interactions are just web pages. The microformats2 specification for example allows replies, quotes, likes, bookmarks and many other types of interactions. The receiver of the webmention is free to extract any relevant information from the sender page and might display it, for example as a comment.
Since I already have a small custom service running for the comment section on this site, I decided to add support for receiving webmentions to it. I refactored the comment system quite a bit to make it more modular and extendable, which allowed me to add webmention support.
It currently supports all the required and some of the optional features for receiving webmentions. The first thing it does is validate the mention: a mention is only valid if the source and target URLs are valid and if the page at the source URL links to the target URL. The next step is extracting some microformat content from the source URL and saving it to the database. I found some things unexpectedly tricky to implement: for example, a repeated webmention with the same source URL should update the previously saved webmention if the link to the target page is still there, but delete the webmention if the link was removed.
I have tested my webmentions implementation using webmention.rocks, but I would appreciate it if you left me a mention as well 😃
The next thing I wanted to add to my website was sending webmentions. But before I implemented that, I wanted a way to publish short content without spamming my blog feed. For this, I created a new post type called notes. The list of notes lives on the /mf2 page because I plan to mostly use it to publish notes that contain microformats2 classes such as replies and likes. Another reason I didn't want to make it accessible as the /notes page is that I plan to publish my Zettelkasten notes eventually, but this is a story for another post.
I also used the opportunity to add an RSS feed for all my posts, pages, projects, and notes: full-rss.xml. I do not recommend you subscribe to it unless you are curious about all changes to the content on my website.
Sending webmentions was easy compared to receiving webmentions:
On a regular interval (and on page builds), the server loads the full RSS feed and checks what items have a newer timestamp than the last time. It then extracts a list of all URLs from that feed item and loads the list of URLs that it extracted last time. Then a webmention is sent to all the URLs.
Luckily I did not have to implement any of this myself apart from some glue code to fit it together: I used the library gocron for scheduling the regular intervals, gofeed for parsing the RSS feed and webmention for extracting links and sending webmentions.
The next thing on my roadmap is implementing IndieAuth. Although not because I have a real use case for it, but because I'm interested in OAuth, the underlying standard, and this seems like a good opportunity to get a deeper understanding of the protocol.
Although, before I start implementing the next things, I should probably focus on writing blog posts first. There is no use in the most advanced blogging system if I can't be bothered to write anything.
In this blog post, I will explain why server-side rendering with the urql GraphQL library is not as straightforward to do with SvelteKit, and how I solved this in my project anyway.
Server-side rendering (SSR) is one of the great features of SvelteKit. I will try to keep this blog post short and will therefore not explain what server-side rendering is and why you should take advantage of it (you really should!). If you want to know more about SSR you can take a look at this article: A Deep Dive into Server-Side Rendering (SSR) in JavaScript.
SvelteKit implements SSR by providing a load function for every layout and page component. If a page or layout needs to perform some asynchronous operation, this should be done inside this load function. SvelteKit executes this function asynchronously on the server side as well as on the client side, and the return value of this function is assigned to the data prop of the associated component. Usually, this asynchronous operation is loading data from an external service, in the case of this blog post a GraphQL server.
You can of course load data directly in the component, but SvelteKit will not wait for this to complete when doing SSR, and the resulting HTML will not include the loaded data.
The urql library allows us to easily issue GraphQL queries and mutations. Some of the functionality it provides to make our lives easier includes re-running a query automatically when its variables change and refetching a query when a mutation touches the same data.
We want to keep these features even when using urql with SSR.
When implementing SSR in my project, I ran into two problems. I couldn't find any documentation or any articles solving them, so I decided to write down my solutions to those problems in this blog post.
Let's say we have the following load function, which executes a GraphQL query to load a list of red cars:
// src/routes/car/+page.js
import { createClient } from "@urql/svelte";
import { config } from "@/config"; // wherever your GraphQL endpoint config lives
import { carsQuery } from "./query";

/** @type {import('./$types').PageLoad} */
export function load(event) {
const client = createClient({
url: config.url,
fetch: event.fetch,
});
const carColor = "red";
const cars = client
.query(carsQuery, {
color: carColor,
})
.toPromise()
.then((c) => c.data?.car);
return {
cars,
};
}
This example uses the urql method client.query to start a query that gets us a list of cars with a red colour (the GraphQL query itself is not shown, but the exact query is not important for this example).
The client gets a special fetch function from the event, which has a few nice properties, like preventing a second network request on the client side if the same request was just issued on the server side.
Since the query code is now located in the load function and not in a Svelte component, there is no way to easily change the carColor and have urql automatically reload the query. The only way to change the variable is to set the value as a query parameter and read it from the event argument. This, however, means that we have to refresh the whole page just to reload this query.
The other thing urql does for us, reloading the query when we do a mutation on the same data, will not work with the above code either.
To fix those two drawbacks we have to add the same query as in the load function to our component code as well. Unfortunately, this means when a user loads the page, it sends a request from the client side, even though the same request got sent from the server side already.
I created a small wrapper function queryStoreInitialData that creates the query inside of the component and intelligently switches from the (possibly stale) data from the load function to the new data. Using this wrapper, the page or layout might look as follows:
<script>
import { queryStoreInitialData } from "@/lib/gql-client"; // The helper function mentioned above
import { getContextClient } from "@urql/svelte";
import { carsQuery } from "./query"; // The query
export let data;
$: gqlStore = queryStoreInitialData(
{
client: getContextClient(),
query: carsQuery,
},
data.cars
);
$: cars = $gqlStore?.data?.car;
</script>
<div>
<pre>
{JSON.stringify(cars, null, 2)}
</pre>
</div>
The queryStore function gets replaced with the wrapper function.
Unfortunately, we can not return the query result from the load function directly like this:
const result = await client.query(cars, {}).toPromise();
return {
cars: toInitialValue(result),
};
This results in the following error:
Cannot stringify a function (data.events.operation.context.fetch)
Error: Cannot stringify a function (data.events.operation.context.fetch)
at render_response (file:///app/node_modules/@sveltejs/kit/src/runtime/server/page/render.js:181:20)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async render_page (file:///app/node_modules/@sveltejs/kit/src/runtime/server/page/index.js:276:10)
at async resolve (file:///app/node_modules/@sveltejs/kit/src/runtime/server/index.js:232:17)
at async respond (file:///app/node_modules/@sveltejs/kit/src/runtime/server/index.js:284:20)
at async file:///app/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:406:22
This is because the query result contains data that is not serializable.
To fix this I created the toInitialValue function, which deletes all non-serializable elements from the result. The load function now looks as follows:
// src/routes/car/+page.js
import { createClient } from "@urql/svelte";
import { toInitialValue } from "@/lib/gql-client";
import { config } from "@/config"; // wherever your GraphQL endpoint config lives
import { carsQuery } from "./query";

/** @type {import('./$types').PageLoad} */
export const load = async (event) => {
  const client = createClient({
    url: config.url,
    fetch: event.fetch,
  });
  const result = await client.query(carsQuery, {}).toPromise();
  return {
    cars: toInitialValue(result),
  };
};
We will look at the same load function as in Problem 1 - Svelte and urql Reactivity: the function creates a urql client with the fetch function from the event object and uses this client to send a query.
Sometimes however the GraphQL API requires authentication in the form of a cookie to allow access.
Unfortunately, the fetch function that we get from the load event will only pass the cookies on if the requested domain is the same as the base domain or a more specific subdomain of it. This means if your SvelteKit site runs on example.com and your GraphQL server runs on gql.example.com, then the cookies will get forwarded and everything is fine. This however is, in my experience, often not the case. Either you might use an external service for your GraphQL API or you host it yourself and want to use its internal domain.
The only way to pass the cookies on to the GraphQL server, in this case, is by manually setting the cookie header when creating the urql client. This however forces us to use the server-only load function, as we do not have access to the cookie header in the normal load function.
The new code now looks like this:
// /src/routes/car/+page.server.js
import { createClient } from "@urql/svelte";
import { toInitialValue } from "@/lib/gql-client";
import { config } from "@/config"; // wherever your GraphQL endpoint config lives
import { carsQuery } from "./query";

/** @type {import('./$types').PageServerLoad} */
export const load = async (event) => {
  const client = createClient({
    url: config.url,
    fetch,
    fetchOptions: {
      credentials: "include",
      headers: {
        // inject the cookie header
        // FIXME: change the cookie name
        Cookie: `gql-session=${event.cookies.get("gql-session")}`,
      },
    },
  });
  const result = await client.query(carsQuery, {}).toPromise();
  return {
    cars: toInitialValue(result),
  };
};
To keep the size of the load functions across my codebase smaller, I created a small wrapper function createServerClient:
// /src/routes/car/+page.server.js
import { createServerClient, toInitialValue } from "@/lib/gql-client";
import { carsQuery } from "./query";

/** @type {import('./$types').PageServerLoad} */
export const load = async (event) => {
  const client = createServerClient(event.cookies);
  const result = await client.query(carsQuery, {}).toPromise();
  return {
    cars: toInitialValue(result),
  };
};
Below you can find the three functions createServerClient, queryStoreInitialData and toInitialValue that we used above:
// /src/lib/gql-client.js
import { browser } from "$app/environment";
import { urls } from "@/config";
import { createClient, queryStore } from "@urql/svelte";
import { derived, readable } from "svelte/store";
/**
* Helper function to create an urql client for a server-side-only load function
*
*
* @param {import('@sveltejs/kit').Cookies} cookies
* @returns
*/
export function createServerClient(cookies) {
return createClient({
// FIXME: adjust your graphql url
url: urls.gql,
fetch,
// FIXME: if you don't need to authenticate, delete the following object:
fetchOptions: {
credentials: "include",
headers: {
// FIXME: if you want to set a cookie adjust the cookie name
Cookie: `gql-session=${cookies.get("gql-session")}`,
},
},
});
}
/**
* Helper method to send a GraphQL query but use the data from the SvelteKit load function initially.
*
*
* @param {any} queryArgs
* @param {any} initialValue
* @returns
*/
export function queryStoreInitialData(queryArgs, initialValue) {
if (!initialValue || (!initialValue.error && !initialValue.data)) {
throw new Error("No initial value from server");
}
let query = readable({ fetching: true });
if (browser) {
query = queryStore(queryArgs);
}
return derived(query, (value, set) => {
if (value.fetching) {
set({ ...initialValue, source: "server", fetching: true });
} else {
set({ ...value, source: "client" });
}
});
}
/**
* Make the result object of a urql query serialisable.
*
*
* @template T
* @param {Promise<import('@urql/svelte').OperationResult<T, any >>|import('@urql/svelte').OperationResult<T, any >} result
* @returns {Promise<{fetching:false, error: undefined | {name?: string, message?: string; graphQLErrors?: any[]; networkError?: Error; response?: any;}, data: T|undefined}>}
*/
export async function toInitialValue(result) {
const { error, data } = await result;
// required to turn class array into array of javascript objects
const errorObject = error ? {} : undefined;
if (errorObject) {
console.warn(error);
errorObject.graphQLErrors = error?.graphQLErrors?.map((e) => ({ ...e }));
errorObject.networkError = { ...error?.networkError };
errorObject.response = { value: "response omitted" };
}
return {
fetching: false,
error: { ...error, ...errorObject },
data,
};
}
Even though I think this solution is not too bad, I wish @urql/svelte would implement a better way to handle SSR with SvelteKit. I posted a question on the urql GitHub discussions board, but I have not gotten any response yet.
Info: This article was written with @sveltejs/kit version 1.0.0-next.499 and @urql/svelte version 3.0.1. I will try to update this article as I update my codebase to newer versions.
If this post helped you, or you found a better or different way to solve SSR with urql, please let me know in the comments, write me an email or tag me on Twitter @TiimB.
I have recently been looking around for a simple commenting system to integrate into my website. Since my website is a pre-rendered static HTML site hosted on GitHub Pages, there is no way for it to store comments directly, because it does not have a database. The only option for storing dynamic content is an external service.
I kept my eyes open for a service that I liked, but I did not want to just integrate any old service into my website; I did have some requirements.
While looking around for how other people integrated comments into their static websites, I found a nice blog post from Average Linux User which compares a few popular commenting systems. Unfortunately, most systems are either not very privacy-friendly, cost money, or store the comments as comments on GitHub issues..? After looking through the options I decided to use this opportunity to write my own commenting system and dabble with the Go programming language.
First things first: if you want to take a look at the code, check out the GitHub repo.
I decided to write the commenting system in Go because I have been looking for an excuse to practice Go for a while, and this seemed like the perfect fit. It is a small CRUD app, consisting of a storage component, an API component and a small event component in the middle to easily compose the functionality I want.
Currently, it supports the following functionality:
The code is built in a way that makes it easy to customise the features. For example, to disable a feature like the email reply notifications, you can just comment out the line in the main.go file that registers that hook.
To write custom hooks that get executed when a new comment gets submitted or one gets deleted, just implement the Handler interface and register it in the main method.
You can also easily add other storage options like databases or file storage by implementing the Store and SubscribtionStore interfaces.
I currently use it on this website! Go test it out (I might delete the comments if they are rude though 🤔).
In all seriousness, I would not use it for a website where the comments are critical. But for a personal blog or similar, I don't see why not.
If you want to host your own version, there is a Dockerfile available. If you decide to integrate this into your website, please comment below, ping me @TiimB or shoot me an email [email protected], I would love to check it out.
I often go to social media to get news about topics that interest me. Be it web development, gardening life hacks or political news, I can follow people or topics that interest me. But instead of reading about those topics, I often get sucked into an endless hole of content that I did not sign up for. Social media companies deliberately do not want you to limit what is shown to you. It would be too easy to leave and not spend your time watching their precious ads.
But there is another way! By subscribing to RSS feeds you are in control of what you are shown. Most websites, blogs, news sites and even social media sites provide RSS feeds to subscribe to. You get only the articles, videos or audio content you are subscribed to, without any algorithm messing with your attention.
RSS stands for "Really Simple Syndication", and it is a protocol for a website to provide a list of content. It is an old protocol, the first version was introduced in 1999, but it might be more useful nowadays than ever. If you listen to podcasts, you are already familiar with RSS feeds: a podcast is an RSS feed which links to audio files instead of online articles. An RSS feed is just an XML document which contains information about the feed and a list of content. When you use an app to subscribe to an RSS feed, this app will just save the URL to the XML document and load it regularly to check if new content is available. You are completely in control of how often the feed is refreshed and what feeds you want to subscribe to. Some RSS reader apps also allow you to specify some rules for example about if you should be notified, based on the feed, the content or the tags.
Since an RSS feed is just an XML document, you don't technically have to subscribe to a feed to read it; you could just open the document and read the XML. But that would be painful. Luckily there are several plugins, apps and services that allow you to easily subscribe to and read RSS feeds.
If you want to start using RSS and are not sure if you will take the time to open a dedicated app, I would recommend using an RSS plugin for another piece of software that you use regularly. For example, the Thunderbird email client already has built-in RSS support. If you want to read the feeds directly inside your browser, you can use the Feedbro extension for Chrome, Firefox, and other Chromium-based browsers. I use the Vivaldi browser, which comes with an integrated RSS feed reader.
Unfortunately, not every website offers an RSS feed, although it might be worth hunting for one. Some websites offer an RSS feed but do not link to it anywhere. If there is no feed but a newsletter is offered, the service "Kill The Newsletter" will provide you with email addresses and a corresponding RSS URL to convert any newsletter into a feed. Another service to consider is FetchRSS. It turns any website into an RSS feed.
If you want to have a dedicated app for your reading, you're in luck! There is a plethora of apps to choose from, all with different features and user interfaces. There are three main types of apps: standalone apps, service-based apps, and self-hosted apps. Most apps are standalone, meaning they fetch the RSS feeds only when open, and don't sync to your other devices. The service-based apps rely on a cloud service which will fetch the feeds around the clock, even when all your devices are off. They can also send you a summary mail if you forget to check for some time and they can sync your subscriptions across all your devices. Unfortunately, most service-based apps only offer a limited experience for free. The last category is self-hosted apps. They are similar to the service based apps but instead of some company running the service, you have to provide a server for the service to run yourself.
I use a standalone app, because I do not want to rely on a service, but I also don't want to go through the hassle of setting up a self-hosted solution.
If you are still unsure what RSS app you could try out, I provided a list below. Make sure to add the RSS feed for my blog (https://tiim.ch/blog/rss.xml) to test it out 😉
There are many guides on the internet showing how to set up an SSH server inside WSL. This is currently not that easy, and in my experience it is not very stable. An alternative is to run the SSH server outside of WSL, on the Windows side, and set its default shell to the WSL shell (or any other shell for that matter).
Windows has been shipping with an OpenSSH client and server for a long time. They are not installed by default but can be activated either in the Settings app, as described in the official docs, or with the following PowerShell commands.
You will need to start PowerShell as Administrator.
First, install the OpenSSH client and server:
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
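If you want to double-check that both capabilities were actually installed, you can list them with the same Windows capability cmdlets:
Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'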
Enable the SSH service and make sure the firewall rule is configured:
# Enable the service
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'

# Confirm the firewall rule is configured. It should be created automatically by setup. Run the following to verify
if (!(Get-NetFirewallRule -Name "OpenSSH-Server-In-TCP" -ErrorAction SilentlyContinue | Select-Object Name, Enabled)) {
    Write-Output "Firewall Rule 'OpenSSH-Server-In-TCP' does not exist, creating it..."
    New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
} else {
    Write-Output "Firewall rule 'OpenSSH-Server-In-TCP' has been created and exists."
}
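To confirm that the SSH server is actually running, you can check the service status (this assumes the default service name sshd, which is what the capability installs):
Get-Service sshd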
Congratulations, you have installed the SSH server on your Windows machine. And all without manually setting up a background service or modifying config files.
To directly boot into WSL when connecting, we need to change the default shell from cmd.exe or PowerShell.exe to bash.exe, which in turn runs the default WSL distribution. This can be done with the following PowerShell command:
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\WINDOWS\System32\bash.exe" -PropertyType String -Force
Note: even though the shell is running on the Linux side, the SSH server is still running on Windows. This means you have to use the Windows username to log in, and the scp command copies files relative to the user directory on Windows.
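As a concrete (made-up) example, a copy like the following would land in C:\Users\<user>\Documents on the Windows side, not in the WSL home directory:
scp notes.txt <user>@<windows-host>:Documents/notes.txt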
Note: If the user account has Admin permissions, read the next section; otherwise continue reading.
Create the folder .ssh in the user's home directory on Windows (e.g. C:\Users\<username>\.ssh). Run the following commands in PowerShell (not as administrator).
New-Item -Path ~\.ssh -ItemType "directory"
New-Item -Path ~\.ssh\authorized_keys
The file .ssh\authorized_keys will contain a list of all public keys that are allowed to connect to the SSH server.
Copy the contents of your public key file (usually stored in ~/.ssh/id_rsa.pub) to the authorized_keys file. If a key is already present, paste your key on a new line.
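If the public key you want to authorize is the one inside WSL on the same machine, you can also append it from a WSL shell instead of copy-pasting by hand. This assumes your distribution mounts the C drive at /mnt/c, which is the default:
cat ~/.ssh/id_rsa.pub >> /mnt/c/Users/<username>/.ssh/authorized_keys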
If the user is in the Administrators group, it is not possible to have the authorized_keys file in the user directory, for security reasons. Instead, it needs to be located at the path %ProgramData%\ssh\administrators_authorized_keys. A second requirement is that this file is only accessible to Administrator users, to prevent a normal user from gaining admin permissions.
To create the file start PowerShell as administrator and run the following command.
New-Item -Path $env:programdata\ssh\administrators_authorized_keys
This will create the file with the correct permissions. Now open the file and paste your public key into it. The public key should be located at ~/.ssh/id_rsa.pub. If a key is already present, paste your key on a new line.
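If the server still rejects your key, the file permissions are the usual culprit. Something along these lines (run as administrator) restricts the file to Administrators and SYSTEM; treat it as a sketch and compare it with the official OpenSSH-on-Windows docs before running it:
# remove inherited permissions and grant access only to Administrators and SYSTEM
icacls.exe "$env:programdata\ssh\administrators_authorized_keys" /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"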
Verify that you can SSH into your machine by running the following inside WSL:
IP=$(cat /etc/resolv.conf | grep nameserver | cut -d " " -f2) # get the windows host ip address
ssh <user>@$IP
Or from PowerShell and cmd:
ssh <user>@localhost
There are some drawbacks to this approach. If you rely on programs or scripts to work over SSH, this might not be the method for you. Most scripts expect a Unix machine on the other end, and if they expect a Windows machine they will most likely not be configured to deal with WSL.
If you just want to connect to your PC to copy some files or change some settings, however, this approach is perfectly fine.
Did you ever want to listen to your phone's audio on your PC? I do it all the time to listen to podcasts on my PC without paying for a podcast app that syncs the episodes over the cloud. In this short article I will show you two easy ways to do this with a Windows PC.
TLDR: either pair your phone with the PC over Bluetooth and use the Bluetooth Audio Receiver app, or connect the phone to the line-in jack with an audio cable and let Windows play that input through the speakers.
Requirements: A PC with integrated Bluetooth or a Bluetooth dongle.
I recommend this approach over the wired one because it is way less effort, you don't have to deal with a USB-C or Lightning to audio dongle, and in my opinion it is more reliable.
Pair your phone with your PC as usual, by opening the Bluetooth settings on your phone and on the PC and waiting for the devices to show up. Once you have successfully paired the phone, you will not have to do this again. Now you need an app that tells the phone it can use the PC as a wireless speaker. The only app I found that does this is the Bluetooth Audio Receiver app from the Windows Store. Install and open it. You should see your phone in the list of Bluetooth devices in the app. Select it and click on the Open Connection button. It might take a moment, but after it connects, you should hear all sounds from your phone on your PC.
Requirements: An audio (AUX) cable and, if your phone has no headphone jack, a USB-C or Lightning to audio adapter.
This approach works if your PC doesn't support Bluetooth, or if the Bluetooth connection drops for some reason. Connect the audio cable to the blue line-in jack on the back of the computer. Then connect the phone to the other end of the audio cable. If your phone does not have an audio jack, use the adapter on the USB-C or Lightning port. If your PC detects that you connected a new line-in device, it might open the audio settings automatically. If not, right-click on the volume icon on the taskbar next to the clock and select Sounds. Navigate to the Input tab and double-click on the Line-In entry (the one with a cable icon). Navigate to the Monitor tab and select the checkbox "Use this device as a playback source". This tells Windows to play all sounds received through this input directly on the speakers. Usually this is used to monitor microphones, but it works for this use case too. You should now hear any sound from your phone through your PC headphones or speakers. Make sure you turn this checkbox off when you disconnect your phone. Otherwise you might hear a crackle or other sounds when the loose cable gets touched.
Photo by Lisa Fotios from Pexels
Version control systems use a graph data structure to track revisions of files. Those graphs are mutated with various commands by the respective version control system. The goal of this thesis is to formally define a model of a subset of Git commands which mutate the revision graph, and to model those mutations as a planning task in the Planning Domain Definition Language. Multiple ways to model those graphs will be explored and those models will be compared by testing them using a set of planners.
@thesis{bachmann2021,
title = {Modelling Git Operations as Planning Problems},
author = {Tim Bachmann},
year = {2021},
month = {01},
type = {Bachelor's Thesis},
school = {University of Basel},
doi = {10.13140/RG.2.2.24784.17922}
}
Let's say you have a REST API with the following endpoint that returns all of the books in your database:
GET /book/
Your SQL query might look something like this:
SELECT *
FROM books
Sometimes you want to list only some of the books, for example those by a specific author. How do we do this in SQL?
One way would be to concatenate your SQL query, something like this:
const params = [];
let queryString = "SELECT * FROM books WHERE true";
if (authorFilter != null) {
  queryString += " AND author = ?";
  params.push(authorFilter);
}
db.query(queryString, params);
I'm not much of a fan of manually concatenating strings.
Most databases have a coalesce function, which accepts a variable number of arguments and returns the first argument that is not null.
-- Example
SELECT coalesce(null, null, 'tiim.ch', null, '@TiimB') as example;
-- Will return
example
---------
tiim.ch
But how will this function help us?
SELECT *
FROM books
WHERE
author = coalesce(?, author);
If the filter value is null, the coalesce expression resolves to author, and the comparison author = author is true (at least for rows where author is not null), so the filter has no effect. If, on the other hand, the value is set, for example to Shakespeare, then the author column is compared to Shakespeare.
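Putting it together with the same (hypothetical) db.query helper from the snippet above, the application code shrinks to a single parameterized query, and passing null simply disables the filter:
// passing null (instead of undefined) disables the author filter
db.query("SELECT * FROM books WHERE author = coalesce(?, author)", [authorFilter ?? null]);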
I only recently came across this way of implementing optional filters. If you have a more idiomatic way to do this, please let me know ✨
If you liked this post, please follow me here or on Twitter under @TiimB 😎
I recently read the article Serving Vue.js apps on GitHub Pages, and it inspired me to write about what I'm doing differently.
If you want to see an example of this method in action, go check out my personal website on GitHub
I won't be explaining how to set up a Vue project. If you're looking for a tutorial on that, go check out the awesome Vue.js Guide.
So you have set up your awesome Vue project and want to host it on GitHub Pages. The way Muhammad explained it, you would build the project using npm run build, commit the dist/ folder along with your source files, and point GitHub to the dist folder. This might get quite messy, because you either have commit messages with the sole purpose of uploading the dist folder, or you commit the code changes at the same time, which makes it hard to find the relevant changes if you ever want to look at your commits again.
So what can you do about this?
Git to the rescue, let's use a branch that contains all the build files.
To make sure that the branch we are working from stays clean of any build files, we are gonna add a .gitignore file to the root.
# .gitignore
dist/
We are not going to branch off master like we would if we were modifying our code with the intention of merging it back into the main branch. Instead we are gonna create a squeaky clean new branch that will only ever hold the dist files. After all, we will never need to merge these two branches together.
We do this by creating a new git repository inside the dist folder:
cd dist/
git init
git add .
git commit -m 'Deploying my awesome vue app'
We are gonna force push our new git repository to a branch on GitHub. This might go against git best practices but since we won't ever checkout this branch we don't have to worry about that.
git push -f git@github.com:<username>/<repo>.git master:<branch>
⚠️ Make sure you double- or triple-check your destination branch! You don't want to accidentally overwrite your working branch. Using the branch gh-pages will most likely be a good idea.
Now we are almost done. The only thing left is telling GitHub where our assets live.
Go to your repo, navigate to Settings at the top right, and scroll down to GitHub Pages. Enable it and set the source branch to the branch you force pushed to, for example gh-pages.
If you don't mind doing this whole process (Step 2 and 3) every time you want to deploy you can stop now. If you're as lazy as me, here is the script I use to deploy with one command:
#!/usr/bin/env sh
# deploy.sh

# abort on errors
set -e

# build
echo Linting...
npm run lint
echo Building, this may take a minute...
npm run build

# navigate into the build output directory
cd dist

# if you are deploying to a custom domain
# echo 'example.com' > CNAME

echo Deploying...
git init
git add -A
git commit -m 'deploy'

# deploy
git push -f git@github.com:<username>/<repo>.git master:<branch>

cd -
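Assuming you save the script as deploy.sh in the project root (the name is just a convention), deploying becomes a single command:
sh deploy.sh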
If you're on Windows, look into the Windows Subsystem for Linux (WSL); it will be worth it.
If you are still reading, thank you very much. This is actually my first article and I'm really happy to hear about any opinions and criticisms. Happy Coding ♥