People seem to have a love-hate relationship with NixOS. The Curse of NixOS by Wesley Aptekar-Cassels is an excellent article that tells the good and the bad of NixOS, and serves as great background reading for this one.
I am not a NixOS user, nor do I want to be. Still, I got stormed on GitHub by issues from NixOS users, which can be summarized by a quote from the NixOS Wiki:
Downloading and attempting to run a binary on NixOS will almost never work.
Here is a demonstration:
[nix-shell:~]# ./dart --version
bash: ./dart: No such file or directory
[nix-shell:~]# ls -l dart
-rwxr-xr-x 1 root root 5154328 May 31 23:59 dart
Running the dart executable downloaded from dart.dev fails in a very confusing way on NixOS. At a glance it looks like bash is complaining that the file ./dart does not exist. However, checking with ls immediately rejects that idea. This is why many NixOS users get lost: they have no choice but to seek help from the developer. It is pretty common for developers to have no clue either, given the limited and seemingly contradictory information. I don't blame puzzled NixOS users, as I would probably have no idea myself without my previous experience of dealing with three different libcs on a single Linux system.
Linux executable files are usually in the Executable and Linking Format (ELF). For dynamically linked programs, the loading of shared libraries is performed by ld-linux.so, the dynamic linker, whose location is hard-coded in the PT_INTERP program header. Most Linux distributions keep the dynamic linker at a common location, so that programs compiled on one distribution can run on others, as long as all dependencies are satisfied.
[nix-shell:~]# ldd dart
linux-vdso.so.1 (0x00007ffd53b6b000)
libdl.so.2 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libdl.so.2 (0x00007f6b42fc7000)
libpthread.so.0 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libpthread.so.0 (0x00007f6b42fc2000)
libm.so.6 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libm.so.6 (0x00007f6b42ee2000)
libc.so.6 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libc.so.6 (0x00007f6b42cd9000)
/lib64/ld-linux-x86-64.so.2 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 (0x00007f6b43677000)
[nix-shell:~]# ls -l /lib64/ld-linux-x86-64.so.2
ls: cannot access '/lib64/ld-linux-x86-64.so.2': No such file or directory
NixOS is different: ld-linux.so does not exist at its common location, hence the "no such file or directory" error. This comes from a deliberate decision of Nix to ignore the Filesystem Hierarchy Standard (FHS), and NixOS created a tool called patchelf to rectify the consequences.
[nix-shell:~]# patchelf --set-interpreter /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 ./dart
[nix-shell:~]# ldd dart
linux-vdso.so.1 (0x00007fff717bc000)
libdl.so.2 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libdl.so.2 (0x00007f523a6ed000)
libpthread.so.0 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libpthread.so.0 (0x00007f523a6e8000)
libm.so.6 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libm.so.6 (0x00007f523a608000)
libc.so.6 => /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib/libc.so.6 (0x00007f523a3ff000)
/nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 (0x00007f523ada6000)
[nix-shell:~]# ./dart --version
Dart SDK version: 2.19.6 (stable) (Tue Mar 28 13:41:04 2023 +0000) on "linux_x64"
Once the PT_INTERP program header is modified to point to the correct location for NixOS, the dart executable works as expected. However, even with the tools available, it is still a burden for developers who distribute precompiled Linux binaries. On NixOS, the location of ld-linux.so changes every time glibc is updated, so distributing an already-modified ELF is unreliable. Patching at runtime is also undependable, as patchelf may not be available.
ld-linux.so
The dynamic linker is a shared library, but a special runnable one: it can be invoked explicitly to execute an ELF.
[nix-shell:~]# ./dart --version
bash: ./dart: No such file or directory
[nix-shell:~]# /nix/store/4nlgxhb09sdr51nc9hdm8az5b08vzkgx-glibc-2.35-163/lib64/ld-linux-x86-64.so.2 ./dart --version
Dart SDK version: 2.19.6 (stable) (Tue Mar 28 13:41:04 2023 +0000) on "linux_x64"
The unmodified dart executable works on NixOS when invoked explicitly through ld-linux.so. As for the uncertain location of the dynamic linker on NixOS, it can be discovered by reading the PT_INTERP program header of /proc/self/exe.
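As an illustration, here is a minimal sketch of that idea in Ruby. It assumes a 64-bit little-endian ELF for brevity (the elf.rb referenced later in this article handles 32-bit and big-endian layouts as well), and the method name elf_interpreter is just for this example:

# Minimal sketch: read PT_INTERP from a 64-bit little-endian ELF.
def elf_interpreter(path = '/proc/self/exe')
  File.open(path, 'rb') do |f|
    # Verify the ELF magic number.
    return nil unless f.read(4) == "\x7FELF".b

    # e_phoff (program header table offset) is at 0x20 in the ELF64 header.
    f.seek(0x20)
    e_phoff = f.read(8).unpack1('Q<')
    # e_phentsize and e_phnum are adjacent at 0x36 and 0x38.
    f.seek(0x36)
    e_phentsize, e_phnum = f.read(4).unpack('S<S<')

    # Scan the program headers for PT_INTERP (type 3).
    e_phnum.times do |i|
      f.seek(e_phoff + i * e_phentsize)
      next unless f.read(4).unpack1('L<') == 3

      # p_offset (0x08) and p_filesz (0x20) locate the interpreter string.
      f.seek(e_phoff + i * e_phentsize + 0x08)
      p_offset = f.read(8).unpack1('Q<')
      f.seek(e_phoff + i * e_phentsize + 0x20)
      p_filesz = f.read(8).unpack1('Q<')

      # The string includes a trailing NUL, which we strip.
      f.seek(p_offset)
      return f.read(p_filesz).delete("\0")
    end
    nil
  end
end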
Normally, when facing this kind of issue, NixOS users have to take on the challenge of either modifying downloaded programs or building programs from source, which the vast majority of NixOS users are not familiar with. Even after an official repackaging lands in Nixpkgs, the complaints from NixOS users do not stop, as many users may still choose to download binaries instead.
A radical solution was born to put a stop to this:
# Try to execute ELF.
begin
  exec(*COMMAND, *ARGV)
# Catch the "no such file or directory" error.
rescue Errno::ENOENT
  # Locate `ld-linux.so` by parsing `/proc/self/exe`.
  # See: https://github.com/sass-contrib/sass-embedded-host-ruby/blob/main/lib/sass/elf.rb
  require_relative 'elf'

  # Rethrow if `ld-linux.so` cannot be located.
  raise if ELF::INTERPRETER.nil?

  # Invoke `ld-linux.so` to execute ELF.
  exec(ELF::INTERPRETER, *COMMAND, *ARGV)
end
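This works because Ruby's Kernel#exec raises Errno::ENOENT not only when the command itself is missing, but also when the interpreter recorded in its PT_INTERP header is missing, which is exactly the failure mode on NixOS.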
The solution above was shipped as part of the sass-embedded gem. It worked for a while, until NixOS managed to break it with another innovative non-standard behavior in the 24.05 release:
NixOS now installs a stub ELF loader that prints an informative error message when users attempt to run binaries not made for NixOS.
In the past, there was either a properly working ld-linux.so at the standard location or none at all. Now NixOS installs a stub-ld, which does nothing but immediately exit with an error message, at the standard location of ld-linux.so, so ENOENT no longer occurs when calling exec. The exec system call will always succeed in running stub-ld, stub-ld will always fail, and the actual foreign binary will never execute. To avoid stub-ld altogether, the command must be modified to begin with a real ld-linux.so before exec is even called.
A more radical solution was born:
# Locate `ld-linux.so` by parsing `/proc/self/exe`.
# See: https://github.com/sass-contrib/sass-embedded-host-ruby/blob/main/lib/sass/elf.rb
require_relative '../../lib/sass/elf'

module Sass
  module CLI
    # Preparsed `ld-linux.so` of the foreign binary, different for each platform.
    INTERPRETER = '/lib/ld-linux-aarch64.so.1'

    # The last component of the preparsed `ld-linux.so`.
    INTERPRETER_SUFFIX = '/ld-linux-aarch64.so.1'

    # Prepend the `ld-linux.so` of `/proc/self/exe` if it differs from the `ld-linux.so`
    # of the foreign binary yet shares the same filename, indicating that they are compatible.
    COMMAND = [
      *(ELF::INTERPRETER if ELF::INTERPRETER != INTERPRETER && ELF::INTERPRETER&.end_with?(INTERPRETER_SUFFIX)),
      File.absolute_path('dart-sass/src/dart', __dir__).freeze,
      File.absolute_path('dart-sass/src/sass.snapshot', __dir__).freeze
    ].freeze
  end

  private_constant :CLI
end
It’s unlikely that NixOS will be able to break it again. I hope.
When Pro Display XDR is connected via 40 Gbit/s Thunderbolt 3, it uses dual-link SST DisplayPort in High Bit Rate 3 (HBR3) mode. A bandwidth of 36.64 Gbit/s is required for transmitting uncompressed 6K 60Hz 10-bit HDR video, which greatly exceeds the 20 Gbit/s of USB 3.2 Gen 2×2, so it is not possible to transmit the video uncompressed over USB 3.2 Gen 2×2.
However, it is lesser known that Apple's Pro Display XDR can actually run at its full 6016 × 3384 resolution at 60 Hz with 10-bit HDR using under 20 Gbit/s of USB 3.2 Gen 2×2. DSC (Display Stream Compression), a visually lossless compression standard, is the key technology that makes it possible.
To deliver resolution higher than 4K 60Hz over USB 3.2 Gen 2×2, both the display and the graphics card must support DSC. With 10-bit HDR (30 bpp) compressed to 12 bpp at a 2.5:1 compression ratio, the bandwidth required to deliver compressed 6K 60Hz 10-bit HDR video is only 14.66 Gbit/s.
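These figures follow directly from the pixel rate, ignoring blanking overhead: 6016 × 3384 pixels × 60 Hz × 30 bpp ≈ 36.64 Gbit/s uncompressed, and the same pixel rate × 12 bpp ≈ 14.66 Gbit/s after DSC.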
When Pro Display XDR is connected via USB 3.2 Gen 2×2, it uses single-link SST DisplayPort in High Bit Rate 2 (HBR2) mode. The required bandwidth of 14.66 Gbit/s for transmitting compressed 6K 60Hz 10-bit HDR video is less than the 17.28 Gbit/s of HBR2. USB-C DisplayPort Alternate Mode does not employ USB 2.0 lanes, so DisplayPort Alternate Mode can be used together with USB 2.0 for video and data transmission via a single cable.
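For reference, HBR2 runs four lanes at 5.4 Gbit/s each, and after 8b/10b encoding the usable data rate is 21.6 × 0.8 = 17.28 Gbit/s.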
From Pro Display XDR Technology Overview:
Pro Display XDR requires a GPU capable of supporting DisplayPort 1.4 with Display Stream Compression (DSC) and Forward Error Correction (FEC), or a GPU supporting DisplayPort 1.4 with HBR3 link rate and Thunderbolt Titan Ridge for native 6K resolution.
The key to getting Pro Display XDR to work with USB 3.2 Gen 2×2 is using the correct adapter or cable.
For graphics cards that have a built-in USB-C VirtualLink port, like the Nvidia GeForce RTX 20 series or AMD Radeon RX 6000 series, full-featured USB-C to USB-C cables certified for 20 Gbit/s should work. However, do not use active Thunderbolt cables like the Thunderbolt 3 Pro Cable that comes with Pro Display XDR, because active Thunderbolt cables are not backward compatible with USB 3.
For graphics cards without a built-in USB-C VirtualLink port, a bidirectional USB-C to DisplayPort adapter or cable is needed. In addition, in order to use the USB hub on the back of Pro Display XDR, adjust brightness, or change the display preset, an adapter or cable that supports USB 2.0 lanes is required.
The best DisplayPort to USB-C adapters and cables supporting both DisplayPort Alternate Mode and USB 2.0 channel are the following:
Boot Camp drivers need to be installed to adjust brightness or change the display preset under Windows.
Normally, Apple Boot Camp refuses to install on non-Apple devices. To install Boot Camp on any Windows PC, start the installer from the Command Prompt:
msiexec /i BootCamp.msi
With the right graphics card, the right cable, and the right drivers, Pro Display XDR can deliver brilliant 6K 60Hz 10-bit HDR with its full functionality on any Windows PC.
Crostini, a.k.a. Linux on Chrome OS, runs a virtual machine named termina. Inside termina, a container named penguin running under lxc is exposed to users via the Terminal app.
The penguin container is based on Debian. The Kubic project provides podman packages for Debian.
curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_$(. /etc/os-release && echo "$VERSION_ID")/Release.key" | sudo gpg --dearmor --yes -o /usr/share/keyrings/kubic-libcontainers-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/kubic-libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_$(. /etc/os-release && echo "$VERSION_ID")/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
sudo apt update -qq
sudo apt install -qq -y podman buildah skopeo
Unfortunately, podman does not function properly out of the box in Crostini. Below are a few common issues and how to fix them.
Error: mount `proc` to '/proc': Operation not permitted: OCI permission denied
This error is due to the following lxc config for the penguin container:
config:
  security.nesting: "false"
The solution is to set the lxc config security.nesting to "true".
ERRO[0000] cannot find UID/GID for user linuxbrew: No subuid ranges found for user "linuxbrew" in /etc/subuid - check rootless mode in man pages.
WARN[0000] using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids
This error is due to /etc/subuid and /etc/subgid missing an entry for the current user. The solution is to add a range for the current user to /etc/subuid and /etc/subgid.
Error: kernel does not support overlay fs: unable to create kernel-style whiteout: operation not permitted
This error is due to the Linux kernel in Crostini not having OverlayFS support. The solution is to use the btrfs storage driver.
Open Google Chrome, press Ctrl + Alt + T to get the crosh shell, and run:
vsh termina
Once inside the termina virtual machine shell, run:
lxc config set penguin security.nesting true
lxc restart penguin
lxc exec penguin -- /bin/sh -c "printf '%s\n' '1000:100000:65536' | tee /etc/subuid /etc/subgid"
lxc exec penguin -- /bin/sed -i -e 's/^driver[[:space:]]*=.*$/driver = "btrfs"/' /etc/containers/storage.conf
lxc exec penguin -- /bin/rm -rf /var/lib/containers/storage
Now, rootless podman should work in Crostini!
SECURE256:+SECURE128:-VERS-TLS1.0:-VERS-TLS1.1:-VERS-DTLS1.0:-AES-128-CBC:-AES-128-CCM:-AES-256-CBC:-AES-256-CCM:-RSA:-SHA1
GnuTLS manual - priority strings
$ gnutls-cli --priority SECURE256:+SECURE128:-VERS-TLS1.0:-VERS-TLS1.1:-VERS-DTLS1.0:-AES-128-CBC:-AES-128-CCM:-AES-256-CBC:-AES-256-CCM:-RSA:-SHA1 --list
Cipher suites for SECURE256:+SECURE128:-VERS-TLS1.0:-VERS-TLS1.1:-VERS-DTLS1.0:-AES-128-CBC:-AES-128-CCM:-AES-256-CBC:-AES-256-CCM:-RSA:-SHA1
TLS_AES_256_GCM_SHA384 0x13, 0x02 TLS1.3
TLS_CHACHA20_POLY1305_SHA256 0x13, 0x03 TLS1.3
TLS_AES_128_GCM_SHA256 0x13, 0x01 TLS1.3
TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2
TLS_ECDHE_ECDSA_CHACHA20_POLY1305 0xcc, 0xa9 TLS1.2
TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2
TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2
TLS_ECDHE_RSA_CHACHA20_POLY1305 0xcc, 0xa8 TLS1.2
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_DHE_RSA_AES_256_GCM_SHA384 0x00, 0x9f TLS1.2
TLS_DHE_RSA_CHACHA20_POLY1305 0xcc, 0xaa TLS1.2
TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2
Protocols: VERS-TLS1.3, VERS-TLS1.2, VERS-DTLS1.2
Ciphers: AES-256-GCM, CHACHA20-POLY1305, AES-128-GCM
MACs: AEAD
Key Exchange Algorithms: ECDHE-ECDSA, ECDHE-RSA, DHE-RSA
Groups: GROUP-SECP384R1, GROUP-SECP521R1, GROUP-FFDHE8192, GROUP-SECP256R1, GROUP-X25519, GROUP-X448, GROUP-FFDHE2048, GROUP-FFDHE3072, GROUP-FFDHE4096, GROUP-FFDHE6144
PK-signatures: SIGN-RSA-SHA384, SIGN-RSA-PSS-SHA384, SIGN-RSA-PSS-RSAE-SHA384, SIGN-ECDSA-SHA384, SIGN-ECDSA-SECP384R1-SHA384, SIGN-RSA-SHA512, SIGN-RSA-PSS-SHA512, SIGN-RSA-PSS-RSAE-SHA512, SIGN-ECDSA-SHA512, SIGN-ECDSA-SECP521R1-SHA512, SIGN-RSA-SHA256, SIGN-RSA-PSS-SHA256, SIGN-RSA-PSS-RSAE-SHA256, SIGN-ECDSA-SHA256, SIGN-ECDSA-SECP256R1-SHA256, SIGN-EdDSA-Ed25519, SIGN-EdDSA-Ed448
kEECDH+aECDSA:kEECDH+aRSA:kEDH+aRSA:-COMPLEMENTOFDEFAULT:-SSLv3:-TLSv1.0:-SHA256:-SHA384
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256
$ openssl ciphers -v kEECDH+aECDSA:kEECDH+aRSA:kEDH+aRSA:-COMPLEMENTOFDEFAULT:-SSLv3:-TLSv1.0:-SHA256:-SHA384 | column -t
TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD
DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD
From Daring Fireball:
Wrong-headed developers want to use [ ] like [ ] because they think it’ll save them time and resources, but if they want to do it right — and good [ ] is most certainly part of doing it right — they’re making things harder on themselves. What they should admit openly is that they don’t care about doing it right, and in many cases are trying to cover up for the fact that they don’t know how to do it right.
Lovely wisdom right there.
Only a few days after the DDoS attack on Greatfire.org began, Google and Mozilla posted about a certificate issued by China Internet Network Information Center (CNNIC) being used for a man-in-the-middle attack. Then, yesterday, the Large Scale DDoS Attack on GitHub began, attempting to take down Greatfire's accounts on GitHub.
All these attacks gave me more concern over cybersecurity, especially man-in-the-middle attacks from China. Thus, I created security-trust-settings-tools, a tool set that makes it really easy to blacklist certificates as untrusted for the user on OS X.
To blacklist all common Chinese SSL Certificates with my tools, simply try the OS X version of RevokeChinaCerts.
Winter has passed;
the willows sway and sway,
a blizzard of blossoms.
Bathed in moonlight,
one drinks sake alone.
Made by a developer, for developers, github:buttons takes a very different approach from other GitHub buttons services. It was designed with the flexibility to customize almost everything, including link, text, icon, and count. Unlike the widely used mdo/github-buttons, with the pixel-perfect ntkme/github-buttons you will never worry about iframe sizing and overflowing.
Twitter, Facebook, Google+… all button services from the big vendors are based on iframes. Why? Because an iframe is protected by the same-origin policy, which means it won't be affected by stylesheets or scripts in the parent window.
Say your website is hosted on domain A, and the buttons are hosted on domain B. What would happen to the button iframe? First of all, if the button iframe on domain B is embedded directly, there is no way to get its content size, because access to its content is blocked by the same-origin policy. That's exactly the problem mdo/github-buttons faces. What about rendering the button in an empty iframe? Not a bad idea. In that case, since the iframe does not have a src attribute, it is considered to be in the same origin as domain A. However, web fonts from domain B will be blocked in the iframe on domain A unless Cross-Origin Resource Sharing on domain B is properly configured.
So, the answer is to combine the two approaches.
The default size of an iframe is 300px by 150px, which can change the document flow significantly. Hence the idea of hiding an iframe until it is resized properly. visibility: hidden; simply does not work because the iframe still takes up space. display: none; is not an option because IE 11 does not render the iframe at all. The only way to do it correctly is setting width: 1px; height: 0;. With that, reading body.scrollWidth and body.scrollHeight should return the content size of the iframe.
That is not the end of the story. body.scrollWidth returns an integer rounded from the actual float value. On an @2x display, the size can be off by half a pixel, and when the number is rounded down, the iframe gets cut off. The solution is to use body.getBoundingClientRect(), which returns floats, and ceil the value to the closest physical pixel.
However, that is still not the end of the story. WebKit rounds down iframe sizes to the closest 1/2px on @3x displays, which means that in the worst case the iframe will be cut off by 1/3px. Thus, the ultimate solution is to round the float to the closest physical pixel, then ceil the value to the closest 1/2px, as illustrated by the following formula:
Math.ceil(Math.round(px * devicePixelRatio) / devicePixelRatio * 2) / 2 || 0
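For example, on an @3x display where the measured width is 20.4px, Math.round(20.4 * 3) / 3 gives 20.333…, and ceiling that to the closest 1/2px yields 20.5px, so the iframe is never cut off.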
The iframe onload event is not reliable because some browsers implement the onload event in a wrong way, especially when there are DOM changes during page loading. In my tests, IE 9 and Android Browser 2.x fire the onload event before dynamically injected scripts complete execution. Presto-based Opera has an even more interesting bug: with memory cache turned on, it will never fulfill a JavaScript request in an iframe that is dynamically injected before DOM ready, once the file is cached in memory. So, a polyfill for the iframe.onload event is required for those browsers.
Performance does matter.
So both the async and defer attributes are used on the script element. They ensure that the script will never slow down DOM loading. Also, to reduce requests, the scripts used in the parent window and the iframe are combined into one script, which lets the browser read it from cache.
GitHub's iconic web font is pretty awesome, but it brings a little bit of trouble. Most major browsers request a web font only after they find it used somewhere in the web page, and they load web fonts asynchronously. Sometimes the window.onload event fires before the web fonts are rendered, which leads to an incorrect iframe size.
In addition, there is no native load event for web fonts. The typekit/webfontloader may be a solution, but it still loads the web font asynchronously, so it doesn't help much. The web font loader also increases requests and slows down the whole process in this situation.
The only reliable choice left is pre-rendering, because it doesn't depend on the web font load event. I calculated all the icon sizes in em units, then generated a stylesheet containing their sizes.
This project does support old IE. It sounds crazy, doesn't it? Actually it is not too difficult, since most of the incompatibilities come from the Octicons.
IE 6 and 7 don't support the CSS pseudo-elements :before and :after, so I use a CSS expression hack like this one.
.octicon-mark-github { zoom: expression( this.innerHTML = '' ); }
IE 8 is worse than IE 6 and 7. Although IE 8 supports the :before used in Octicons, it still randomly uses the local font instead of the Octicons. This related question on Stack Overflow gives a solution: forcing IE 8 to redraw the Octicons.
Take a look at ntkme/github-buttons!
I created the shell script InstallESD.dmg.tool when I was trying to install OS X on OS X. However, every time Apple releases a major version of OS X, the structure of InstallESD.dmg changes. As a result, the complexity of the script kept increasing, and it is already over 350 lines. Now it's time to stop the crappy shell scripting.
Here comes the new iESD, written entirely in Ruby.
On POSIX and Unix-like operating systems, the $PATH is specified as a list of directories separated by :. It's easy to change the $PATH by hardcoding it. However, sometimes the $PATH needs to be modified programmatically, and the most difficult part is removing a path from the $PATH elegantly. Elegant here means a single-line command that is human-readable, portable, short, and fast. The common idea is to use the $IFS, but the $IFS is ugly. Instead, I am going to use sed.
I got this idea from jQuery’s .removeClass(). Thank you, jQuery.
Assume we need to remove /usr/local/bin from the original $PATH shown below.
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
First, prepend and append : to the $PATH.
:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:
Then, replace :/usr/local/bin: with :.
:/usr/bin:/bin:/usr/sbin:/sbin:
Now, remove the prepended and appended :.
/usr/bin:/bin:/usr/sbin:/sbin
Done!
PATH=`echo ":${PATH}:" | sed -e "s:\:/usr/local/bin\::\::g" -e "s/^://" -e "s/:$//"`
/usr/local/bin can be replaced with any path or any variable containing a path.
The echo command prepends and appends : to the $PATH.
The | pipes the output of echo to sed.
The first sed expression removes the path. In this command, : is used as the delimiter, because : is the only character reserved in the $PATH. As a result, the literal : in the patterns is escaped as \:. Although this has some impact on readability, the path to remove never needs to be escaped, which means you can use a command like sed -e "s:\:${path_to_remove}\::\::g" without worrying about escaping.
The remaining sed expressions remove the prepended and appended :.
If you are using Homebrew, you may like to add this to your shell startup files.
test -x /usr/local/bin/brew && export PATH=/usr/local/bin:`echo ":${PATH}:" | sed -e "s:\:/usr/local/bin\::\::g" -e "s/^://" -e "s/:$//"`