function Remove-User-Job {
    Param (
        [string]$Name = ""
    )
    # Nothing to do if no event name was supplied.
    if ([string]::IsNullOrEmpty($Name)) {
        return
    }
    # Remove the event subscription registered under this source identifier.
    Unregister-Event -SourceIdentifier $Name
}
from sympy.abc import x, y
from sympy.core.numbers import oo
from sympy.core.relational import Eq
from sympy.ntheory.factor_ import divisors
from sympy.ntheory.residue_ntheory import sqrt_mod
from sympy.polys.domains import FiniteField, QQ, RationalField
from sympy.solvers.solvers import solve
# is_sequence was removed from sympy.core.compatibility (SymPy 1.10+);
# it now lives in sympy.utilities.iterables.
from sympy.utilities.iterables import is_sequence
The official guide for setting up Kubernetes using kubeadm works well for clusters of a single architecture. The main problem that crops up is that the kube-proxy image defaults to the architecture of the master node (where kubeadm was run in the first place). This causes issues when arm nodes join the cluster: they will try to execute the amd64 version of kube-proxy, and will fail.
It turns out that the pod running kube-proxy is configured using a DaemonSet. With a small edit to the configuration, it's possible to create multiple DaemonSets, one for each architecture.
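A minimal sketch of the kind of edit involved, assuming the standard `kubernetes.io/arch` node label (older clusters used `beta.kubernetes.io/arch`). This only builds the strategic-merge patch body that pins a DaemonSet's pods to one architecture; applying it with `kubectl patch` is left as a comment:

```python
import json

def arch_patch(arch):
    """Build a strategic-merge patch that restricts a DaemonSet's pods
    to nodes of a single architecture via a nodeSelector."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {"kubernetes.io/arch": arch}
                }
            }
        }
    }

if __name__ == "__main__":
    # e.g. kubectl -n kube-system patch daemonset kube-proxy -p '<this JSON>'
    print(json.dumps(arch_patch("amd64")))
```

With one such DaemonSet per architecture (each using the matching kube-proxy image), every node only ever runs a binary it can execute.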
Follow the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for setting up the master node. I've been using Weave Net as the network plugin; it see
#!/usr/bin/env python3
# Elliptic Curve functions
# For educational purposes only.
#
# N. P. O'Donnell, 2020-2021
# Authors: Pete Jensen, Natasha Mandryk (math)

from math import inf
import random


class ECurve:
    def __init__(self, p, a, b):
        # Define basic variables: coefficients and prime p.
        # (y^2 mod p) = (x^3 + a*x + b) mod p
        self.__a = a  # The coefficient a
        self.__b = b  # The coefficient b
        self.__p = p  # The prime p
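As a quick sanity check of the curve equation above, here is a sketch of a point-membership test; `on_curve` is a hypothetical helper for illustration, not part of the gist:

```python
def on_curve(p, a, b, x, y):
    """Check whether (x, y) satisfies y^2 = x^3 + a*x + b (mod p)."""
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Curve y^2 = x^3 + 2x + 2 over GF(17); (5, 1) is a known point on it.
print(on_curve(17, 2, 2, 5, 1))  # expected: True
```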
Can you tell an LLM "don't hallucinate" and expect it to work? My gut reaction was "oh, this is so silly," but upon some reflection, it really isn't. There is actually no reason why it shouldn't work, especially if the model was preference-fine-tuned on instructions containing "don't hallucinate"; and if it is a recent commercial model, it likely was.
What does an LLM need in order to follow an instruction? It needs two things:
1. An ability to perform the task. Something in its parameters/mechanisms should be indicative of the task objective, in a way that can be influenced. (In our case, it should "know" when it hallucinates, and/or should be able to change or adapt its behavior to reduce the chance of hallucinations.)
2. An ability to ground the instruction: the model should be able to associate the requested behavior with those parameters/mechanisms. (In our case, the model should associate "don't hallucinate" with the behavior related to 1.)
error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
sudo apt-get install libnss3
error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
sudo apt-get install libxss1
#!/bin/bash
# SPDX-FileCopyrightText: © 2024 Ryan Carsten Schmidt <https://github.com/ryandesign>
# SPDX-License-Identifier: MIT
#
# Adds ext4 driver to OpenCore Legacy Patcher so that a Linux partition can be selected.
# Based on instructions by rikerjoe at:
# https://tinkerdifferent.com/threads/dual-boot-linux-mint-21-2-and-macos-with-opencore-legacy-patcher-and-opencore-bootpicker.3115/

set -u
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
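The relative multipliers in the right-hand column follow directly from the raw latencies; a minimal check of two of them:

```python
# Latencies from the table above, in nanoseconds.
L1_CACHE = 0.5
L2_CACHE = 7
MAIN_MEMORY = 100

print(L2_CACHE / L1_CACHE)     # 14.0  -> "14x L1 cache"
print(MAIN_MEMORY / L1_CACHE)  # 200.0 -> "200x L1 cache"
```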