@VivianBalakrishnan
VivianBalakrishnan / VB-NANOCLAW-MEMORY-OBSI-WIKI-PUBLIC.md
Created April 24, 2026 09:34
NanoClaw — Personal Claude Assistant (second brain for a diplomat)

NanoClaw — Personal Claude Assistant

A self-hosted, compounding-memory AI assistant running on a Raspberry Pi.


What Is This?

NanoClaw is a personal AI assistant built on Anthropic's Claude that runs entirely on a Raspberry Pi. It connects to messaging channels (WhatsApp, Telegram, Slack, Discord), processes voice and images, schedules recurring tasks, and — unlike a standard chatbot — accumulates knowledge over time through a structured memory system.
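The gist excerpt doesn't show how the memory system is implemented. As a minimal sketch of the "compounding memory" idea, assuming a hypothetical one-markdown-file-per-day layout (all names here are illustrative, not NanoClaw's actual code):

```python
from datetime import date
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical layout; the gist doesn't specify one

def remember(fact: str) -> None:
    """Append a fact to today's memory file so it survives across sessions."""
    MEMORY_DIR.mkdir(exist_ok=True)
    day_file = MEMORY_DIR / f"{date.today().isoformat()}.md"
    with day_file.open("a") as f:
        f.write(f"- {fact}\n")

def recall() -> str:
    """Concatenate all memory files into context for the next conversation."""
    return "\n".join(p.read_text() for p in sorted(MEMORY_DIR.glob("*.md")))
```

Each conversation writes back what it learned, so later sessions start with the accumulated notes instead of a blank slate.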


name: Windows Cloud PC - Anydesk (Optimized)
on: workflow_dispatch
jobs:
  build:
    name: Start Building...
    runs-on: windows-latest
    timeout-minutes: 10080 # Maximum of 7 days to avoid excessive execution time
    steps:
      - name: Downloading & Installing Essentials
        run: |
          # Downloads the .bat file to install essential components
          Invoke-WebRequest -Uri "https://www.dropbox.com/scl/fi/7eiczvgil84czu55dxep3/Downloads.bat?rlkey=wzdc1wxjsph2b7r0atplmdz3p&dl=1" -OutFile "Downloads.bat"
          # Executes the .bat script to install the components
          cmd /c Downloads.bat
      - name: Log In To AnyDesk
        run: |
          # Checks if the start.bat file exists before running
          if (Test-Path "start.bat") {
            cmd /c start.bat
          } else {
            Write-Host "Start.bat file not found. Check the configuration."
          }
      - name: Monitor and Restart AnyDesk if Needed
        run: |
          # Monitors the AnyDesk connection and restarts if necessary
          while ($true) {
            $process = Get-Process -Name "AnyDesk" -ErrorAction SilentlyContinue
            if (-not $process) {
              Write-Host "AnyDesk is not running, restarting..
@hellerbarde
hellerbarde / latency.markdown
Created May 31, 2012 13:16 — forked from jboner/latency.txt
Latency numbers every programmer should know

Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
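One way to internalize these numbers is to express every operation in units of an L1 cache reference. A quick Python sketch using the values above:

```python
# Latencies from the table above, in nanoseconds.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Mutex lock/unlock": 25,
    "Main memory reference": 100,
    "Compress 1K bytes with Zippy": 3_000,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "SSD random read": 150_000,
    "Read 1 MB sequentially from memory": 250_000,
}

L1 = LATENCIES_NS["L1 cache reference"]
for name, ns in LATENCIES_NS.items():
    # Print each latency as a multiple of one L1 cache reference.
    print(f"{name:38s} {ns / L1:>9,.0f}x L1")
```

For example, a single SSD random read costs as much as 300,000 L1 cache references.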

@CJHwong
CJHwong / README.md
Last active April 25, 2026 01:42
Claude Code prompt PII hook with local OPF int8 detector

Claude Code Prompt PII Hook

OpenAI's blog post: https://openai.com/index/introducing-openai-privacy-filter/

A local UserPromptSubmit hook for Claude Code that scans each prompt for PII with the int8 openai/privacy-filter ONNX model before the prompt is sent.

If PII is detected, the hook blocks submission and prints the detected spans. If the local detector is unavailable, the hook fails open and does not block the prompt. Blocked responses include the detected spans and OPF processing time.
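The gist's actual hook script isn't reproduced in this excerpt. As a minimal sketch of the described behavior (block on detection, fail open when the detector is unavailable), assuming a hypothetical `detect_pii` helper in place of the real ONNX model, and that for `UserPromptSubmit` hooks Claude Code passes the prompt as JSON on stdin and treats exit code 2 as a block:

```python
import json
import re
import sys

def detect_pii(text):
    """Stand-in for the int8 ONNX detector (hypothetical helper).

    Returns a list of (start, end, label) spans; may raise if unavailable.
    Here, a toy rule that flags anything shaped like an email address.
    """
    return [(m.start(), m.end(), "EMAIL")
            for m in re.finditer(r"\b\S+@\S+\.\S+\b", text)]

def run_hook(payload):
    """Return (exit_code, message): block on PII, fail open on detector error."""
    prompt = payload.get("prompt", "")
    try:
        spans = detect_pii(prompt)
    except Exception:
        return (0, "detector unavailable; failing open")
    if spans:
        found = ", ".join(prompt[s:e] for s, e, _ in spans)
        return (2, f"Blocked: possible PII detected ({found})")
    return (0, "")

def main():
    # Claude Code sends the hook payload as JSON on stdin; exit code 2
    # blocks the prompt and surfaces the stderr message to the user.
    code, message = run_hook(json.load(sys.stdin))
    if message:
        print(message, file=sys.stderr)
    sys.exit(code)
```

The real hook would replace `detect_pii` with an inference call against the local int8 model and include the OPF processing time in the blocked message.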

Included files

@onomatopellan
onomatopellan / waydroidwsl2.md
Last active April 25, 2026 01:42
Waydroid in WSL2 with sound (Weston on top of Weston approach)

Waydroid in WSL2 with sound

Requirements

It is recommended to install Waydroid on a brand-new Ubuntu 25.04 install. Waydroid needs a custom Linux kernel; in practice, the latest WSL2 kernel works with just these changes before compiling:

CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDER_DEVICES="binder,hwbinder,vndbinder"
@pilalouis
pilalouis / giftCardGenerator.py
Created July 21, 2020 18:47
A gift card generator. The script generates the specified number of gift card numbers for a particular company and stores them in a text file.
import random

gentype = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'

print("Hello To Multiple Gift Card Generator")
total = input("How Many Would You Like To Generate? ")
# Number to generate
number = int(total)
file = total + " Generated By Multiple Gift Card Generator.txt"
file2 = 'GiftCardsCodes.txt'

# The original snippet ends here; the loop below is one plausible completion,
# writing a 16-character code per line to both files (format is an assumption).
with open(file, 'w') as f1, open(file2, 'a') as f2:
    for _ in range(number):
        code = ''.join(random.choice(gentype) for _ in range(16))
        f1.write(code + '\n')
        f2.write(code + '\n')

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
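The wiki pattern inverts this: after each answer, the synthesis is written back so it can be read directly next time. A minimal sketch of that accumulation loop, assuming a hypothetical one-markdown-page-per-topic layout (all names are illustrative):

```python
from pathlib import Path

WIKI = Path("wiki")  # hypothetical: one markdown page per topic

def read_page(topic: str) -> str:
    """Load the accumulated knowledge for a topic, if any exists yet."""
    page = WIKI / f"{topic}.md"
    return page.read_text() if page.exists() else ""

def distill(topic: str, new_insight: str) -> None:
    """After answering a question, append the synthesis to the wiki page.

    Next time, the agent reads the accumulated page instead of
    re-deriving the answer from the raw source documents.
    """
    WIKI.mkdir(exist_ok=True)
    page = WIKI / f"{topic}.md"
    with page.open("a") as f:
        f.write(new_insight.rstrip() + "\n")
```

The five-document synthesis from the example above only has to happen once; afterwards it lives on the topic's page as a first-class fact.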

Docker Compose File

This will install Node.js and let you run Claude Code in a Docker container on Unraid, using the Compose Manager plugin.

Use at your own risk. This will give the AI access to whatever files you pass to the container (if using the filesystem MCP).
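The compose file itself is not included in this excerpt. A minimal sketch of what it could look like, assuming the official Node.js image and the `@anthropic-ai/claude-code` npm package (the volume path and service names are illustrative, not the gist's actual file):

```yaml
services:
  claude-code:
    image: node:22-bookworm        # assumption: any recent Node.js LTS image
    container_name: claude-code
    stdin_open: true               # keep an interactive terminal available
    tty: true
    volumes:
      - /mnt/user/projects:/workspace   # hypothetical Unraid share; mount only what you trust
    working_dir: /workspace
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    command: sh -c "npm install -g @anthropic-ai/claude-code && claude"
```

Restrict the mounted volume to the project directories you actually want the AI to see.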

@codigoconjuan
codigoconjuan / terminal
Last active April 25, 2026 01:28
Drizzle ORM installation
npm i drizzle-orm@beta pg dotenv
npm i -D drizzle-kit@beta @types/pg tsx