For the past month I’ve been transitioning to NixOS, both my work laptop and my personal desktop.
It’s been an amazing voyage. The learning curve is steep for the first 1-2 days, until you grasp the whole idea, but then that’s mostly it.
The language syntax is a bit weird and ugly but it really is quite simple - and I say that with 0 background in math or functional languages.
You quickly learn that “RTFM” is not a thing in Nix, since there’s no real manual. The tools at your disposal are grep.app, to see what others did (though the frequent changes in how things work can mislead you), and the NixOS forums, which are amazing.
The question that arises before you even consider scrapping what you’ve been building for the past N years is “why? yea all the hipsters use it but WHY?”
I was very resistant too, since every month or so something new and shiny comes up that nobody asked for, everyone around you loses their mind about it, and 2 weeks later it’s abandoned.
welp, I have 2 extremely solid reasons: reproducibility and stability
The idea in NixOS is that your whole OS is split into configurable and non-configurable parts. The configurable part is managed by the `configuration.nix` file. The non-configurable part is mostly files under your home directory, like browser session files and other stuff that doesn’t make sense to configure statically.
Stability comes from the fact that after you change your `configuration.nix` you have to `rebuild` and then `switch` to the new generation for it to take effect. Every time you `rebuild`, a new “version” of your whole OS (only the managed part, the rest remains intact) gets created, and `switch`ing to it activates it.
That means that you’re able to do something magic: rollbacks. Did you break your fstab? Boot to the previous working generation and you’re good to go! Did the latest update break your browser? Rollback! The boot manager is configured to give you a choice of the last N generations, so you just pick what you want - THAT easy, no weird arcane magic.
That leads us to the second amazing feature: reproducibility.
I know that everyone talks about it when Nix comes into the discussion, but it does actually affect you - it’s not just a “good principle”.
Imagine being sure that your hacky script that logs you into your machine with port knocking works EVERY time - and when I say every time, I mean it. If the module (aka piece of nix code) that configures it compiles, it WILL work. Just formatted? it works. Moved from i3 to Plasma 6 and btrfs? it works.
It really empowers you and makes it much more pleasant to optimize small aspects of your desktop, since you know that they’ll be there for quite a long time and are much harder to randomly break.
So that brings us to this post’s title: dotfile golfing.
The dotfiles now hold a much greater power. It’s super easy to set up and keep 2 machines in sync. Just define some host-specific quirks for each machine (e.g. a desktop doesn’t need `laptop-tools`) and you’re good to go.
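To make that concrete, here’s a minimal sketch of what a per-host module could look like - the file layout, hostname and the `tlp` example are hypothetical, not taken from my actual config:

```nix
# hosts/desktop.nix - a hypothetical per-host module
{ pkgs, ... }: {
  imports = [ ../common.nix ];  # everything both machines share

  networking.hostName = "desktop";

  # a desktop doesn't need laptop-tools:
  # no battery management daemon here
  services.tlp.enable = false;
}
```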
That has led all the people with weird setups (⇒ people that have a `dotfiles` repo) to have a much, MUCH better experience. They don’t have to keep track of what they install and how, so that they can set up the whole thing again.
Sets of dotfile hacks are now their own “packages” (flakes in the nix world) that anyone can use - stylix for example, which tries to do the impossible: make the theming/styling consistent across the whole OS, from NeoVim to GTK to Plymouth, all with the same colors and wallpapers.
You also get the following amazing features as a bonus:
Formatting has become way, WAY easier. After setting up the partitions on a new machine, `nixos-install --flake github.com:dzervas/dotfiles` will set up everything. The next reboot will have everything ready for me - yes, even that port knocking script. Honestly, the biggest hurdle when setting up a new machine after your nix config has stabilized is logging in to every service & website you use.
You can even create an ISO with your whole config ready to go with GitHub Actions (as I did here), so you can work on a new machine without even formatting. Boot the ISO and do whatever needs doing. My work laptop is now officially disposable!
My nix config: dotfiles
This is one of those weird ones where I try to do something batshit crazy and end up inevitably failing. It was almost too clear that this was gonna be the path, but I needed something to hyperfocus on to have some stress-free moments.
This is how I tried to use only the engine/rendering canvas of Fusion and re-implement the whole GUI around it from scratch (toolbars, browser, history bar, etc.).
Ok so let’s take it from the top. I love Fusion’s UX. It’s amazing. Everything is where it should be and stuff just works as you expect it to. It does some weird fuckery when you try to direct model (organic handles, mouse shells, etc.) but that’s not the point of the software anyway. And almost nobody can rival it in my eyes. I’ve tried Solidworks, OnShape, BricsCAD, FreeCAD, Solid Edge, SolveSpace, Rhino 3D with Grasshopper. I’ve tried it all. I’ve seen it all. I’m disappointed.
“Awesome, where’s the problem”, I can hear you say.
Linux. The problem is Linux. Running Fusion on Linux is… shitty. While cryinkfly has done amazing work on the matter, it’s not good enough for me. Crashes, glitches, overlaid artifacts across all workspaces - in general it makes the app quite cumbersome. It kills the UX.
So there were 3 solutions to that problem:
Before I let you know my evil plan (evil according to Autodesk, I imagine), let’s talk a bit about the guts of Fusion.
Fusion is written completely in Qt 5, start to finish. At the time of writing (25 March 2023) it uses Qt 5.15.2, the upstream LTS version.
They use MANY of Qt’s features and every component is written in a different “Qt manner”. Is the whole app a Qt demonstration? I guess we’ll never know…
- The main `QGuiApplication` window
- `QWidgets`: buttons, labels, icons, tooltips and the rest
- `QtWebEngine`s with a custom CEF that gives access to the JS→C++ bridge. That bridge is called `FermontJS` and talks with the `Neuron` engine BTW
- The `Shell`, as Autodesk calls it (the main gray window that 3D stuff appears in), is something… weird. Something very weird. It’s a class inside `NuBase10.dll` that is called `Nu::GraphicsCanvas`
The last bit is what we’re looking for. The `QtWidgets` parts can be re-written. The toolbar is dynamically generated based on some XMLs, the design history is by definition programmatically populated, HTML/CSS/JS components are just in the installation folder for anyone to grab, `FermontJS` is just a node script and `Neuron` can be run in its own process (I think???). Not easy to do all that but certainly doable. The hard part would be that damned `GraphicsCanvas`. It has many other names, `Canvas`, `QtCanvas` (sub/super classes), but it’s the same damned thing. If I could make an instance of it in its own window, it would prove that I can actually run the part of Fusion that handles the CAD engine and renders what the user sees.
The plan for the “final form” of the project was:
First of all, the `GraphicsCanvas` ain’t “just a `QtObject`”. At all. It’s too intertwined with commands, threads, events, `Shell`s, `Workspace`s, etc. for me to figure out. I just don’t get it. Couldn’t they just wrap a `QOpenGLWidget` or a `QSurface` and be done with it? Apparently not…
While they DID do both of the above, they also did sooooooo much more beyond that that my reversing skills started laughing hysterically - and not the good kind of laugh, the “oh boy what the actual fuck is this” kind of laugh.
Even so, I thought I’d start throwing shit at the wall till something sticks. I wanted to see the canvas gone, or glitched, or (praying) in a separate window, so I whipped out frida.
But frida seems to be as confused as me when dealing with the C++ MSVC ABI and object instances on Windows. I just tried calling `QWindow::QWindow(nullptr)` from `Qt5Gui.dll` and after that calling `QWindow::show()`. Frida wouldn’t have me do my shit. Frida was done with me.
This is the point where I give up. This might seem like a small post but it took me a good 6 months to learn what I did. Did you know there’s a “Command Panel” in Fusion? Did you know that there’s a github repo just describing the commands in it? Did you know that you can dump the Qt tree of Fusion while it’s running?
Useful Fusion commands (opened by File > View > Show Text Commands (Ctrl-Alt-C), all commands are Txt):

- `TextCommands.List/hidden` - Show all text commands
- `Options.DebugEnvironment/show` - Show a debug environment, didn’t see something interesting
- `Options.showAllCommands/on` - ???
- `Options.showAllOptions` - Make MANY preferences visible
- `Toolkit.DumpQt` - Dump Qt object info. [/styles] [/class] [/rect]
I also got to learn about C++ RTTI and vftables, and that nothing can be compared to the hatred that I have for that FUCKING LANGUAGE THAT HELL GIFTED TO US.
REALLY. WHAT THE FUCK IS WRONG WITH YOU PEOPLE.
If someone is able to make a PoC that shows that indeed a Canvas can be drawn on its own (with some boilerplate), I’ll be more than happy to open this can of worms.
Also, I re-implemented Fusion’s installer in Python, one that also knows about Fusion’s build versions - check it out.
C ya
As the years have gone by, it seems that cracking software has become more and more synonymous with “malware”. This world no longer knows how to operate for the common good: selfless moves that would give access to people who can’t bear the stupid “entry price” have been overshadowed by moves that replace that “entry price” with remote access instead of money. It pains me and makes me sad, but at least I can share some aspect of it, as I’d rather not go to jail.
For some reason I’m not completely sure of, I’ve picked up the hobby of cracking niche software’s licensing mechanisms. Maybe it’s because I know for a fact that they can be cracked - something that isn’t a given for hacking cloud-based software.
This is a research post. I don’t use or share my cracks. I do it for fun. Please don’t hurt me.
My targets are exclusively Windows programs, but I’m sure a lot of the described techniques apply to any OS.
Each time I start cracking a program, I’ve got to set a clear target. Most times it’s “a permanent license that gives me access to everything”, but that requires knowing the following:
I’d also stop as soon as I found that the app has anti-reversing or serious obfuscation. I’d really like to have fun and not spend the better part of a year on a single app. I’m not a good reverser anyway and I don’t want to be - these people are scary.
Another part of my cracking adventure is developing the “perfect crack”. The one that tinkers with the app the least and allows me to maintain it across versions. I don’t want to just patch a DLL. It’s dirty. We’ll see better techniques further down.
Play around a bit with the software and see what it does. Identify useful strings (like `Trial` or `Expires`) inside the app so you’ve got stuff to search for.
How does it behave if you give it a wrong serial number? If you disable the internet? Where does it store license stuff? Maybe in the registry? Are there any interesting keys in there? If the license is stored in a file, poke it. Is it encrypted? Signed? Does it have a checksum?
At this point there are no wrong directions. Poke the program and start building confidence about the app - what it means for it to run correctly. You should crash the app at least once. Don’t be afraid, you can just re-install it.
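One cheap trick for the “is it encrypted?” question is byte entropy: encrypted or compressed data looks uniformly random, while a plaintext license doesn’t. A small sketch (the license blob below is made up for illustration):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 smells encrypted/compressed, plaintext sits much lower."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in Counter(data).values())

# Made-up license blob vs. random bytes standing in for an encrypted one
plain = b"LICENSE-KEY=ABCD-1234;EXPIRES=22-dec-2032;" * 20
encrypted_ish = os.urandom(len(plain))

print(f"plaintext-ish: {shannon_entropy(plain):.2f} bits/byte")
print(f"encrypted-ish: {shannon_entropy(encrypted_ish):.2f} bits/byte")
```

Anything hovering near 8 bits/byte is probably encrypted or compressed; a signed-but-plaintext license file will sit much lower.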
Fire up Burp and pass the whole VM traffic through it (you’re using a VM right? RIIIIGHT???)
Identify the URLs and check for SSL pinning. Then install Burp’s root cert in the VM and check again.
If it’s SSL pinned bypassing it shouldn’t be that hard (we’ll see some examples later)
Right now you’re looking for the following:
In my experience, following this route has not been fruitful. In all such encounters the request and response are signed and sometimes even encrypted. But the most problematic aspect is identifying the structure that the app expects. Instead of `22-dec-2022` as the expiration date, you try `22-dec-2032` - after bypassing the signature check, of course. But for some reason it doesn’t work - is the data encoded elsewhere as well? Maybe if you change `trial` to `premium`? Or to `ultimate`? Why are there both strings inside the app? Is it case sensitive?
These might seem like easy problems but, let me tell you, they’re definitely not. Compilers have gone mental with optimization, and understanding a C++ object through reversing is its own mountain. How would you know that `ultimate` needs `is_network_license` set to false - while `trial` does not?
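To see why the naive date swap fails, here’s a toy model of a signed license response - the field names and the HMAC scheme are invented for illustration; real apps often use asymmetric signatures, which is exactly why you end up attacking the verify step instead of the data:

```python
import hashlib
import hmac
import json

SECRET = b"key-baked-into-the-binary"  # hypothetical shared secret

def sign(payload: dict) -> dict:
    """What a hypothetical license server returns: body plus a signature over it."""
    body = json.dumps(payload, sort_keys=True).encode()
    return {"body": payload, "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest()}

def verify(msg: dict) -> bool:
    """What the app does before trusting the response."""
    body = json.dumps(msg["body"], sort_keys=True).encode()
    return hmac.compare_digest(msg["sig"], hmac.new(SECRET, body, hashlib.sha256).hexdigest())

reply = sign({"plan": "trial", "expires": "22-dec-2022"})
print(verify(reply))   # True: untouched response passes

# Tamper in transit, Burp-style: the date changes, the signature doesn't
reply["body"]["expires"] = "22-dec-2032"
print(verify(reply))   # False: the app rejects the edited response
```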
From all the programs that I’ve cracked, I’ve never found one that does a “simple enough” network call to check the license. They all use some kind of licensing solution that has signatures and encryptions n stuff that make network-based cracking as hard as “regular” cracking (hooking/patching n stuff).
So now what?
Let me introduce you to the amazing world of: 🌈frida🌈
It’s a stupidly powerful dynamic instrumentation framework, mainly targeting mobile apps but it works great on all Desktop OSes as well. Think of it like Inspect Element for native apps.
It works by injecting a JavaScript engine inside the target process. That essentially allows you to hook into, or even replace, native functions with JavaScript code. Here, for example, we hook the debugging output of the target app:
```javascript
let OutputDebugStringW_export = Module.getExportByName("kernel32.dll", "OutputDebugStringW");

let OutputDebugStringW_new = new NativeCallback((str) => {
    // Redefine the code executed - yeap, plain javascript.
    // `str` is gonna be a pointer, as we tell further down.
    // As the return type is void, we don't have to return anything.
    // Since this is a pointer, we can treat it in any way we like.
    // Here we read it as Utf16 - part of the frida API.
    console.log(str.readUtf16String());
  },
  "void",     // Return type
  ["pointer"] // Argument list. They will be passed to the above function as regular arguments
);

// Install the replacement over the real export
Interceptor.replace(OutputDebugStringW_export, OutputDebugStringW_new);
```
The above can then be run as follows:
```
frida.exe -f MyAwesomeApp.exe -l HookDebugStringW.js
```
This will spawn the app, but you can also hook onto an already running process with the `-n` flag instead of `-f`. After that you’re thrown into an interactive JS shell where you can enter commands or change the script, which is automatically re-applied once it changes on disk. I can’t possibly overstate how powerful this tool is. And it doesn’t stop there!
The exact reason why you would hook a debug output is not the point. That could be an exported symbol of the program or even one of its DLLs. But what if you just wanna search for a bunch of function names and see if they get called in the specific flow that you’re researching? Enter `frida-trace`.
Instead of writing hooks like the above over and over, that tool does most of the job for you and can also pattern match function names. Almost my first command when I start the reversing phase is the following:
```
frida-trace.exe -n MyAwesomeApp.exe -i 'MyAwesomeApp.exe!*License*'
```
I run that right before I click some kind of license-checking button, after I’ve entered the license string, and I check to see if any function with a name matching the pattern `*License*` (note: case sensitive) inside the module `MyAwesomeApp.exe` (note: case sensitive as well - many times the module name is in a different case than the file; use `Process.enumerateModulesSync()` inside the frida shell) fires up. I’m limiting the search to that module to avoid hooking thousands of functions - which is a very good recipe for an instant crash. You can either make the pattern a bit more targeted and remove the module part, or change the module to a spicily named DLL. You can also hook all the functions of a module with `-I liblicense_of_MyAwesomeApp.dll` but again, if the exports are too many it’ll crash.
At this point, for the not-so-experienced crackers, I should note that this whole time I’m talking about functions that have exported symbols. If the app has stripped the function names and the function that you’re aiming for is called from inside the module, the aforementioned technique will bear no fruit, as there won’t be any functions found to hook. The Windows-native functions though (`user32.dll` or `kernel32.dll` for example) will always work, as those DLLs have well-known exports. It’s a very accurate way of finding out the environment variables that the app accepts, the WMI queries that it does, the registry keys that it uses and maybe even some crypto stuff that it uses to check the license.
I don’t know what you’re expecting here, but I’m not a good reverser. At all. Fire up ghidra, start looking for strings and work back from there. I’ve got a one-liner though, to find spicy DLLs fast:
```
fd . ~/.wine/drive_c/Program\ Files/MyAwesomeApp -H -t f -x sh -c 'strings -a -e l "{}" | rg -i "license" && echo -e "\t> {}"'
```
What this does is run `strings` on all the files recursively under `~/.wine/...`, but with a twist: the `-e l` flag. This makes all the difference. You see, Windows likes 16-bit little-endian characters. But not always. Maybe big endian (`-e b`) or maybe regular ASCII (no `-e` flag at all). This note took me a week to figure out. Cheers.
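If you’d rather not memorize the `strings` flags, the same UTF-16LE trick is a couple of lines of Python - a rough sketch, not a full `strings` replacement:

```python
import re

def strings_utf16le(data: bytes, min_len: int = 4):
    """Rough equivalent of `strings -e l`: printable ASCII stored as
    16-bit little-endian characters (each byte followed by a NUL)."""
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]

blob = b"\x01\x02" + "Trial License Expired".encode("utf-16-le") + b"\xff" + b"plain ascii"
print(strings_utf16le(blob))  # ['Trial License Expired']
```

Run it over a whole DLL (`open(path, "rb").read()`) and grep the result list for `license`, same as the one-liner above.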
While it’s very tempting, don’t invest much time in finding “a single function that, if it returns true, everything is super-premium-ultimate-version”. Nowadays everyone loves object orientation and, more often than not, a “subtle” change could require huge changes in the object that represents the license. I have however stumbled upon such marvelously written software!
It’s one of the most widely used pieces of software in its market and: IT’S CLOUD BASED. Yeap. It’s mostly an electron app that loads remote content, and I just didn’t even try to crack it for months. “It should only load the code that is required for my specific license”, I thought. NOP! It had an `isUltimate` function that, when hooked to return `true`, magically made me Ultimate.
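In an Electron app that hook doesn’t even need frida - the license check is plain JavaScript you can override at runtime. A toy model (the object and method names are invented; finding the real one in the app’s renderer code is the actual work):

```javascript
// Hypothetical sketch of a renderer-side license object
const license = {
  plan: "trial",
  isUltimate() { return this.plan === "ultimate"; },
};

console.log(license.isUltimate()); // false: still a trial

// The "hook": replace the method so every caller sees an ultimate license
license.isUltimate = () => true;

console.log(license.isUltimate()); // true: magically Ultimate
```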
Most of the other software though wasn’t that nice. Even apps that share just a tiny fraction of the market used some kind of licensing solution that, as said before, adds some difficulty - stripped symbols, encryption, signing, public/private keys and sometimes even statically compiled crypto functions.
I think that key generators are the epitome of art in terms of cracking. It’s so slick and non-intrusive, sometimes quite hard to counter from the developer’s perspective, and so many times it’s resilient to updates. But it’s hard. Very hard.
I’ve stumbled upon a C# app that I cracked using a keygen. I found a “magic license key” that put it in offline mode, so that it accepts license keys that are checked using some math. But it used some archaic Windows hashing function that was a pain to re-implement and required some very weird math. It was also hidden in plain sight - the function `CheckLicense` was never called (and it also had some even weirder math that took me 3 days to understand and that makes no actual sense), while the actual function was named something like `CalculateOrbitalTrajectory`. The only way I could find that was through dynamic instrumentation.
On another app I cracked the DSA-512 public key that it used to verify the license signature. I had already cracked the app through hooking, but I wanted to completely own it, so I cracked the key - I never got to use it though, as it needed some weird transformations and I got bored. Again, the structuring of data is a huge roadblock. Here’s how I cracked a DSA-512 public key in 2 days, though (there are MUCH better and faster ways to do it, but that’s the only way that worked for me - also, I’m bad at cryptanalysis):
```
# All the key data have been changed
# Extract the PEM public key from inside the binary
$ openssl dsa -pubin -in anotherAwesomeApp_public_key.pem -noout -modulus
read DSA key
Public Key=2A0ABA86F22281B123F33D9E073AC921C0F2BCB0114C07F632129B64C3CA4181D84C998C2556DC69CB30E0D6B7CB761274AAFC6834FE74D6721E6EA6BCD68DEA

# Hex to decimal
$ echo "ibase=16;2A0ABA86F22281B123F33D9E073AC921C0F2BCB0114C07F632129B64C3CA4181D84C998C2556DC69CB30E0D6B7CB761274AAFC6834FE74D6721E6EA6BCD68DEA" | bc
22019134240820916317814169763607118015464182266127018258054642617293\
88614872096293260765941335270405591230469600688971077042627869124706\
949973008545385962

# And then run cado-nfs (http://cado-nfs.gforge.inria.fr/) through docker to
# factor the number (steps from https://www.doyler.net/security-not-included/cracking-256-bit-rsa-keys):
$ docker run -d --name anotherAwesomeApp_public cyrilbouvier/cado-nfs.py 2201913424082091631781416976360711801546418226612701825805464261729388614872096293260765941335270405591230469600688971077042627869124706949973008545385962
$ docker logs -f anotherAwesomeApp_public
Unable to find image 'cyrilbouvier/cado-nfs.py:latest' locally
latest: Pulling from cyrilbouvier/cado-nfs.py
43c265008fae: Pull complete
50baea060b67: Pull complete
5f3e0aed5ee6: Pull complete
80c73fc9483b: Pull complete
Digest: sha256:83513a532bc3cfc09ddc44e9c12b9283ace37736fed29f6259cb2b98a1342ab3
Status: Downloaded newer image for cyrilbouvier/cado-nfs.py:latest
Info:root: Using default parameter file /cado-nfs/share/cado-nfs-2.2.1/factor/params.c155
Info:root: No database exists yet
Info:root: Created temporary directory /tmp/cado.gcsugjj0
Info:Database: Opened connection to database /tmp/cado.gcsugjj0/c155.db
Info:root: Set tasks.threads=6 based on detected physical cpus
Info:root: tasks.polyselect.threads=2
Info:root: tasks.sieve.las.threads=2
Info:root: slaves.scriptpath is /cado-nfs/bin
Info:root: Command line parameters: /cado-nfs/bin/cado-nfs.py 2201913424082091631781416976360711801546418226612701825805464261729388614872096293260765941335270405591230469600688971077042627869124706949973008545385962
Info:root: If this computation gets interrupted, it can be resumed with /cado-nfs/bin/cado-nfs.py /tmp/cado.gcsugjj0/c155.parameters_snapshot.0
Info:Server Launcher: Adding e13824a36734 to whitelist to allow clients on localhost to connect
Info:HTTP server: Using non-threaded HTTPS server
Info:HTTP server: Using whitelist: localhost,e13824a36734
Info:Complete Factorization: Factoring 2201913424082091631781416976360711801546418226612701825805464261729388614872096293260765941335270405591230469600688971077042627869124706949973008545385962
Info:HTTP server: serving at https://e13824a36734:41869 (0.0.0.0)
Info:HTTP server: For debugging purposes, the URL above can be accessed if the server.only_registered=False parameter is added
Info:HTTP server: You can start additional cado-nfs-client.py scripts with parameters: --server=https://e13824a36734:41869 --certsha1=313aa0820967f6db061e8fc9cbf2bde7ecdacab5
Info:HTTP server: If you want to start additional clients, remember to add their hosts to server.whitelist
Info:Client Launcher: Starting client id localhost on host localhost
Info:Client Launcher: Starting client id localhost+2 on host localhost
Info:Client Launcher: Starting client id localhost+3 on host localhost
Info:Client Launcher: Running clients: localhost+3 (Host localhost, PID 16), localhost+2 (Host localhost, PID 14), localhost (Host localhost, PID 12)
Info:Polynomial Selection (size optimized): Starting
Info:Polynomial Selection (size optimized): 0 polynomials in queue from previous run
Info:Polynomial Selection (size optimized): Adding workunit c155_polyselect1_0-1000 to database
Info:Polynomial Selection (size optimized): Adding workunit c155_polyselect1_1000-2000 to database
Info:Polynomial Selection (size optimized): Adding workunit c155_polyselect1_2000-3000 to database
...
Info:Square Root: Starting
Info:Square Root: Creating file of (a,b) values
Info:Square Root: finished
Info:Square Root: Factors: 279 104569920747<hidden>81898074331
Info:Square Root: Total cpu/real time for sqrt: 0.02/0.0144372
Info:Polynomial Selection (size optimized): Aggregate statistics:
Info:Polynomial Selection (size optimized): potential collisions: 71168.8
Info:Polynomial Selection (size optimized): raw lognorm (nr/min/av/max/std): 72294/45.370/55.413/60.780/0.874
Info:Polynomial Selection (size optimized): optimized lognorm (nr/min/av/max/std): 67782/45.250/50.101/56.280/1.689
Info:Polynomial Selection (size optimized): 10 best raw logmu:
Info:Polynomial Selection (size optimized): 10 best opt logmu:
Info:Polynomial Selection (size optimized): Total time: 49493.9
Info:Polynomial Selection (root optimized): Aggregate statistics:
Info:Polynomial Selection (root optimized): Total time: 4050.98
Info:Polynomial Selection (root optimized): Rootsieve time: 4050.39
Info:Generate Factor Base: Total cpu/real time for makefb: 21.54/5.23901
Info:Generate Free Relations: Total cpu/real time for freerel: 271.45/44.1077
Info:Lattice Sieving: Aggregate statistics:
Info:Lattice Sieving: Total number of relations: 48074999
Info:Lattice Sieving: Average J: 7752.25 for 1680511 special-q, max bucket fill: 0.732933
Info:Lattice Sieving: Total CPU time: 2.90978e+06s
Info:Filtering - Duplicate Removal, splitting pass: Total cpu/real time for dup1: 104.66/78.7711
Info:Filtering - Duplicate Removal, splitting pass: Aggregate statistics:
Info:Filtering - Duplicate Removal, splitting pass: CPU time for dup1: 78.7s
Info:Filtering - Duplicate Removal, removal pass: Total cpu/real time for dup2: 641.15/153.957
Info:Filtering - Singleton removal: Total cpu/real time for purge: 427.7/143.297
Info:Filtering - Merging: Total cpu/real time for merge: 631.74/620.553
Info:Filtering - Merging: Total cpu/real time for replay: 103.78/94.0045
Info:Linear Algebra: Total cpu/real time for bwc: 160799/0.000371933
Info:Linear Algebra: Aggregate statistics:
Info:Linear Algebra: Krylov: WCT time 18123.13
Info:Linear Algebra: Lingen CPU time 379.27, WCT time 81.51
Info:Linear Algebra: Mksol: WCT time 9798.28
Info:Quadratic Characters: Total cpu/real time for characters: 78.49/28.6099
Info:Square Root: Total cpu/real time for sqrt: 0.02/0.0144372
Info:HTTP server: Shutting down HTTP server
Info:Complete Factorization: Total cpu/elapsed time for entire factorization: 3.12641e+06/720361
Info:root: Cleaning up computation data in /tmp/cado.06lc2ugg
279 104569920747<hidden>81898074331
# Unfortunately that's as far as I got :)
```
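For reference, the `bc` hex-to-decimal step is a one-liner in Python too (using the same changed key data as above):

```python
# The DSA public key modulus as extracted by openssl (key data changed)
modulus_hex = (
    "2A0ABA86F22281B123F33D9E073AC921C0F2BCB0114C07F632129B64C3CA4181"
    "D84C998C2556DC69CB30E0D6B7CB761274AAFC6834FE74D6721E6EA6BCD68DEA"
)
modulus = int(modulus_hex, 16)  # the number that gets fed to cado-nfs
print(modulus)
```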
One word: dnSpy. Again, a gift given by the gods. Unfortunately frida won’t work with the C# runtime, but dnSpy has your back. It fully decompiles the app and is an amazing debugger.
The exact same principles as above apply, but the problem is that the resulting crack can’t be a javascript file, and this raises a problem that I’ve been obsessing about for the past two weeks: crack deployment.
As I said in the prologue, I like clean cracks that are transparent to the user (can inspect them easily) and are able to be maintained for future proofing. JavaScript is an amazing solution to the above problems (although it has some others of its own) but as frida isn’t available for C#, we can’t use it. Of course we could patch the binary with dnSpy but patched binaries just don’t cut it for me. They’re dirty. We’ll talk about this problem later.
Well, here we’ve got a problem. I was stunned to find out that the main target of frida, the Java VM, is actually the mobile Java VM. Frida-ing “regular” Java, running on Windows and Linux, won’t do it. It spits out some errors about some not-found classes (that contain the name `Zygote`, which is an Android-y name) and doesn’t work. I was heartbroken and at that point I didn’t go any further.
Of course there’s JadX for reversing, but it doesn’t offer a debugger for desktop apps either. Why everyone forgets that Java runs on desktops is beyond me.
Maybe I’ll come around it and find good tooling around Java, who knows. If you happen to know good tools, leave a comment below
As we’re reaching the end of this post (yes, it has one, even if it doesn’t seem like it) I’d like to close with the problem I’ve been burdened with for the last two weeks. Let’s say I’ve written some good frida scripts that do their job and I’ve patched a C# DLL as a PoC. Now what? I don’t want to have to run the app through frida or have a statically patched binary! That’s dirty! When the app updates, I’ll have problems. It breaks the PE’s signature, it’s hard to replicate, it’s hard to explain and it’s not transparent. I’ve found the following solutions till now - unfortunately without a good implementation (yet?):

- `MyApp.exe` loads `user32.dll`, into which frida-gadget or a nicely written Rust DLL can be injected. I’ve not seen a tool like this but I’d like to develop one.
- Adding a `libcrack.dll` import that does all the hooking job. Simple and clean - with an asterisk though, as that change breaks the PE’s signature, something that I’ve got no idea how big of a problem is. I’ve tried using the LIEF python library but I wasn’t able to run the exe successfully after injecting the frida-gadget library - I’ve even opened an issue about it.

I still don’t get why companies charge such a stupid amount of money for their software when we’re talking about hobbyist clients. It’s a win-win. You’ll never make a 50k$/year sale to that person, but if you give it to them for free, it’s almost certain that they’re gonna root for you and advertise you. For free. Also, if they land a job in the market you’re in, there’s a pretty good chance that they’ll push the company to buy your software - even for 50k/year.
Maybe I’m too naive. Maybe I “don’t get business”. Truth is, I’m not good at that “capitalism” game.
On the other hand, I tend to draw a line on the apps that I crack. There is some amazing software that costs something very reasonable, gets regular updates, has good communities n stuff. I like to support them if I can.
In any case, cracking is not that hard, it just requires time. You get to dead ends quite often but it’s not hard to understand what you see. Most times, when I’m stuck I start from a new lead. Eventually everything falls together. Give or take, a month is enough to crack “most” programs (I’m not talking about apps around computer security, such as IDA Pro. I’ve heard it’s a HUGE undertaking).
I’ve been searching for a way to write blog posts through a beautiful, mobile-friendly interface for almost two years.
Notion was always a very good answer, but the code required to make it work was always holding me back. But I finally did it: this blog’s content is now fully hosted in Notion.
There are some UIs built specifically for git-enabled static sites, like forestry.io, but all of them lack support for custom pieces of markdown (also known as shortcodes in the hugo world). This makes the actual blog posting experience harder, as your general view of the content is quite different from what the user will consume.
There are also blog engines such as Wordpress or ghost.org, but I didn’t want to manage yet another service, give my data to yet another company or pay yet another subscription.
💡 Ghost is probably a very good solution, as it’s something between a CMS and a static site generator. I didn’t go that way though.
The last solution was to use Notion.so as a content editor and generate the actual pages with Hugo through a GitHub Action, as I already do. I already use Notion, Hugo and GitHub separately, so no new company or technology was needed to get in the way. Just a little bit of python glue to make them kiss.
There are two things that needed to be bridged:
The only thing missing right now (for my needs at least) is gallery support, as it’s a database on its own.
I’m sure there are a lot of blocks missing but I just don’t use them (mainly weird collection views and external blocks). If you’re missing something, open an issue on GitHub or, even better, open an MR.
I decided to use the API that notion uses to render its front end in the web interface, as otherwise I’d have to use the extension API and hassle with secret keys etc. I just hope that the front-end API doesn’t change without notice all the time.
The current state is very pleasant to me and I’m very proud of what it achieves. Maybe in some time I’ll implement the following too, as they would be nice to have:
For the past 5 years I’ve been obsessed with finding a super quick way to make hobby-grade PCBs at home. The race I was looking to win was “I don’t want to wait 3 weeks, able to do nothing, after I remembered that I2C needs in-series resistors”. I want to get my board in my hands in about an hour without doing much.
And I found the way, but most importantly, I found the workflow. Let me show you!
First of all, in case you missed the title, my solution is CNC milling. The upfront cost is quite budget friendly (less than 200 euros), no nasty chemicals are involved and you can safely toss that awfully bad “PCB drill” out the window. The process takes about 30’, of which 5 are manual labor (preparing gcode and toolchanging).
While not on the “simple” side, the tools that are required are cheap and can sit on your workbench.
You’re gonna need the following software setup and working:
The whole idea is the following:
- Open the back file on bCNC.
- Go to Probe tab -> Probe, configure Pos: 0, 0, -3 and press the Probe button. The spindle will start going down very slowly until it touches the board. Press Z=0.
- Go to Probe tab -> Autolevel, press the Autolevel button and then press the Zero button with the crosshairs.
- Open the drill file WITHOUT saving changes to the previous file and WITHOUT deleting the probe mesh (it pops up 2 questions that you should both answer with No) and change the tool. Press Z=0 after the probe touches.

Another idea is to squirt some WD-40 or cutting fluid during the cut - I haven’t tested it but it sounds pretty good and maybe WD-40 isn’t conductive (so you don’t have to wipe it off during probing for tool changes). An idea by James.
pcb2gcode requires some configuration to generate correct G-Code for your setup. This can be done either with a gazillion command line flags or through a file, called millproject, that has to be in the same directory where the pcb2gcode command is executed. Here is a thoroughly commented millproject
for my machine:
millproject
# Pcb2GCode settings
metric=true # Use mm to read the following values (feeds/speeds/etc.), not imperial inches
metricoutput=true # Same, but for the output
zero-start=true # Start from 0,0,0
zsafe=1 # Safety height
zchange=5 # Height to change a tool - don't over-do it to avoid crushing your Z axis
software=custom # We're not using Mach or LinuxCNC
mirror-axis=1 # Mirror the design to X. Required for the back side
# voronoi=1 # (optional) Instead of cutting straight traces, cut the board only in the places that shouldn't connect with each other. Produces very weird boards but it's quite fast and optimal

# Milling - Trace engraving
zwork=-0.07mm # Depth of engraving - did quite a lot of testing and it seems 0.07 is quite consistent
mill-feed=600 # How fast to go, in mm/min - maybe go a bit faster?
mill-speed=10000 # How fast to rotate the spindle in RPM
mill-diameters=0.30mm # Calculated by pcb_mill_calc.py - 0.30mm for 0.2mm 60 degree endmill
isolation-width=0.55mm # Space between traces - I recommend higher than 0.5mm to be MUCH easier to solder and avoid bridges
milling-overlap=20% # How much should the passes to create the isolation width overlap - 20% is good

# Drilling
zdrill=-1.7 # Depth to drill a hole, +0.1mm more than the board thickness to have clean holes
zmilldrill=-1.7 # Same but for milldrill
drill-side=back # Drill the board from the back side
drill-feed=25 # Lower Z during drilling at 25mm/s - don't go much higher, CNCs don't like drilling
drill-speed=10000 # How fast to rotate the spindle in RPM
drills-available=0.3mm,0.4mm,0.5mm,0.6mm,0.7mm,0.8mm,0.9mm # Available drill diameters - You "should" have all the diameters smaller than your milldrill bit, if you don't have one it will be rounded to the closest one you have
milldrill-diameter=1.0mm # Diameter of the milldrill endmill - I suggest 1mm as you have much fewer toolchanges and it lasts quite long
min-milldrill-hole-diameter=1.0mm # Minimum diameter to milldrill - should be the same as your milldrill diameter

# Outline
zcut=-1.7 # Depth of cut for the outline
cut-side=back # Cut the board from the back
cut-feed=200 # How fast to cut the board in mm/min - can go a LOT faster I think
cut-vertfeed=25 # How fast to plunge into the board - don't go much higher
cut-speed=10000 # How fast to run the spindle in RPM
cut-infeed=0.85 # Do the cutting in multiple passes, 0.85mm each - maybe this isn't needed
cutter-diameter=1.0mm # Diameter of the cutter - use the milldrill bit
bridges=4 # Width of each tab to avoid flying PCBs after the outline is done
bridgesnum=2 # Number of tabs
zbridges=-1.2 # Z height while cutting tabs, -1.2 will result in 0.4mm tabs - 0.4mm is ok

# GRBL shenanigans
# G64 is not supported by GRBL
nog64=true
# https://github.com/gnea/grbl/issues/290
nog81=true
nog91-1=true
While the engraving bit is, let’s say, 0.2mm, you’re cutting 0.07mm below the surface, so the resulting cut will be wider than 0.2mm. To calculate the effective width of cut I created a Python script:
pcb_mill_calc.py
#!/usr/bin/env python3
import math

# For an example of angle=60 tip=0.2 and depth=0.1 check:
# https://www.calculator.net/right-triangle-calculator.html?av=0.1&alphav=60&alphaunit=d&bv=&betav=&betaunit=d&cv=&hv=&areav=&perimeterv=&x=97&y=22

def calculate_effective_width(angle, tip, depth=0.1):
    rads = math.radians((180 - angle) / 2)
    return 2 * depth / math.tan(rads) + tip

if __name__ == "__main__":
    import sys
    angle = float(sys.argv[1])
    tip = float(sys.argv[2])
    depth = 0.1
    try:
        depth = float(sys.argv[3])
    except IndexError:
        pass

    print(f"For the tip with {angle} degree angle, {tip}mm tip and for a {depth}mm depth of cut, the following effective width should be used:")
    result = calculate_effective_width(angle, tip, depth)
    recommended = result + (0.05 - (result % 0.05))
    print(f"\t{result} -> rounded up to 0.05mm (for best results): {round(recommended, 2)}")
Usage:python3 pcb_mill_calc.py <bit angle> <bit diameter> <depth of cut>
For example, for my 60° 0.2mm endmill with zwork 0.07mm:
python3 pcb_mill_calc.py 60 0.2 0.07
And I get the following result:
For the tip with 60.0 degree angle, 0.2mm tip and for a 0.07mm depth of cut, the following effective width should be used: 0.28082903768654766 -> rounded up to 0.05mm (for best results): 0.3
Oof, that’s it! That’s the hardest part, but it only needs to be done once - you can then copy the file between projects and tinker with it a bit.
The rest is quite easy, execute the following command:
mkdir -p /tmp/gcode && pcb2gcode \
  --front "/tmp/gerbers/${PROJECT}-CuTop.gbr" \
  --front-output "/tmp/gcode/${PROJECT}-front.ngc" \
  --back "/tmp/gerbers/${PROJECT}-CuBottom.gbr" \
  --back-output "/tmp/gcode/${PROJECT}-back.ngc" \
  --drill "/tmp/gerbers/${PROJECT}.drl" \
  --drill-output "/tmp/gcode/${PROJECT}-drill.ngc" \
  --milldrill-output "/tmp/gcode/${PROJECT}-milldrill.ngc" \
  --outline "/tmp/gerbers/${PROJECT}-EdgeCuts.gbr" \
  --outline-output "/tmp/gcode/${PROJECT}-outline.ngc"
It expects gerbers exported by KiCad (due to the CuTop naming convention and the rest) to be under /tmp/gerbers and a variable PROJECT with the name of the KiCad project. If you do have a KiCad project, I have an even better command that, using KiKit, generates the gerbers and feeds them to pcb2gcode automatically:
export PROJECT=${$(ls *.kicad_pcb)%.kicad_pcb} && \
  kikit export gerber "${PROJECT}.kicad_pcb" /tmp/gerbers && \
  mkdir -p /tmp/gcode && \
  pcb2gcode \
    --front "/tmp/gerbers/${PROJECT}-CuTop.gbr" \
    --front-output "/tmp/gcode/${PROJECT}-front.ngc" \
    --back "/tmp/gerbers/${PROJECT}-CuBottom.gbr" \
    --back-output "/tmp/gcode/${PROJECT}-back.ngc" \
    --drill "/tmp/gerbers/${PROJECT}.drl" \
    --drill-output "/tmp/gcode/${PROJECT}-drill.ngc" \
    --milldrill-output "/tmp/gcode/${PROJECT}-milldrill.ngc" \
    --outline "/tmp/gerbers/${PROJECT}-EdgeCuts.gbr" \
    --outline-output "/tmp/gcode/${PROJECT}-outline.ngc"
Continue to Step 2 from Workflow and you’ll be done in minutes!
This was quite a journey for me and it took me about 2 years to finish this workflow. It takes about 30’ to make a small board and it’s almost free. The cutters don’t wear much, copper clads are dirt cheap and widely available even in local stores. The boards turn out amazingly well with almost no post-processing required - maybe some flying copper hairs.
I’m already preparing a double-sided PCB workflow, most probably using a spindle camera (it’s cheap, don’t freak out). Stay tuned.
What I haven’t figured out is how to apply solder mask. It needs a weird spring-loaded tool that is able to remove 0.01-0.02mm of material. If you have any cool ideas, leave a comment!
This is a list of ways to make PCBs at home and why I chose milling over everything else:
I wanna keep this section up-to-date with problems that I stumble upon. If you have any problems, even machine-specific, please leave a comment.
First of all, make sure that you didn’t press the small Autolevel button. What this does is apply the autolevel offsets a second time, so the result is just as if you hadn’t leveled your workpiece - but from the opposite side.
Then, check that your probe wire doesn’t pick up noise from the motors or spindle. You can check this by doing some movement with your machine and watching whether a [P] randomly shows up in the machine status. If you have this problem, read the Grbl Wiki on the matter. Based on that I’ve created this board that has 4 optocouplers to isolate the limit switch circuit from the rest of the controller. Worked wonders for me.
Often the nRF52 micros get stuck or misbehave and reach a weird state with the pairings. The solution is usually just to clear them, so here’s Adafruit’s code to do that and a platformio.ini to make it easy.
platformio.ini
[env:clearbonds]
platform = nordicnrf52
board = particle_xenon
framework = arduino
The board doesn’t have to match yours exactly - it can be any generic board that uses the same chip that you actually have. For example particle_xenon uses the nRF52840, so it can be used for any nRF52840 board. It might not flash the correct LEDs though, so just hook up the serial port.
src/main.cpp
/*********************************************************************
 This is an example for our nRF52 based Bluefruit LE modules

 Pick one up today in the adafruit shop!

 Adafruit invests time and resources providing this open source code,
 please support Adafruit and open-source hardware by purchasing
 products from Adafruit!

 MIT license, check LICENSE for more information
 All text above, and the splash screen below must be included in
 any redistribution
*********************************************************************/

/* This sketch removes the folder that contains the bonding information
 * used by Bluefruit, which is "/adafruit/bond" */

#include <bluefruit.h>
#include <utility/bonding.h>

void setup() {
  Serial.begin(115200);
  while (!Serial) delay(10); // for nrf52840 with native usb

  Serial.println("Bluefruit52 Clear Bonds Example");
  Serial.println("-------------------------------\n");

  Bluefruit.begin();

  Serial.println();
  Serial.println("----- Before -----\n");
  bond_print_list(BLE_GAP_ROLE_PERIPH);
  bond_print_list(BLE_GAP_ROLE_CENTRAL);

  Bluefruit.clearBonds();
  Bluefruit.Central.clearBonds();

  Serial.println();
  Serial.println("----- After -----\n");
  bond_print_list(BLE_GAP_ROLE_PERIPH);
  bond_print_list(BLE_GAP_ROLE_CENTRAL);
}

void loop() {
  // Toggle both LEDs every 1 second
  digitalToggle(LED_RED);
  delay(1000);
}
This is a weekend project to keep your filaments safe & dry. It’s very easy to rebuild and adapt to your needs with (hopefully) available spare parts.
After a long term abusive relationship with the 3D printing hobby, where I was brutally murdered several times as described here, it was finally time to find a good partner and settle down. I bought the Original Prusa MK3S. I can finally print dickbutts using plastic. The printer just works, there’s nothing more to add.
Prusa holding a poker face after the Ender 3 told it what it went through
But getting through so much, I can now fully appreciate my printer and do the best I can to keep it happy and a big part of that is to buy good quality filament (I use Prusament and Devil Design) and keep it dry (around 20% humidity and below 60C, for almost all filaments and materials).
There are many ready made solutions to keep your filaments dry. Either purpose-built filament dryers or generic vertical food dehydrators, to dry a filament before use or after misplacing it inside your pool, but they don’t take care of permanent/long term storage.
There are also filament containers, which take care of storage as well. This is the most used type as you just set the target humidity and forget it. Of course there are both ready made storage solutions and DIY ones.
Dehydrating filament boils (hehe) down to more or less the following building blocks:
The printer was laying on a perfectly sized nightstand and it was a very good fit. The filaments were placed on the first drawer and random prints awaiting use as well as some spare parts were sitting on the second drawer. I just needed to somehow create a controlled climate on the first drawer.
My target was to use as much “building blocks” as possible (aka have around), so heat & silica gel.
Silica gel requires no further explanation - as packages from all over the world arrive at your house, you’re gonna build a big stock of them and never run out.
For the heat part though, I took an interesting turn: use the small heatbed I had from when I tried to make a delta printer and then spent thousands in therapy for PTSD.
The electronics to control the heatbed were the easiest part - I instantly knew I’d use one of the thousands of ESP8266 WeMos Minis I had lying around (I had no need for WiFi or the horsepower, but it’s a buck each and I had thousands), with a DHT22 temperature & humidity sensor and probably a screen to keep an eye on what’s going on.
So the plan was the following:
For this recipe you’re going to need:
3 minutes later I had both my debugging and (almost) finalized hardware. Yey!
Lego for adults
At this point I should point out that you can use the exact same components, just not in “WeMos mini shield” form, and use a breadboard, solder on protoboard or even make a board with your 3d printer, but I wouldn’t go that way. Just buy a bunch of WeMos shields from aliexpress for a couple of euros each and never go back. It’s fun!
I should point out that for no apparent reason, my obsession kicked in and I “had” to make a shield for the 5V voltage regulator (I wanted to feed from the same 12V line that I was gonna feed the bed) and a “backpack” shield on the relay that breaks out 2 pins to connect the bed thermistor to. I don’t know why I didn’t use a breadboard. My overengineering could not be tamed.
Another cheat mode I used in this project, apart from WeMos, is ESPHome. I love this lil fella!
ESPHome is a firmware for the ESP family that transforms it to an IoT device. It’s the programming equivalent of Lego (TM) for sensor-based projects in YAML. Definitely check it out - it’s easier than you think and it does not need (but is able to talk to) any other home automation services, devices or bridges.
I say that it’s a cheat as there’s no need for WiFi capability per se (although it’s nice to watch the humidity on your phone) but I didn’t NOT want it, and ESPHome made the whole project much easier and gave me the ability to program/update it over the air for free (as in beer, freedom, time, the boobs and the rest). Noice.
The resulting YAML I used (reading the thermistor was a tad tricky and I was stupid enough to lose the forum link that explained it):
esphome:
  name: filament_drawer
  platform: ESP8266
  board: d1_mini

wifi:
  ssid: "Hello"
  password: "*****"
  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Filament Drawer Fallback Hotspot"
    password: "**********"

captive_portal:

# Enable logging
# logger:

# Enable Home Assistant API
api:
  password: "***"

ota:
  password: "***"

font:
  - file: 'slkscr.ttf'
    id: font1
    size: 8
  - file: 'BebasNeue-Regular.ttf'
    id: font2
    size: 30
  - file: 'arial.ttf'
    id: font3
    size: 12

sensor:
  - platform: dht
    pin: D4
    model: AM2302
    temperature:
      name: "Filament Drawer Temperature"
      id: filament_temp
    humidity:
      name: "Filament Drawer Humidity"
      id: filament_hum
    update_interval: 1s
  - platform: ntc
    sensor: heatbed_sensor
    id: heatbed_temp
    calibration:
      b_constant: 3950
      reference_temperature: 25°C
      reference_resistance: 100kOhm
      # - 100kOhm -> 25°C
      # - 1641.9Ohm -> 150°C
      # - 226.15Ohm -> 250°C
    name: HeatBed Temperature
  - platform: resistance
    id: heatbed_sensor
    sensor: heatbed_source
    configuration: UPSTREAM
    resistor: 100kOhm
  - platform: adc
    id: heatbed_source
    pin: A0
    update_interval: never
    filters:
      - multiply: 3.3

switch:
  - platform: gpio
    pin: D2
    id: ntc_vcc
    restore_mode: ALWAYS_OFF
    internal: True
  - platform: gpio
    pin: D1
    id: heatbed_power
    restore_mode: ALWAYS_OFF

interval:
  - interval: 0.2s
    then:
      - switch.turn_on: ntc_vcc
      - component.update: heatbed_source
      - switch.turn_off: ntc_vcc
  - interval: 1s
    then:
      - if:
          condition:
            lambda: 'return id(filament_hum).state > 20 and id(filament_temp).state < 50 and id(filament_temp).state > 5 and id(heatbed_temp).state < 52 and id(heatbed_temp).state > 5;'
          then:
            - climate.control:
                id: heatbed
                mode: AUTO
          else:
            - climate.control:
                id: heatbed
                mode: 'OFF'

climate:
  - platform: bang_bang
    id: heatbed
    name: "HeatBed Controller"
    sensor: heatbed_temp
    default_target_temperature_low: 28.5°C
    default_target_temperature_high: 30°C
    heat_action:
      - switch.turn_on: heatbed_power
    idle_action:
      - switch.turn_off: heatbed_power
    visual:
      min_temperature: 20°C
      max_temperature: 50°C
      temperature_step: 0.5°C

spi:
  clk_pin: D5
  mosi_pin: D7

display:
  - platform: pcd8544
    reset_pin: D0
    cs_pin: D8
    dc_pin: D6
    update_interval: 2s
    contrast: 70
    lambda: |-
      it.printf(18, 0, id(font1), "Filaments");
      it.printf(14, 4, id(font2), "%.1f%%", id(filament_hum).state);
      it.printf(0, 34, id(font3), "%.1f°C", id(filament_temp).state);
      it.printf(42, 34, id(font3), "%.0f°C", id(heatbed_temp).state);
It actually did! And pretty well! I wouldn’t want to change any humidity controlling related stuff. Here are some numbers and graphs to make you believe me:
Yey! Graphs and timelines!
Above you see that as soon as the heatbed temperature (top red) rises, ambient humidity (bottom red) falls. Top blue is ambient temperature - it must be kept below the glass transition temperature of the materials inside the drawer - in my case 60C for PLA & PETG.
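The heart of the climate block in the YAML above is plain bang-bang control with hysteresis. Stripped of ESPHome, the logic is roughly this (a sketch with the same 28.5-30°C window):

```python
# Bang-bang (hysteresis) control, like the ESPHome bang_bang climate does:
# heat below the low threshold, idle above the high one, and keep the
# previous state inside the window so the relay doesn't chatter.

LOW, HIGH = 28.5, 30.0  # target bed temperature window in °C

def bang_bang(temp, heating):
    if temp < LOW:
        return True    # too cold -> turn the bed on
    if temp > HIGH:
        return False   # too hot -> turn the bed off
    return heating     # inside the window -> keep doing what we did
```

The dead band between the two thresholds is what keeps the relay from clicking on and off every second around a single setpoint.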
What I might fix at some point is to remove the upper wood lip to allow me to sit the filaments vertically - right now they’re sitting horizontally and I can fit 4 of them.
Another thing I’d like is to swap the relay with a mosfet to avoid that clicking sound - most times I don’t even hear it but it would be neat, and as I’m at it design a proper 12V->5V shield.
Why is it that hard to 3D print across years? Why can’t I have a consistent printing experience while not spending a kidney? I don’t get it. Why is the machine constantly failing? I’m a computer guy, I know that human errors are all over the place, but how does a machine break on its own so frequently? And don’t get me wrong, it might be a budget Creality Ender 3 but it’s proven to be a good machine and there’s nothing majestic about its components. This is me… sad…
It’s not often that I’m deeply sad about technology. Most times I’m angry and I make dirty or too-opinionated jokes about the subject and I feel better. But at this point, I’m just sad. Today my printer broke again and I have to spend half its cost to fix it. I just want it to do what it was supposed to do, not something else, not hack it, not do it super fast or majestically. I just want to print plastic stuff for fun.
After all these, for some reason I still think that Ender 3 is a piece of nice machinery and I suggest to any 3D printing enthusiast to get one. What I do not suggest is getting into 3D printing in the first place. It’s a sad place where your dreams get brutally murdered and your 48 hour long print fails at the last hour or a small fire takes place.
I got the bug though, so I’ll probably continue to have it as a “hobby” - frightened, anxious and sad. I wish you the best of luck.
Oh Rust, how much I love you… Love at ~~first~~ third sight, like I had with my English teacher. She was ugly but I was 10 and she was a female that stood near me for an hour and talked to me in a soothing voice. That’s what Rust is, ugly but it’s there for you with a soothing voice.
On the other side we have C++, which the Arduino Framework is written in. Classes here and there, mixed with C, requiring a 3-day workshop to understand what the “standard” way of blinking a LED is - hence the headache of each Arduino library taking matters into its own hands. I hate reading C++ by the way and don’t know how to write it. That’s why I want to just forget about it and just call it from Rust.
I’m gonna use PlatformIO, which is the swiss-army-knife for the Arduino Framework - it manages libraries, board definitions, toolchains, flashing… Everything that you’d possibly need to write and deploy code to an MCU. Apart from Rust. pio knows nothing about Rust and was never intended to.
Now let’s make those two KISS: run Rust on MCUs while using the Arduino Framework!
TL;DR: My attempt lives in this repo. I failed.
What I want to achieve is to be able to call digitalRead and Serial.println from Rust code that will run on my nRF52. I chose the nRF52 because I want to build a Bluetooth keyboard with it and Rust has official Tier 2 support for it, unlike Xtensa (ESP32/8266) and AVR (ATmega/ATtiny).
First of all, let’s lay down some ground rules on HOW I am willing to achieve that:
std - see above
The plan to achieve the above was hella abstract:
Bindgen kinda compiles the header that you pass to it with LLVM and generates Rust headers. It’s a marvelous project, but it might miss something. Unless of course it’s C++ code. Then it trips like an LSD overdose.
After poking around for some time though, I got it: I just passed almost all of the compiler flags that platformio was passing to gcc directly to bindgen. It kiiiiinda worked, in a weird way. WIN!
Writing a Rust blinky was easy (the code is here). WIN!
Platformio compiles the whole framework when you give it empty source code (main.c). WIN!
I copy-pasted the link command that platformio was using and I added Rust’s compiled object file (which can be done using this option). And it worked! WIN!
I got the firmware! I WIN! Profit!
I flashed the firmware and actually, the LED blinked. I was excited as fuck. Somewhere at this point I started writing this post and I’d mark it as build: passing, but then…
There’s a reason that I don’t have exact commands of the above steps so everyone can happily write Rust on their little fella. First of all, it’s been almost 2 months that I haven’t touched the project or this draft so I have no idea what I actually did. Second, this did not turn out as a win. While I can blink a LED, there’s almost nothing else I can do.
I started fumbling with platformio to incorporate bindgen execution, Rust compilation and final code linking into just a platformio run. Then I met SCons. SCons is the build system that platformio uses to put all these bits and pieces together: toolchains, frameworks, compilers, linkers, linker scripts, source code, header files, etc. I tried to manually change variables, redefine functions, and all the good monkey patching that Python can do, but it was a dead end. My brain stack pointer was always overflowing; I just couldn’t follow what was done where and why. Nevertheless, I kinda did it. Didn’t have a good time though.
I could build blinky with one command, good.
Print “Hello World”. Nope. Never. Not a chance. I needed to somehow export the Serial object from C++ to Rust and call Serial.println. After hours and hours of reading the headers and the source of the Arduino Framework and trying different bindgen options, I could not do it. It would require a huge amount of effort.
Any useful API in Arduino is a C++ class, so if I wanted to overcome this, I had to write everything from the ground up. That’s when I tossed the project.
I don’t get why C/C++ build systems are so complex. I definitely lack deep knowledge, especially in C++, but come on… This is just too much. Even the Makefiles of a project bigger than 1k SLOC don’t make any sense and you need a manual to understand where anything takes place and why it’s done. It’s a shame.
About the C++ vs bindgen fight, there’s not much to tell; I don’t think there will ever be a time when bindgen can handle the code that I read. It’s too complex, it’s too human.
Also there are other solutions to write Rust on an MCU instead of this bad idea:
This is a small journey on how I reverse engineered the MagicForce 68 keyboard and tried to add bluetooth functionality to it. It’s a small keyboard (68 keys, 65%) and is USB-only (it’s not the smart model). It has a controller that I can’t flash with a custom firmware, so I had to hook wires on it.
The first step in determining what I was against, was to at least partially disassemble the keyboard.
After the 6 screws under the keyboard are removed, the bottom cover is free and can be carefully removed as well (it has wires to the mini-USB connector board, so beware). The nice red PCB is now ready to be destroyed 😈
This is what I collected: It uses the Holtek HT68FB550 MCU - Datasheet - LQFP48 package
It exposes the following in the 5-pin header (bottom left in the photo):

- VCC
- GND
- PA0/TCK1/OCDSDA - Used for debugging
- Reset/OCDSCK - Used for debugging & programming
- UDN/GPIO0 - USB D-, used for programming
Debugging & programming are different procedures; according to the datasheet, they use different pins. But it refers to a “Holtek Writer” as both the programmer AND debugger. I could only find the e-WriterPro. Seems fucked up (no docs, too expensive, not gonna work on linux/open source software, etc.).

It is a classic matrix-diode style keyboard: it gives logical 1 (5V if I remember correctly) to the rows and reads it from the columns (that way because of the direction of the diodes).
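For reference, the scan loop such a matrix implies is simple: drive one row at a time and read every column. A sketch, with made-up set_row/read_col helpers standing in for real GPIO calls (this is not the Holtek firmware):

```python
# Sketch of a keyboard matrix scan: energize one row at a time and
# read the columns. set_row/read_col are made-up stand-ins for GPIO.

def scan_matrix(set_row, read_col, rows, cols):
    pressed = []
    for r in range(rows):
        set_row(r, 1)                 # put logical 1 on this row
        for c in range(cols):
            if read_col(c):           # the diode lets it through -> key down
                pressed.append((r, c))
        set_row(r, 0)                 # release the row before the next one
    return pressed
```

The diodes are what make this work with multiple keys held: they stop the current from sneaking back through other pressed keys and ghosting extra columns.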
Matrix to MCU pin mapping (Rows: Top to Bottom, Columns: Left to Right):
Pin Description | Pin | Name |
---|---|---|
NC | 42 | PgDown, PgUp, Insert |
NC | 46 | Shift…, Up |
NC | 44 | Tab…, Delete |
NC | 43 | `123… |
NC | 47 | Ctr…, Left, Down |
NC | 45 | Caps…, Right |
Pin | Pin Description | Name |
---|---|---|
10 | NC | 9 |
34 | PA0/TCK1/OCDSDA | Left |
30 | PD5 | 4 |
14 | PD1 | ` |
7 | PE2 | 8 |
28 | PD3 | 2 |
29 | PD4 | 3 |
31 | PD6 | 5 |
36 | PA2/TP3_1/OSC2 | Delete, Up, Down, Right |
37 | PA3/TCK2 | Backspace, PgDown |
11 | NC | 0 |
26 | NC | =, PgUp |
32 | PD7 | 6 |
27 | PD2 | 1 |
33 | PE0/VDDIO | 7 |
12 | NC | -, Insert |
All LEDs have a common cathode on Pin 39 - PA5/SDIA/TP1_0 - and a common anode to Vcc.
These are all the data that I gathered. Also, (spoiler) I ended up desoldering all of the switches to create my own keyboard, so I got access to the front of the PCB. It’s empty, but it’s VERY time consuming to remove all the switches, so here are some photos:
Ok, so now we know what we’re up against. But what now?
The idea began with my frustration with wires - right, bluetooth. But how?
I had an Adafruit Feather Bluefruit at hand, based on the marvellous nRF52832. I love the nRF52 family, but after a bit of research I learned that the 52832 does not have USB support and does not have a “CryptoCell” - a crypto accelerator - which means no BLE Secure Connection. The nRF52840 offers all these goodies (while the BLE SC support for arduino is under development at the time of writing) but I’d have to spend money before even having a PoC. Let’s get to work with the 52832!
There was a side idea: apart from the regular bluetooth keyboard functionality, to add U2F and/or GPG SmartCard support. So I started searching whether anything like this exists.
There you go, Plikter. It comprises the firmware that runs on the Feather and 2 daisy-chained shift registers (TI CD4021BE) that read the columns, as there are not enough pins on the Feather - and of course these are on a custom board whose gerbers you’ll find in the repo, made with a plotter following the etching method described perfectly by stavros. Soldering time!
It didn’t work.
I debugged it and I think that the internal resistors on the ports of the keyboard MCU that were connected to the rows & columns were interfering, but I’m not sure.
Anyway, I had a (mostly) ready firmware & hardware for a keyboard and I was too frustrated by flying USB wires on my desktop. I made theSiCK-68, but that’s a story for another time.
Hope you had fun!
First of all, let’s flash Adafruit’s nRF52 bootloader for easier future flashing.
My J-Link was “Broken. No longer used” - or so the JLink tools said (AKA bought from eBay). So I had to go with openocd.
Connect the J-Link (or any SWD capable debugger supported by openocd - even an FT232 breakout will do) to the target - I have a Bluefruit by Adafruit.
pip3 install --user intelhex
git clone https://github.com/adafruit/Adafruit_nRF52_Bootloader
cd Adafruit_nRF52_Bootloader
git submodule update --init
make BOARD=feather_nrf52832 all
FIRMWARE=lib/softdevice/s132_nrf52_6.1.1/s132_nrf52_6.1.1_softdevice.hex
sudo openocd -f board/nordic_nrf52_dk.cfg -c init -c "reset init" -c halt -c "nrf5 mass_erase" -c "program $FIRMWARE verify" -c reset -c exit
FIRMWARE=_build/build-feather_nrf52832/feather_nrf52832_bootloader-0.3.2-28-g79a6a0c-nosd.hex
sudo openocd -f board/nordic_nrf52_dk.cfg -c init -c "reset init" -c halt -c "program $FIRMWARE verify" -c reset -c exit
💡**NOTE**: `nrf5` command was missing from my package manager’s `openocd` and I needed to install the git version!
Now the bootloader should be flashed and we’re able to flash over serial from now on! Let’s flash MicroPython (I advise not flashing master but a stable tag):
git clone https://github.com/micropython/micropython
cd micropython/ports/nrf
./drivers/bluetooth/download_ble_stack.sh
make BOARD=feather52 SD=s132 FROZEN_MPY_DIR=freeze all
pip install --user adafruit-nrfutil
adafruit-nrfutil dfu genpkg --dev-type 0x0052 --application build-feather52-s132/firmware.hex firmware.zip
adafruit-nrfutil dfu serial --package firmware.zip -p /dev/ttyUSB0 -b 115200
Done!
dzervas nrf> miniterm.py --raw /dev/ttyUSB0 115200
--- Miniterm on /dev/ttyUSB0  115200,8,N,1 ---
--- Quit: Ctrl+] | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
MicroPython v1.12-dirty on 2020-04-23; Bluefruit nRF52 Feather with NRF52832
Type "help()" for more information.
>>>
If you want to play with other kinds of firmware (Rust/C/whatever) and you have to flash ELF or hex files, here is a little helper (put it in your .bashrc or .zshrc):
function adafruit-nrfutil-hex() {
    port=${1}
    file=${2}

    if [ "$#" -ne 2 ]; then
        echo "Usage: $0 <port> <hex_file>"
        return 1
    fi

    if [ "$(file "${file}" | cut -d ' ' -f 2)" = "ELF" ]; then
        echo "[+] Converting ELF file to hex"
        objcopy -O ihex "${file}" "${file}.hex"
        file="${file}.hex"
    fi

    echo "[+] Generating package"
    adafruit-nrfutil dfu genpkg --dev-type 0x0052 --application "${file}" "${file}.zip"

    echo "[+] Flashing package over UART"
    adafruit-nrfutil --verbose dfu serial --package "${file}.zip" --port "${port}" --baudrate 115200 --singlebank --touch 1200
}
💡This whole setup described is deprecated. Cloudflare offers this whole service for free with a much easier setup and 0 maintenance. Reported by an [HN comment](https://news.ycombinator.com/item?id=22838330) (my handle on HN is ttouch). Don’t use what I describe below unless you really have a reason not to use Cloudflare. That’s what this blog is about. Failures 🙂
That’s what my mother always said when I was little. And don’t talk to strangers. And the cold comes from the feet (so never walk barefoot). I never got it. How the hell do you use proper signed certificates in a private network? Why have a house if you can’t walk around barefoot? Anyway…
This is my trip on using Let’s Encrypt in a homelab setup on a very limited budget. It should be a fire & forget implementation. I don’t want to scp 20 certificates every 3 months, but it has to be a secure implementation as well - exposing the internal services to the internet is a no-go.
A little side note to the readers who are not yet sure why I don’t go for a PKI (aka managing my own CA) solution: that CA can sign ANYTHING, even google.com, so if the CA gets compromised and you don’t notice, you’re in deep shit… That can be solved with the Name Constraints extension (which limits the domains a CA can sign to a certain domain or TLD). But then again, where do you keep it? HSMs are pricey. Even then, will you enter the password every now and then? On which machine? Will it be air-gapped? How do you transfer the CSR? Or maybe you set up your own ACME provider (like step-ca does)? Then you will have to harden the whole machine, as it’s not air-gapped…
Also, mobiles no longer trust user-provided CAs. Actually they do, but only for the built-in browser & mail client, so you lose any native app that supports your self-hosted services (e.g. the Home Assistant app).
This struck me on a Monday night, ~4 A.M., while trying to sleep. I was thinking about all the things that I explained above. How? Where? For how long?
Then *poof*, out of nowhere: use a whole domain, or a subdomain, that points at a DNS server inside my home network, just to prove to LE that I own it, and then use the signed certificates however I like.
At that point, I thought that this was kinda abusing Let’s Encrypt, but then again, isn’t that how VPCs work now?
Of course I would have to make my local DNS server "spoof" that domain and make it point to local IP addresses (by hand - can't have DHCP clients messing with my certificates…). Well, that's super easy: a DNS server is already running on my network (it resolves DHCP hostnames) and I have root access on it (I have an Alix2), so if I'm gonna run all of my services on a single server, I can put in a wildcard A record (each service will have its own subdomain).
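As an illustration, if that DNS server happens to be dnsmasq (a common choice for such boxes - the domain and address below are made-up example values), the wildcard "spoof" is a single config line:

```
# dnsmasq: resolve *.home.whynot.net (any subdomain) to the LAN server
# (example values - adjust the domain and IP to your own setup)
address=/home.whynot.net/192.168.1.10
```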
So I'll need a dynamic DNS (a paid service or, if self-hosted, yet another moving part). But wait, why set up that shit and not create a VPN tunnel between a random VPS and my server, and forward any DNS requests that reach the VPS to my server over the VPN? Bingo! :D
Let's Encrypt is a CA that issues certificates for free AND automatically. It's really amazing. They made the web a better place!
What they need to know to sign my certificate is just that I actually own the domain I say I do. Nothing more. But how do I prove such a thing?
They use the ACME protocol to certify that I own the sub/domain I request a certificate for. I have to successfully complete a challenge in order for them to verify it's me. I won't go into much detail - as I actually don't know the whole process - but there are 3 available challenges (pick 1):
Let's Encrypt gives a token to your ACME client, and your ACME client puts a file on your web server at http://<YOUR_DOMAIN>/.well-known/acme-challenge/ (as described here)
With DNS, a TXT record should be hosted containing a random string that LE gave us, at a specific subdomain of the subdomain we're trying to sign (`_acme-challenge.<YOUR_SUBDOMAIN>.`). The DNS challenge is also the only challenge that can issue a wildcard certificate (as there's no way with an HTTP request to prove that I'm in control of all of the subdomains, unlike a DNS wildcard record).
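Concretely, the record the ACME client publishes looks something like this (the hostname and token are made-up illustration values - the client creates and deletes this record for you):

```
; made-up example of a DNS-01 challenge record
_acme-challenge.home.whynot.net.  300  IN  TXT  "gfj9Xq...Rg85nM"
```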
For more info about ACME challenges, clap (click/tap) this.
Of course nobody wants to move around random strings by hand or create new certificates every 3 months (LE only signs certificates for 3 months max), so there are a bunch of ACME clients that handle all that fuss. I'd just have to reload the certificate every 3 months - or just restart the whole service.
Ok, all this sounds good (and a bit complicated), but how will I get the green lock on my plex porn cluster, you ask? Let me show you, I answer…
Stuff we need:
- `home.whynot.net`
- `<service>.home.whynot.net` pointing to the appropriate machine(s)

Ahoy reader, this is kinda an open letter. But mostly it's my desperation in computer font. Rust, why don't you love me? I read about you, I spent nights and days. I fought the borrow checker monster for you. I learned about lifetimes. And you promised two things:

- A "systems" language, close to C
- Memory safety
I just wanna call you via a C binary. I don’t want you to fly. I just wanna love you…
But before we cut to the chase, let's get to the drama. You know how I love backstories (and I've been watching How to Get Away with Murder).
The project that all this happened for is not a new idea to me. It has boiled inside me for quite some time. I'm referring to mage. I started writing it about 6 months ago, when a friend asked me for a stable tool that is able to listen for TCP shells and has TTY support, for his OSCP (that's a story for another day; for more, check out netcatty). Of course I stopped whatever (another project) I was doing and started coding. I was currently into Go, so I went with it. As I was writing netcatty, first of all I lost a huge opportunity to name it netkitty, which is way better, and second, I started spiraling out about what I could actually do. Why only TCP? I can do better!
That’s where mage was born. Mage is a tiny protocol, intended to be encapsulated inside all kinds of transports. HTTP requests, headers, cookies, TCP timestamps, DNS queries etc. What a marvelous idea!
Remember when post-exploitation toolkits & implants communicated with the C2 over TCP or HTTP? Remember when you could feel the earth shaking every time a meterpreter payload exited the final gateway of a target, because a stray UDP connection to a Russian server on port 1337 just opened? Remember when the connection would get killed and your server banned 5 minutes after you got a shell? Well, mage is willing to do its magic to stop this madness.
The idea is that you generate a binary payload (msf or whatever) and you "wrap" it using mage. By wrap I mean that the mage .so (or .dll) would be injected inside the binary, which is then binary-patched so that all the `socket.h` (or `winsock`) calls use the mage functions (spoiler: "wrapping" is not yet implemented - that's what this post is about).
mage primarily does the following:

- Connect to the C2 (completely ignoring the address that the implant wanted to connect to) over whatever protocol you set up during wrapping
- Exchange keys with the server (libsodium)
- Start encrypted communication with the server (libsodium)
Useful features include chunking, very low overhead, support for out-of-order packet reception (and maybe sometime packet retransmission?)
That’s all good, but I’m still talking about a Go project huh? No…
As with any good project, you have to write it at least twice for it to be good. I think I maybe overdid it and rewrote it too fast. Here: I rewrote it in Rust. Rust was a much better fit, as it's much closer to the system, it doesn't carry a GC, and the overall Rust ↔ `<insert-lang-here>` interfacing I THOUGHT was easier. If I only knew…
As said, wrapping is not ready. Nor are any actually useful transports. Right now the protocol, encryption/decryption, multiplexing and thread channels are ready. To start implementing wrapping, I had to create a libc API. All answers led to `cbindgen`, a very cool project that all it does is generate C headers - but to use it, you need to create a C API first!
The "final" struct that I wanted to export was `Connection` (see here):
```rust
pub struct Connection<'conn> {
    pub id: u32,
    stream: Stream,
    reader: &'conn mut dyn Read,
    writer: &'conn mut dyn Write,
    channels: HashMap<u8, Vec<(Sender<Vec<u8>>, Receiver<Vec<u8>>)>>,
}

impl<'conn> Connection<'conn> {
    pub fn new(
        id: u32,
        reader: &'conn mut impl Read,
        writer: &'conn mut impl Write,
        server: bool,
        seed: &[u8],
        remote_key: &[u8],
    ) -> Result<Self> { ... }
}

impl Read for Connection<'_> { ... }
impl Write for Connection<'_> { ... }
```
The C API has to be like that (to be in-place compatible with `socket.h`):
```rust
#[no_mangle]
pub unsafe extern "C" fn connect(_socket: c_int, _sockaddr: *const c_void, _address_len: c_void) -> c_int

#[no_mangle]
pub unsafe extern "C" fn send(_socket: c_int, msg: *const c_void, size: usize, _flags: c_int) -> usize

#[no_mangle]
pub unsafe extern "C" fn recv(_socket: c_int, msg: *mut c_void, size: usize, _flags: c_int) -> usize
```
It does not implement all the `socket.h` functions, but I started with the most vital ones.
This is my target: expose `connect`, `send` and `recv` to C and let them handle the whole logic. No mystery threads n' stuff - that could mess a lot with AV evasion (and while AV evasion has nothing to do with this project, I shouldn't make it harder).
The problem that I quickly realized was that there was no way to have a "state". I couldn't just pass the `Connection` struct back & forth the way `socket.h` passes socket descriptors! I have to adhere to the function signatures, and if someone messed with my struct in a completely unchecked manner, anything could go wrong.
So I went on and tried to create a static object holding a `Connection`, that would be initialized on `connect`. Oh, the horror…
Rust says that it needs to know the size of `Connection` at compile time to let me have it as a static. That's not possible. I add a reference, but it can't live long enough, so I go with `Box`. `lazy_static` enters the game. No idea what it does, but it solved a problem with the static. But it introduces another: no mutability. So I add a `Mutex`.
Right now we have this:
```rust
lazy_static! {
    static ref CONN: Mutex<Option<Box<Connection>>> = Mutex::new(None);
}
```
Ok, that's fine. It compiles (nobody knows if this actually works yet). But then another rabbit hole starts. Inside `connect`, among other stuff, I call `Connection::new`. `Connection::new` accepts `reader: &'conn mut impl Read, writer: &'conn mut impl Write`, and these are satisfied by a `TcpStream` and its `try_clone().unwrap()`. Now I can't borrow these, as they're inside the function scope.
This is the problem: I don't know how to pass the `TcpStream` into a new static `Connection`. I tried making it static as well, `Box`ing and `Rc`ing it. Didn't fucking work. If anyone can help, please do so…
I know that you don't read this type of post often - or I don't often read them (ranting due to lack of skill). But this was mainly a rubber duck debugging session for me, and it's one of the very few moments that I'm so stuck that I'm thinking about abandoning the project. Most times I just get bored or find something new. This is different. I've hit a brick wall and can't find even a really dirty hack around it (even though I hate "hacky" code).
Happy Hacking!
Take a breath, sit back and think: Why the fuck not spend some money instead of endless, painful hours? You don't have money? Ok, ok, but what if you… WAIT. Wow, a new world, 3D printing, ACTUAL 3D printed hollow cubes that I PRINTED. Oh wait, is that smoke?
Welcome, ladies and gentlemen, to another miserable and painful part oooooof *drumroll* the CHINESE FACTORY!!! Starring:

- Tears
- Anxiety
- A Creality Ender 3 3D printer kit that I got from gearbest
- 1 month of waiting
- Smokey electronics
This time, once again, I thought I knew, that I had learned from my mistakes. If you don't know what I'm talking about, check this out. It was time to buy my first whole 3D printer kit. Backed by a very big community, the Ender 3 by Creality was a very good shot. Everyone was astonished by the results that this baby could achieve. Medium printing speed, medium noise, but the printed object was very nice. I also ordered with it TWO BLTouch clones. Not one (cause I knew), but TWO. Noice.
I saw a couple youtube videos, just to be sure and get some tips, but the build process was pretty simple. They nagged that it lacked a step or two or a screw, but with my experience on the Rostock Mini and the Prusa i3, I couldn’t even get why they were nagging. I had spares of everything: nuts, bolts, motors, drivers, boards, beds. I was fairly sure that as soon as something goes wrong (cause I was FAIRLY sure about that), I’ll be able to replace it. The device even came pre-flashed! I didn’t even have to fiddle with Marlin! What I was missing all these years…
After I put everything together, I turn on the machine, I select a pre-sliced model and I hit print. AND IT PRINTS. The bed was a bit off. BUT IT PRINTS. I tried not to cry cause that shithole that I called home would flood in an instant and burn my printer.
Ok, everything works, I level the bed a bit and I fire up Cura for some serious (meh) shit: my first useful model. A little vent ring to cool the parts all around the hotend. And it prints it smooth as fuck. I just couldn't believe it… I had a working 3D printer!
So now, let's mount the BLTouch. I start printing the model, but somehow I forgot that the silver "ear" of the paper clip holding the bed is open on the left side of the bed. And it gets stuck on the Z axis. And before I know it, the cable connecting the controller to the screen is orange-ish and the controller is smoking a pack of Marlboros…
I turn off everything, I panic and I’m just staring at the black aluminum brick that I had in front of me…
I plug the printer again, to see what the damage was: XYZ and the screen were dead. Everything else seemed fine. Thermistors, heaters, extruder, all fine. The screen was completely optional to me, as I ran the printer via USB, so the real problem was XYZ. Debugging was officially in progress…
I was pretty sure that the stepper controllers were fried. They are fragile and it happens. Problem was that they were soldered on the PCB. BTW, a quick note:
Dear 3D printing community, after I was done panicking and crying over my dead printer, I remembered that I know how to wield a soldering iron. So I found the pinouts of the Creality "Melzi" board and scratched the traces of `dir` & `step` (to expose some copper to solder on) to break out the pins and hook them to the backup stepper drivers that I had. I quickly soldered a circuit on protoboard (with solder bridges) to get a nice pinout and hooked the board onto it.
Nope, XYZ still dead. Ugh…
The logic analyzer kindly explained to me that the pins coming from the MCU were dead - this happens when you feed weird stuff to an AVR (e.g. over 5V). The chips don't die; they just give away that specific pin. They are very tough…
Well ok, that’s fine, I had a lot of spare pins in the screen connector, now unused. The idea was to remap the step/dir pins to them. That was fairly easy, as I had hooked the whole lanes from the PCB to the steppers, without cutting any traces. After some pain to understand which screen pin is which, I finally did it :) I changed which MCU pins talk to the stepper drivers.
It was alive :)
After that incident, about a year passed and I don't know why I left the printer on the side. I had fiddled with it too much; I did stuff that wasn't necessary. I tried to switch to RAMPS 1.4, but had problems with the heating elements not heating enough, even after cutting the D1 diode (spoiler: it was the polyfuse). I switched from Marlin to klipper and broke the printer out into two boards, etc. etc. I didn't get to print anything at that point. There was always something problematic that didn't let me print.
About a year passes and I get a girlfriend. I tell her that I have a 3D printer that is currently in an unknown state. She was AMAZED and asked me why I don't fix it. That was it. That was the slap that I needed to get back on track and fix the damn thing. When a partner gets excited about a nerdy thing, you don't let either the partner or the thing go. You just hold on to what you do. Until they orgasm. Or until you orgasm. Or both. Or until you finish the project (lol). Anyway…
I ordered a replacement board. Gearbest gave me $50 off and I got a new official Creality board for about $20 **more. Kinda stupid move, as the price was insane** ($70 for an arduino with 4 stepper drivers), I know, but I couldn't get back into the rabbit hole again…
I hook it up, I hook up the BLTouch and BOOM. It works… and it's unbelievably silent. A quick google search showed that I had luckily upgraded to TMC drivers, which are extremely silent. The only noise was the fans! Wow…
After some playing around and overcoming some difficulties:

- `G28` negates bed leveling - you need a Marlin setting to fix that
- Probe X/Y offset settings are not for fun - the bed mesh is shifted
- The teflon tube on the hotend goes ALL the way down to the heatblock
- PLA pieces can get into the hotend fan and stop it (and we know what that means)
- PLA and PLA+ are not the same
- The Ender 3 plastic extruder is trash - get the aluminum one
- A glass bed with carbon finish is amazing - well worth the 20$
- OctoPrint is quite neat & it can send you image notifications on Telegram
The printer was actually fine. It still prints very nicely! I've printed several things for the home, and the GF even got the hang of tinkercad and designed & printed some stuff!
WhyNot.Fail is not only about fails, but for success stories too, as 99% of the time they include massive failures.
All I ever wanted, was a tiny Chinese factory inside my house. I’m not talking about the people, suicidal thoughts or racism against Chinese people, I’m talking about being able to make stuff. Quickly and effortlessly. So, instead of paying a factory and waiting for a month, I’d like to pay 10x the cost and cry myself to sleep, before I can get what I want. But it’s a one time thing :) (per machine).
After having so much stuff in mind, it was clear what the first machine I needed was: *angel voices* a 3D printer.
Not only did I want to make several dumb plastic things for the house (like a hanger, a soap holder, etc.) and enclosures for several hardware projects, but most importantly: to make OTHER machines. It's like the saying: crack a hash and you'll pwn 1 machine, learn to phish and you'll pwn the planet. So, it was clear (although the saying is not), I NEEDED a 3D Printer.
I have had experience with a 3D printer before, and it was absolutely horrible: warped bed, 2 broken control boards (!) and more than 200 hours of debugging, never having been able to print anything but a 10x10mm hollow box. It was a Prusa i3. What did I learn from that:
So after having learned that, you'd think I'd have an idea on where to head next for a successful 3D printer. A ready kit that is proven to work and has a wide community? LOL NO. A FUCKING REPRAP DELTA PRINTER THAT NOBODY HAS EVER HEARD OF BEFORE. I have no idea how I settled for that thing, really. Maybe the lack of money. The printer I'm talking about is Rostock Mini.
So, let's start gathering parts: 3DHubs for 3D printed parts and eBay for everything else. Now great ideas started flying… Why get the 5mm carbon rod that the RepRap clearly references? I'll get a 6mm one! Sure, it'll work the same! Order the printer base cut on a CNC? Nah, let's gather everything else and I'll just make some holes in a piece of wood. That fan for the all-metal E3D v5 hotend is optional, I'm sure.
Can you see where this is going? Let's break down this disaster…
As this is a delta 3D printer, it uses some rods to hold the “effector”. The effector is the base that holds the hotend. It needs to be very lightweight, as the motors are pretty far away and the only thing moving the whole construction are GT2 belts. So a good idea is carbon fiber rods. The printer was designed for 5mm carbon fiber rods but I got 1m long 6mm OD carbon fiber rod and cut it by hand.
The rods were not of exactly the same length, and 3D printing teaches you that almost everything has to be precise as fuck. But that did not prove as huge a problem as the reality of 5mm vs 6mm holes for the rods in tiny plastic parts. When I realised that, I tried drilling the holes out. But the parts just broke (duh…).
So I went for the second best option: find a solution that does not involve cutting the rod by hand and re-printing the small parts that the rods go in. Kossel, which everybody knows and loves, uses rods that are metal but ready to go. Their length was precise, they had bearings on their ends, etc. They were 20cm instead of 15cm, but I calculated and saw that I'd just lose some printing area - that was fine.
Months pass and the rods arrive. YEY! I try to fit them on the effector. NOOOO. They go all over the place, as the bearings are much smaller than the original printed part… Fuck…
So I got lock nuts, to hold them in place. That seemed to work, so let’s move on…
I was not able to find a CNC in Greece and for some reason I didn't want to use 3DHubs (maybe it had no support for CNC yet?). I thought that I'd be able to get around it by using a piece of wood and making some holes. NOPE.
3D printing teaches you that almost everything has to be precise as fuck. Uuugh… Some time passes and I finally find a CNC, right in my city, Heraklion. YEYA. I cut the base DXFs in a CNC! YEYA!!! Wait, that doesn’t seem right. The holes are way off. No…
But STILL, I thought that this piece of crap might be able to work, so let's move on…
I have everything hooked on RAMPS 1.4: motors seem to work correctly, endstops work, heatbed works, hotend works. Let's try to melt some PLA. YEY, IT WORKS!
I just play around for some time (less than an hour) and everything goes to hell. I thought that I didn't currently need the PROVIDED fan that sits on the hotend, as I was just testing… The nut that holds the PTFE (the push-fit nut?) had melted…
That was the last sign I needed to set this project aside, until I get a proper 3D printer as a kit, with auto bed leveling and a community to support it…
Till then, I’ll be crying in my shower, see ya!
Your next door humanoid with a grain of love for programming, infosec, electronics, martial arts/acrobatics, and binge watching series till you can't move. Excited under-engineer (== "that's boring, I'll automate it, but I'm too bored to automate it") and rational buyer, cause "I'll need it for my next project". Here you'll find failed or unfinished projects and maybe, JUST MAYBE, finished projects - but don't get your hopes up.
I'm Dimitris Zervas and this is my blog, hopefully. Let's hope that this project will fly. That's my hobby: projects about projects.
For questions/ideas/whatever, hit me up at dzervas at dzervas dot gr or at Exarchia, Athens, Greece (don’t hit me for real, it’s a saying).
PS: I often cry over code that does not work as I imagined or my 3D printer. You’ve been warned.
Have a nice trip!