1. 48
    1. 24

      The situation with CGI today reminds me of this quote from William Gibson’s Johnny Mnemonic:

      I put the shotgun in an Adidas bag and padded it out with four pairs of tennis socks, not my style at all, but that was what I was aiming for: If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy. So I decided to get as crude as possible. These days, though, you have to be pretty technical before you can even aspire to crudeness.

      I know there are a lot of legitimate situations where startup time makes CGI not a viable option, but damn, when it works, it is soooo nice how few moving parts there are. The Lua VM that I use for my own scripts has a startup time of roughly 15ms which is much less than a rounding error on today’s web, but I prefer to leave the bulk of the pages as static HTML and only use CGI for a few pages where dynamic content is actually required.

      I wrote a bit about using CGI for a survey I’ve been running the past few years: https://technomancy.us/196

      1. 13

        Somehow this comment reminded me of Richard Hipp :)

        https://www.mail-archive.com/[email protected]/msg02065.html (2011)

        https://news.ycombinator.com/item?id=3036124

        It is run off of inetd. For each inbound HTTP request, a new process is created which runs the program implemented by the C file shown above.

        This server takes over a quarter million requests per day, 10GB of traffic/day, and it does so using less than 3% of the CPU on a virtual machine that is a 1/20th slice of a real server.

        How much more efficient does that need to be? Sure, it won’t scale up to Google or Facebook loads, but it doesn’t need to.

        Funny how this was 13 years ago, and computers are way more powerful now than they were then.

        (e.g. the 128 core / 512 GB RAM server quoted in the intro … my point is that this needs only 1 Linux kernel to manage it – it doesn’t need 128 of them.)

        And damn how did I neglect to put this link in the blog post …


        I also like your Lisp example. The mixing of static and dynamic content is exactly why I use CGI / FastCGI … it’s just good engineering, and it was borne out by the Dreamhost outage. Nobody noticed that the site went down because the static parts stayed up. This architecture has better availability

        Serving static files shouldn’t be coupled to dynamic programs that change and are redeployed often. What makes software unreliable is things that change quickly, so you should separate the parts of your stack that change from the parts that don’t. I can change static files, or I can change CGI scripts, and they are separate.

        Hipp also uses precisely this technique for efficiency. The total load on the server is lower if you do the easy thing with the fast path (static files), and then the harder thing (dynamic content) with a program.

        Another comment … apparently I have been thinking about CGI for decades :) https://news.ycombinator.com/item?id=16194174

        1. 2

          Maybe I’m being overly skeptical, but a quarter million requests a day is about 3 req/sec. Probably at peak it’s higher. I get the point but I don’t think there is a solution that wouldn’t work efficiently for this load, even 13 years ago.

          1. 2

            The reminder is because both this thread and the linked Hacker News thread have comments implying otherwise:

            CGI is extremely wasteful for CPU and IO compared to persistent processes

            and https://news.ycombinator.com/item?id=40729671

            Again, the point of the blog post is that CGI itself is not slow. With the rise of Go and Rust, it’s actually more feasible to use CGI than it was with Python/Ruby.

            1. 1

              I’ve had trouble wrapping my head around the idea of CGI (more so FastCGI) in a compiled language when you can take things a step further and just write a proper web server in the same language. There’s a gap in my understanding somewhere here.

              1. 3

                It’s so you have the PHP “copy file” deployment model, which is also the “serverless” cloud model - https://lobste.rs/s/saqp6t/comments_on_scripting_cgi_fastcgi#c_gbv52c

                I don’t care about OS versions, upgrades, patches, SSL certs

                And because the process management of CGI and FastCGI (and to some extent systemd socket activation) makes shared hosting economical

                https://lobste.rs/s/kvqpan/what_is_self_hosted_what_is_stack#c_jpbyei


                In other words, if you have a 128 core/thread 512 GB RAM box, you can run say 10,000 apps from 2,000 users easily, with one kernel, and one web server (managing 2,000 certs)

                You don’t want 2,000 Linux kernels to be upgraded and maintained by 2,000 app owners (for most apps). You want one.

                Unix is a multi-user system.

                Basically, shared hosting is a proven and economical model (dozens of companies, millions of customers, millions of dollars in revenue) for using computing resources, for mapping apps to hardware.

                It’s exactly what the cloud did on a larger scale (AWS Lambda). But the cloud lacks open standards like CGI and FastCGI, which again have process management

    2. 8

      Does anyone have any knowledge or educated guesses about what broke FastCGI support on DreamHost, and particularly why it’s triggering a cgroup-related limit that’s even preventing SSH from working? I wouldn’t be surprised if the venerable suexec (long used by shared hosts) is clashing with something that systemd is now doing, or some other newer implementation of per-user limits based on cgroups. But I haven’t dug into this myself.

    3. 5

      Half a year ago there were two related discussions on Lobsters about CGI, etc. Some good pros and cons can be learnt from that. I’m linking them for reference and completeness:

    4. 3

      It’s kind of bizarre seeing a web host saying “php-fpm negates the need for fastcgi”. It’s kind of like saying “ssh negates the need for a shell”.

      Hint: the f in fpm is for “fastcgi”. So any host or server using php-fpm (which is generally the standard way to run php these days) is absolutely using fastcgi.

      1. 1

        Which part are you replying to? Who’s saying that?

        What I’m saying is that FastCGI now appears to be an implementation detail of PHP. It is limited to php-fpm.

        Prove me wrong by finding me a web host that will run Python/Go/Rust FastCGI scripts – where I just drop in a file and perhaps configure .htaccess (like PHP)!

        I looked for many days and I don’t think it exists anymore. I signed up for 2 new shared hosts (Mythic Beasts and OpalStack)

        Related comment - https://lobste.rs/s/kvqpan/what_is_self_hosted_what_is_stack#c_3ugitl


        The thing that runs Python FastCGI I believe is mod_fcgid – this is the thing that Dreamhost is unable to support

        https://httpd.apache.org/mod_fcgid/

        mod_fcgid is a high performance alternative to mod_cgi or mod_cgid, which starts a sufficient number of instances of the CGI program to handle concurrent requests, and these programs remain running to handle further incoming requests. It is favored by the PHP developers, for example, as a preferred alternative to running mod_php in-process, delivering very similar performance.

        I am actually not sure the second sentence is true. I would be very interested to know though!

        I kinda feel like the PHP developers went their own way and just work on php-fpm, but I am not 100% sure about the relationship between codebases / maintainers

        Also I don’t know why there are even two different Apache modules, and what exactly in php-fpm is specific to PHP. I guess it probably has some extra hooks into the PHP interpreter?

        FastCGI has both a process management part and a binary protocol part. mod_fcgid does both I think.

        1. 5

          Prove me wrong by finding me a web host that will run Python/Go/Rust FastCGI scripts – where I just drop in a file and perhaps configure .htaccess (like PHP)!

          Fun fact: Go has CGI and FastCGI in stdlib.

          https://pkg.go.dev/net/http/cgi https://pkg.go.dev/net/http/fcgi

          So technically speaking it should work. I even tried it briefly for fun, but either the hoster didn’t actually support it (it seemed like CGI equaled Perl for them) or I just didn’t manage to get it working.
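
          For reference, here’s a minimal sketch of the CGI side, assuming the host simply executes the binary out of cgi-bin (the handler contents are made up):

          package main

          import (
              "fmt"
              "net/http"
              "net/http/cgi"
          )

          func main() {
              // cgi.Serve reads the request from the CGI environment and stdin,
              // writes the response to stdout, and then the process exits --
              // one process per request, classic CGI.
              err := cgi.Serve(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  w.Header().Set("Content-Type", "text/plain")
                  fmt.Fprintf(w, "hello from %s\n", r.URL.Path)
              }))
              if err != nil {
                  panic(err)
              }
          }

          Build it, copy the binary into cgi-bin/, and that’s the whole deployment – whether a given shared host actually allows arbitrary executables there is another question.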

        2. 3

          Prove me wrong by finding me a web host that will run Python/Go/Rust FastCGI scripts – where I just drop in a file and perhaps configure .htaccess (like PHP)!

          Here you go.

          I hadn’t realised until I read their FAQ just now (in spite of knowing about them for over 20 years) that they run FreeBSD!

        3. 2

          The host you linked to says that: https://www.mythic-beasts.com/support/topics/fastcgi

          Our hosting accounts now use PHP-FPM for PHP scripts, which negates the need for FastCGI

          What they mean is that their hosting is setup to use php-fpm and thus users don’t need to setup fastcgi.

          As usual “hosting” companies are a weird mix of technologists and “how can we make this simple for people who don’t know”.

          Case in point: the sheer number of hosting companies that use the term “bandwidth” when they mean “transfer allowance”.

          As to your last question: personally I use/recommend https://httpd.apache.org/docs/2.4/mod/mod_proxy_fcgi.html which essentially allows you to setup fastcgi communication to php-fpm using the standard proxy config.

        4. 1

          Prove me wrong by finding me a web host that will run Python/Go/Rust FastCGI scripts – where I just drop in a file and perhaps configure .htaccess (like PHP)!

          I wouldn’t expect this for Go or Rust because these languages don’t benefit from FastCGI. Python would benefit, but since container orchestration exists, there isn’t much reason for anyone to build a platform for Python/FastCGI because they can build one for containers in general.

          Plausibly FastCGI hosts exist for PHP because PHP came into prominence at a time when FastCGI was the new hotness and container orchestration was not yet mainstream, rather than because of anything innate to the languages.

          1. 4

            Containers are a weird non sequitur.

            fcgi used to be reasonably common in the Python world; it was superseded by direct WSGI support some 20 years ago.

            1. 1

              Container orchestration isn’t a non sequitur; perhaps you misunderstood my argument. We wouldn’t expect to find many Python FastCGI web hosts precisely because container orchestration is a superior abstraction. Why would a web host target Python/FastCGI when they can target all languages via container orchestration for a small fraction of the effort? Thus the lack of Python FastCGI web hosts doesn’t indicate much of anything about the innate abilities of different languages with respect to FastCGI.

        5. [Comment removed by author]

        6. [Comment removed by author]

    5. 3

      I found a high quality host, Mythic Beasts, but they discourage FastCGI in their docs.

      As that doc page mentions, it’s only discouraged in so far as it’s difficult to get working: “You can use FastCGI for hosting dynamic content with Mythic Beasts shell accounts, but we recommend that you don’t unless it’s absolutely necessary, since doing so requires additional setup and can make debugging problems harder”.

      I’ve used their CGI (rather than FCGI) before, but have since migrated to a close-in-spirit systemd/inetd-style/DynamicUser/Go setup, which was easier to debug and easier to grab logs for.

      “Serverless” and “functions as a service” resemble CGI and FastCGI. But they’re tied to proprietary clouds. They’re not interoperable and composable like Unix is.

      Is WASM/WASI the new interoperable and composable alternative to CGI? Do any shared hosts offer WASI serving?

      I’m trying out what already exists on the modern shared hosting services I found: Mythic Beasts - part of the Unix tradition in the UK!

      They’re a great host:

      1. 1

        Mythic Beasts support requests are handled by the engineers.

    6. 2

      I also really like cgi-bin. I would love a modern equivalent that supports websockets, etc.

      I found Cloudflare Pages to be a decent if unsatisfying approximation. It’s only unsatisfying because I can’t reproduce it locally.

      Wrote a blog post on cf pages analogy https://taras.glek.net/post/cloudflare-pages-kind-of-amazing/

      Spent today thinking about how to reproduce something like cgi-bin with Caddy and Deno.

      1. 1

        Oh very interesting, yes I was saying that if Github Pages supported cgi-bin then I might recommend that … it’s a data-oriented architecture.

        Looks like Cloudflare does better with TypeScript and their “v8 at the edge” products ..

        I still would like it to be Unix-y and support Python, PHP, shell, Rust, Go, or whatever. I think the issue is that they are architected around a v8 sandbox and not an OS sandbox.

        I don’t particularly care for the big opaque CDN in front of everything – because my small slice of a Dreamhost bare metal box has served 8 years of Hacker News spikes with ease – but yeah “sync file system with git or rsync” is definitely the interface that makes sense.

        1. 3

          I’m on DreamHost, moved from a shared host to a VPS, cgi-bin seems to work on both

    7. 2

      HTTP is the new FastCGI. Do shared hosts not support running persistent processes? If not, they should. File-based script hosting is dead; it’s not general enough, and every app wants full control of when things happen.

      1. 2

        Do shared hosts not support running persistent processes? If not, they should.

        Given how many users are typically given accounts on a single host, this would be extremely wasteful for memory compared to CGI. Shared hosting (at least back in the day) was dramatically cheaper specifically because you could fit a lot more customers on a single machine.

        it’s not general enough

        It’s not general enough to replace a VPS, but it’s general enough for me. Just because it doesn’t solve every problem out there doesn’t mean it’s dead.

        1. 4

          Thanks to systemd socket activation it is actually trivial to have an HTTP server that starts on demand and shuts down after some period of inactivity, effectively not wasting any RAM “just to sit there”. Sorry for the Twitter link, but if someone wants to follow along, I posted a description recently: https://x.com/dpc_pw/status/1797540651229438147
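
          To make it concrete, here’s a rough sketch of the daemon side in Go, assuming systemd hands the process exactly one stream socket (it arrives as fd 3) and picking an arbitrary idle timeout:

          package main

          import (
              "fmt"
              "net"
              "net/http"
              "os"
              "time"
          )

          func main() {
              // Under socket activation, systemd passes listening sockets
              // starting at fd 3 and sets LISTEN_FDS (and LISTEN_PID).
              if os.Getenv("LISTEN_FDS") != "1" {
                  fmt.Fprintln(os.Stderr, "expected exactly one socket from systemd")
                  os.Exit(1)
              }
              ln, err := net.FileListener(os.NewFile(3, "systemd-socket"))
              if err != nil {
                  panic(err)
              }

              // Exit after 5 minutes without a request; the .socket unit keeps
              // listening, so the next connection simply starts us again.
              idle := time.AfterFunc(5*time.Minute, func() { os.Exit(0) })

              http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                  idle.Reset(5 * time.Minute)
                  fmt.Fprintln(w, "hello from an on-demand process")
              })
              panic(http.Serve(ln, nil))
          }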

          1. 2

            The ability to do this was recently added to SvelteKit: https://kit.svelte.dev/docs/adapter-node#socket-activation

            This opens the way for hosting many SvelteKit apps on a single host with automatic shutdown and near instant startup.

            The author of the above feature has a project that packages up SvelteKit apps, a Node.js executable and glibc++ (The total size of the final image is approximately 37 MB). This is deployed as a systemd ‘Portable Service’: https://github.com/karimfromjordan/sveltekit-systemd

        2. 1

          Given how many users are typically given accounts on a single host, this would be extremely wasteful for memory compared to CGI

          Two things:

          1. this article is about FastCGI which has similar memory requirements to an embedded HTTP server.
          2. CGI is extremely wasteful for CPU and IO compared to persistent processes.
          1. 3

            Well, but keep in mind that there are two “modes” of FastCGI:

            • Run a permanent process at a known address.
            • The webserver spawns semi-persistent processes at need, passes them an open socket, uses them as long as they seem busy and interested, then kills them at whim.

            Option 1 is the most well-known, because it makes sense for php-fpm. But IMO option 2 is the more interesting one, because it gets you the performance benefits of a persistent process while preserving a shared host’s ability to keep prices low by over-subscribing a server where many of the hosted sites and apps have very low-traffic.
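
            For what it’s worth, Go’s stdlib fcgi package can do option 2 directly: with a nil listener it serves FastCGI connections over stdin, i.e. over whatever socket the web server hands the spawned process. A rough sketch (the handler is made up, untested on an actual shared host):

            package main

            import (
                "fmt"
                "net/http"
                "net/http/fcgi"
            )

            func main() {
                h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                    fmt.Fprintf(w, "dynamic bit for %s\n", r.URL.Path)
                })
                // A nil listener means "accept FastCGI connections on stdin",
                // which is exactly the spawn-on-demand mode: the web server
                // starts the process, hands it a socket, and kills it later.
                if err := fcgi.Serve(nil, h); err != nil {
                    panic(err)
                }
            }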

            Personally, I want option 2 but for HTTP — call it a “reverse-proxy scheduler” model. Use config to mount a small http-speaking app at a particular path in your site’s URL hierarchy, and let the main web server spawn/kill it as it sees fit. This doesn’t exist, but IMO it should; and glancing around, it doesn’t seem like I’m the only one who wants this.

            1. 2

              The webserver spawns semi-persistent processes at need, passes them an open socket, uses them as long as they seem busy and interested, then kills them at whim.

              Isn’t that how Heroku worked? They would run your HTTP-speaking application in a container, which AFAIK would be killed off if idle for a while. Also I think a lot of these modern cloud offerings like AWS and Azure have something that works in a similar way. I think the main problem with these things is that it’s a bit heavier-weight and more expensive. Sounds very much like your “reverse-proxy scheduler”

            2. 1

              Yeah, as I said before here I have thought FastCGI should have been HTTP for over 25 years.

            3. 1

              Have you looked at Wagi? https://github.com/deislabs/wagi

              wasm aims to be the spiritual successor to some of the cgi functionalities mentioned in this thread.

      2. 2

        Yeah so I was going to “invent” AGI (Andy’s Gateway Interface) – which is just any HTTP server with FastCGI-style process management …

        AGI_LISTEN_PORT=9090  # server sets this, and the binary listens on that port
                              # (could also be a Unix domain socket path)
        AGI_CONCURRENCY=2  # for threads or goroutines perhaps
        

        The key points are

        • PHP-like deployment. I drop a file in and it runs …
        • Your process can be killed, made dormant, and restarted at any time. This is necessary for shared hosting to be economical. 99% of apps have like 100 hits or less a day. You don’t want to take up server memory keeping them alive. This is precisely the “cold start” issue in the cloud, which has a rationale

        This does not exist currently, but there’s no reason it shouldn’t!
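
        A minimal sketch of what an AGI binary might look like in Go – AGI_LISTEN_PORT is just the hypothetical variable from above, nothing here is an existing standard:

        package main

        import (
            "fmt"
            "net/http"
            "os"
        )

        func main() {
            // The host's web server would set AGI_LISTEN_PORT before exec'ing
            // the binary (hypothetical -- this protocol doesn't exist yet).
            port := os.Getenv("AGI_LISTEN_PORT")
            if port == "" {
                port = "9090"
            }
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintf(w, "hello from an AGI app on port %s\n", port)
            })
            // The host is free to SIGTERM this process when it's idle and
            // re-exec it on the next request -- that's the whole point.
            panic(http.ListenAndServe(":"+port, nil))
        }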


        Most shared hosts don’t support persistent processes. Dreamhost doesn’t.

        Mythic Beasts does, but somewhat surprisingly it exposes systemd config files in your home dir:

        https://www.mythic-beasts.com/support/hosting/python (I haven’t set this up yet, but I will)

        OpalStack has some pretty weird advice for Go servers: https://community.opalstack.com/d/695-golang-in-opalstack

        Like basically using cron jobs to restart them … this seems like “non-support”.


        My guess is that it creates more of a server management headache. But it is something that the cloud providers have solved, and it gives a better user experience.

        1. 3

          Mythic Beasts does, but somewhat surprisingly it exposes systemd config files in your home dir:

          Systemd actually provides most of a solution here. Use socket activation to start your daemon when a connection comes in and then your daemon can exit itself after some idle time. If another tcp connection comes in, Systemd will reuse the connection to the daemon while it still exists and start the daemon again if it has exited.
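
          The exposed config presumably boils down to a pair of user units along these lines (unit names, port, and paths are made up, not Mythic Beasts’ actual setup):

          # ~/.config/systemd/user/myapp.socket
          [Socket]
          ListenStream=127.0.0.1:8080

          [Install]
          WantedBy=sockets.target

          # ~/.config/systemd/user/myapp.service
          [Service]
          ExecStart=%h/bin/myapp

          Enable the socket (systemctl --user enable --now myapp.socket) and systemd only starts the service when a connection actually arrives.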

          1. 1

            Yeah I’m definitely going to kick the tires on this … Last week, I set up uwsgi on OpalStack, and I also wanted to try the recommended Python thing on Mythic Beasts

        2. 1

          Don’t forget about scgi too.

    8. 2

      One question I had is what makes running CGI on a VPS a bad option? I can think of two possibilities, but there probably are others:

      • capacity: a shared host can better accommodate traffic spikes, assuming spikes are roughly independent across users
      • maintenance/security: you don’t want to self-host
      1. 4

        It’s mainly the maintenance/security – I want the PHP-like deployment model.

        I drop the executable on the server, and it runs for YEARS – 7 years in this case – with zero maintenance.

        It’s the cloud model, but it’s not a cloud – it’s actually more reliable IME.

        A Linux box, with Python, connected to a network, is a total commodity, and somebody else can do it better than I can.

        i.e. I don’t care about the web server version, the OS version, or SSL certificates

        I do have a Linode VPS, but the uptime of my shared hosting account is a lot higher than the uptime of my VPS …

        So basically I use Dreamhost for things I actually want to be up, like oilshell.org !

      2. 3

        Makes sense, but for anyone wanting to go for extremely low maintenance I want to highly recommend OpenBSD.

        The reason is that OpenBSD in the base install (so NO packages installed) comes with an HTTP server, a way to get a Let’s Encrypt certificate with it, CGI, and OpenSSH (obviously), which provides you with SFTP.

        The maintenance is:

        • syspatch to update (and it will send emails to whatever alias you set for root)
        • sysupgrade for single-command upgrades (caveat: There MIGHT be occasional additional stuff, such as removing old configs, but often enough it doesn’t really matter and you get commands to copy & paste)

        This is over-simplified because of things like backups, but backups are something it’s way too common to think you don’t need. That is essentially never true, not even for the most expensive cloud services. A big company lost all their data due to a mistake by Google. A VPS might provide a backup solution.

        It even comes with tmux, etc. without any package, and having the opportunity to use an OS without packages is great. However, you may of course install packages, which adds pkg_add -u to the maintenance process; that gives you security updates, unless you upgrade the OS, at which point it will give you other updates as well.

        I am pragmatic enough to know when I’d not use OpenBSD, but when I know I can get away with mostly the default install, it brings so much peace of mind: the maintenance goes down significantly, and getting mails that inform you about security updates (and, on a dedicated server, about dying hardware or RAID mirrors) is just so nice.

    9. 2

      They’re not interoperable and composable like Unix is. I would like something more modular and Unix-y

      On a similar topic, does anyone have a good CI solution for fans of “modular/composable/Unix-y” stuff?

      Maybe… when a commit is pushed to your repo, trigger a CGI service which performs a git pull, builds, and triggers a notification if the build fails?

      Relatedly, is there any data/spec-oriented alternative to Github Actions or Buildkite where I can include a build file in my repo, and a builder CGI will know how to build that? make test? nix-build ci.nix?

      1. 1

        Doesn’t git have a way to handle on-push events on the remote (a post-receive hook)? It takes a bit of manual setup, but you could put your script in there.

        Alternatively, just run a cron job often enough to pick up your changes within your tolerance. If your build process doesn’t take a lot of resources when there are no changes, you could just run it every few minutes.

      2. 1

        I want to build this :)

        (related blog post linked in the appendix – a CI service is surprisingly similar to a distributed OS – it manages code/data/users across remote hardware)

        Oils has a dialect for it (which needs work) – Hay Ain’t YAML - https://lobste.rs/s/phqsxk/hay_ain_t_yaml_custom_languages_for_unix

        So basically no YAML CI configs.


        It would be based on our very heterogeneous and very thorough CI which has been running for several years on both sourcehut and Github Actions: http://op.oils.pub/

        I know there are many other “self-hosted CI solutions”. I’d be interested in feedback from people who use them.

        I guess the main difference is that it’s supposed to be modular/Unix-y – Unix composes in a language-oriented fashion.

        I don’t think YAML is a good enough language for reliable composition. It seems to be mostly a cut and paste language, where the cloud provider implements algorithms behind the scenes, and you can choose some of those “presets”.

        And you have to guess what the presets do. You have to fit your problem into that framework.

        It’s not really programmable, or general. My opinion is that a CI is a distributed shell script, and shell is completely general computation.

        There’s a bunch of other work in front of this, but feel free to join https://oilshell.zulipchat.com/ if interested!

    10. 2

      The author says CGI has been dead since 2015, but I say today may be resurrection day: look at that single-person social web (ActivityPub) server https://seppo.social being built as a CGI in OCaml.

      Thanks for the definition of the term, it’s wonderful.

    11. 1

      I started a long search for a new shared host, with shell access.

      The >= level 9 Hetzner webhosting plans have this: https://www.hetzner.com/webhosting/

      I ran my website on there for years as a cgi script without problems, it even survived being on the hacker news frontpage for a couple of hours without any issues.

      It was a Janet (a Lisp-like language written in C) app, with fast startup times, so I didn’t feel any “but it starts a whole new process per request” pain :)

      1. 3

        I should have emphasized that I really want Python support with a persistent process, like FastCGI. (ssh access seems pretty standard these days)

        If you look under Hetzner’s “developer features”, they have Python via CGI, but WSGI is an explicit NO. (This is super unusual, because I’ve found that shared hosts tend to over-advertise support, not explicitly deny supporting things. But in this case I appreciate it)


        As mentioned, the two hosts I signed up to find Python support that’s more than CGI [1] are

        • Mythic Beasts - https://www.mythic-beasts.com/support/hosting/python - seems very good so far, but for some reason the latency from US East Coast to their European location is surprisingly large. It seems worse than transatlantic latency. Like 1+ seconds to SSH.

        • OpalStack has not just Python but Rails and node.js. A unique modern shared host - https://opalstack.com/

          • I’m using their uwsgi support right now, but I don’t like uwsgi for the reasons stated in the post. It’s not Unix-y since the uwsgi server embeds a specific version of a Python interpreter.

        [1] the reason I don’t like CGI is because the Python interpreter itself starts slowly, which is a point I make in the post. Some pages on oilshell.org now have 50 ms of extra latency because of this. Of course the web is so slow that nobody complains about 50 ms anymore, but I care :-)

        1. 6

          I use NearlyFreeSpeech.Net for shared hosting, which added support for persistent processes about 10 years ago (though I have not exercised that support heavily). It supports FastCGI and there’s an example of setting up WSGI.

          1. 2

            Yeah I’ve used them before, and decided not to continue, but I should give it another shot.

            My issues were

            • I use Linux for everything, and they use FreeBSD with jails. (less of an issue now, since I’m more open to some diversity)
            • Their pricing is surprisingly cloud-like – pay as you go. It is indeed “nearly free” for tiny sites, but $1 per GiB-month plus other charges may be the most expensive option for our site, or at least it comes out to more than shared hosts. I’d rather pay monthly for such small amounts – the fact that it’s open-ended makes me “think”.
            • Slow network disk – this was the main thing

            But I noticed that last year they did some upgrades:

            https://blog.nearlyfreespeech.net/2023/08/22/bigger-better-faster-more/

            It’s something of a sore point that our file storage performance has always been a bit lackluster. That’s largely because of the tremendous overhead in ensuring your data is incredibly safe. Switching from SATA SSDs to NVMe will give a healthy boost in that area. The drives are much faster, and the electrical path between a site and its data will be shorter and faster. And it’ll give all those Epyc PCIe lanes something to do.

        2. 2

          Heh, I have two comments now so I guess I’ll put them both here…

          My link log https://dotat.at/:/ is a cgi-bin script binary, currently written in Rust. When you hit that URL the program is started from scratch, loads 20,000+ links and spits out the subset specified in the request.

          It used to be a fastcgi perl script. I think the Rust version is faster even though it does more. It became fastcgi when the original perl cgi got too slow 🐌

          My web site is hosted on chiark which recently moved to Mythic Beasts in their Cambridge data centre. I think most of the Mythic Beasts VPS kit is in London (eg my primary DNS server) but the latency difference is only a millisecond or so.

          Dunno why your ssh performance would be that bad. It’s a very chatty protocol, tho, and those 50ms round trips add up fast. Maybe try ts ssh -vvv … to see what’s taking so long?

          1. 1

            Yeah somehow I had no awareness of Mythic Beasts, maybe because I’m in the US, but they definitely seem to be the most Unix-y host! Very unique

            And huh how did I not know about ts from moreutils? (I recently started using isutf8)

            ssh -v -v -v mb.oils.pub 'echo hi' 2>&1 | ts '%.T' > slow-ssh.txt
            

            Hopefully this doesn’t leak any details :-/

            http://www.oilshell.org/share/slow-ssh.txt

            First line is

            11:17:17.588469 OpenSSH_9.2p1 Debian-2+deb12u2, OpenSSL 3.0.11 19 Sep 2023

            11:17:19.202751 Bytes per second: sent 13052.1, received 12312.8
            11:17:19.202809 debug1: Exit status 0

            So yeah it’s above 1.5 seconds, and it’s very repeatable …

            $ time ssh  mb.oils.pub 'echo hi' 
            hi
            
            real    0m1.615s
            user    0m0.114s
            sys     0m0.005s
            
            $ time ssh  mb.oils.pub 'echo hi' 
            hi
            
            real    0m1.582s
            user    0m0.109s
            sys     0m0.009s
            

            This is not a dealbreaker – I am keeping the account – but it does feel like a little more friction for throwaway stuff

            (also my ssh client is Debian bookworm, totally unmodified except for trivial ~/.ssh/config)

        3. 1

          Mythic Beasts - https://www.mythic-beasts.com/support/hosting/python - seems very good so far, but for some reason the latency from US East Coast to their European location is surprisingly large. It seems worse than transatlantic latency. Like 1+ seconds to SSH.

          Mythic Beasts have infrastructure on the US West Coast, where I rent a VPS. But from pinging their list of shared hosts, it looks like they’re all in the EU.

          They might be open to running a shared host in their US West Coast presence.

          1. 1

            Oh cool! I would love to use them in Fremont CA … (I think that is where my Linode has always been)

    12. 1

      @andyc You may want to take a look at Pair Networks as a possible shared hosting provider. This knowledge base article suggests that they still support FastCGI in a generic way, though the article is about how to set up PHP. And last time I did anything with them, which was as recently as a year or two ago, they were running FreeBSD, so hopefully they’re not affected by the Linux userspace churn that seems to have bitten DreamHost.

      Edit to add: Damn. New accounts are on Ubuntu servers. My last exposure to Pair Networks was when helping out an organization with an old account. Still, I guess it’s possible they have generic working FastCGI even on Ubuntu.

      1. 1

        Yeah unfortunately I feel like Pair is in the exact same spot / same era as Dreamhost.

        They probably had decent FastCGI support at one point, but so few customers are using it that it “rotted” and they probably will not be responsive when things go wrong.

        I think demand was always low, since Rails and Django never supported FastCGI well. It was always an afterthought to them.

        That page sorta looks like the Dreamhost FastCGI pages – they mention FastCGI, but not in much detail. And it feels a bit PHP-specific.

        I got past the first level of support in Dreamhost, and it’s clear nobody there knows anything about FastCGI. They kept saying that I should start fewer FastCGI processes. But it’s mod_fcgid on THEIR end that does that, not me! I just drop the executable in there, and the server starts the processes.

        I think it broke with Dreamhost’s latest Ubuntu upgrade, and so few people are using non-PHP FastCGI that they are not going to fix it. That was pretty clear from the support responses.

    13. 1

      It feels like WASM could provide an attractive solution to this problem, if you’re willing to accept that you have to compile the WASM yourself.

      I’m not very well-versed in WASM but I guess you should be able to do something like this:

      • The host publishes a set of WebAssembly Interface Types that describes the “API” you have access to. Most likely this would be high-level functionality a web app would need access to, e.g. a database connection API, an API for making HTTP requests, etc.
      • The user builds a WASM binary that targets the given WITs
      • The user uploads the binary to the host and it’s reloaded instantly

      This avoids a lot of the problems with CGI, e.g.

      • Startup time is negligible because it’s WASM
      • You don’t need any process isolation as the WASM runtime doesn’t expose any sensitive interfaces by default

      The one drawback compared to CGI is that you have to compile your application, but that also means you aren’t dependent on what runtimes the host has available.

      (I guess this is kind of the idea behind https://wasmcloud.com/ ?)

    14. 1

      Related question: Has anyone tried to optimize process spawning on Linux to use in CGI? (like, using clone3 directly instead of the libc functions for more control, etc).

      1. 2

        Traditionally CGI programs have been scripts, so the startup time is dominated by interpreter overheads. If the CGI is a binary its startup time is dominated by dynamic linking (ELF interpreter overheads, heh) which you can reduce by static linking. If that isn’t fast enough it’s best to daemonize.