1. 92
  1.  

    1. 46

      The Google Domains thing is so confusing to me. The negative impact on how much people trust Google Cloud (even though Google Domains was, presumably, in another part of the company) surely must have dwarfed the amount of money they could save not continuing to run that service.

      Some day I’d love to hear the inside story of who was arguing for what and how the internal company politics played out. I bet there were some very loud voices arguing against what happened.

      1. 14

        In my opinion as someone who no longer works there but was close to at least one big shutdown:

        Google’s entire DNA and incentive structure is really about “10x new multi-billion user innovations”.

        My best guess is that a new exec came in and saw an opportunity to get some quick cash for their next promo project.

        I was nowhere near close enough to get the real inside story. Every coworker was stunned; I think it had recently become profitable or something. Which is hilarious at this point: “Nobody could have seen this product cancellation coming,” say employees of the company famous for senseless product cancellations.

      2. 7

        This is what’s so shocking to me. The ROI / reputation hit avoidance seems so obvious. Like, how many engineers does it take to keep Google Domains up?

        Let’s assume it cost 3 million dollars per year to keep up. At companies I’ve worked for, that’s less than our yearly AWS spend. If GCP comes up as an alternative to lower that spend, I will absolutely speak out against it and explain why I think it’s a terrible idea. And I’m obviously not the only one.

        1. 5

          Like, how many engineers

          probably not an engineering issue tbh

          i dont know their internal reasons, but domain registrars are one of those very human problems. the staffing overhead for dealing with support and fraud/takedown/compliance/etc was probably not insubstantial

          1. 1

            Support people tend to cost way less. Regardless, I budgeted 3 million dollars - spend it how you like. Do you think you’d need more? Because IMO this is easily going to cost them > 3 million.

            1. 5

              I didn’t say it wasn’t an engineering problem - it is that as well. FWIW, I don’t disagree with the core premise that they are making money on this product - they were probably very revenue positive.

              For a registrar operating at Google’s scale? Dealing with international laws and legal challenges? I honestly think it’d be substantially more than $3m.

              You’d need product and infra engineering, a few on-call engineers (24 hours), a communications team (to deal with downtime without impacting the Google brand), someone who’s familiar with ICANN and the various registrar- and DNS-related compliance/ecosystem concerns, a few support engineers, first-tier support (24 hours), not to mention a pretty decently staffed and competent legal team - especially one used to dealing with that niche. You’re going to be dealing with a LOT of subpoenas and takedown requests from many different jurisdictions.

              And that doesn’t include the attention and time required from senior/executive management overseeing the org that Google Domains is under.

              FWIW, my back-of-the-envelope math comes to about $7 million a year in staff costs, not including redundant, HA infra. I think I’m being conservative with that, as well. If you told me the total staffing expenditure was north of $20m I wouldn’t even blink.

              My conservative estimate is that about 20-30 employees run the whole thing. A competing company (such as Namecheap) with about twice the number of domains (and other services, to be honest) has about 1,500 employees. A much smaller company (about half the domains of Google Domains) had about 50 employees.

              It’s clearly a revenue positive product, but may not be worth it to them. Which is the usual annoying Google attitude. But also, the reputation hit of such a complex and important product might not have been worth it either - I wouldn’t be surprised if the amount of risk involved was too high for them.

              1. 2

                I realize that larger companies have more to do but I feel like that’s a very bloated team you’re describing - like, why would this have to be its own team, for starters? It could be rolled up into a larger team, sharing the support staff and much of the product/infra engineers. Why would you need a dedicated legal team for this? No company operates that way afaik - “normal” lawyers at Google are going to do just fine. Hell, they could have even just stopped selling it entirely - radically cuts down on the overhead around sales, execs, etc.

                And then on top of that they earn money, and were even profiting. So idk I just can’t see the cost/benefit here.

                1. 2

                  It’s not bloated at all. It’s lean. I think you are strongly underestimating the amount of people required to run something this size and with these requirements.

                  It could be rolled up into a larger team, sharing the support staff and much of the product/infra engineers

                  Strongly doubt it, or if it was, they still had at least 20 dedicated staff. I don’t know their internal structure. Again, this is a team that serves hundreds of thousands if not millions of customers in dozens of countries.

                  It takes a LOT of DEDICATED people to run a 24/7 high-availability service that operates in many countries with millions of customers. This is a service that, if it goes down, will make international news. They could literally break the internet for a day.

                  I considered all of that; that’s why I suggested 20 instead of 50.

                  As for support staff, you may not need many first tier (you will need a fair amount of second tier), but you’re going to need dedicated technical support.

                  Why would you need a dedicated legal team for this? “normal” lawyers at Google are going to do just fine

                  It’s a bit about skillset, and a bit about workload and volume.

                  For the legal team, for the volume of requests you get as a registrar, you’re going to need/want dedicated people - especially those with an understanding of international law. At Google’s scale, they probably have someone who’s a liaison for ICANN and other registrars - to handle issues and complexities with transfers and regulatory concerns. Some domains can’t be transferred to some companies, for example.

                  They probably dealt with dozens of law enforcement requests per day.

                  On top of that, considering how domains intersect with international politics, if you want to operate in all those countries that google did, you’re going to need at least two or three people dedicated just to compliance and regulatory concerns.

                  You’re going to need multiple on-call engineers JUST for domains - they had about 100 million under their watch. I don’t know if you’ve ever staffed a truly 24/7 on-call rotation, but if you’re going to be responsible for any major customers’ domains, you’re going to need more people than you think (to account for time zones, illness, burnout, etc.).

                  I said before, and I’ll say it again: this was clearly revenue positive for them. If I had to guess, it’s a headache and/or risk thing, or just the normal google bs.

                  1. 2

                    I guess I just disagree that it would take that much but that’s okay. It doesn’t really matter, it was still stupid of them to shut it down.

    2. 39

      They’re primarily an ad-driven company giving away services for free. But the paid for services are also treated as if they are given away for free.

      1. 25

        the paid for services are also treated as if they are given away for free.

        I can’t come up with any other explanation either.

        1. 20

          I haven’t worked at Google in a long time, so I don’t have a good insight as to what happened in recent years, but back around 2015ish there was a collision of several factors that lead to this kind of thing happening:

          1. Teams had a lot of initiative and encouragement to solve internal problems or create new products from the bottom up, and would start a fresh project or acquire a company to do so. This was much cooler to do than maintaining/upgrading an existing project and looked better for promo.
          2. Being localized initiatives, the teams solved their specific problem instead of company-wide problems. It was also much easier to work within your org rather than reach across to other orgs to coordinate more general solutions. Top-level directives would also favor this kind of “solve this specific problem quickly” mentality instead of broader consensus building. This led to 15 competing standards for everything.
          3. At some point, the “more wood behind fewer arrows” garbage collector would come and sunset products that weren’t deemed to be “the winner,” usually from a long-term view.
          4. Engineers became acclimatized to constant forced rewrites where the old working thing was deprecated, and the new thing wasn’t available yet (or didn’t support your use case yet).
          5. This mentality unfortunately spilled out from the internal-facing engineering culture to the external-facing product.

          Obviously this is a very simplified view, but certainly reflected some of the experiences I had.

          1. 8

            AWS has plenty of (2) as well, and they don’t clean it up, which is nice for backward compatibility, but results in confusion when you’re starting something new and are faced with multiple solution paths that look equally valid (but probably aren’t).

            As a random example, CodeBuild source triggers do almost exactly the same thing as CodePipeline source triggers, but the details are gratuitously different (e.g., both have file path filters, but one uses globs, the other regular expressions). Somebody at Google would have figured out there should be only one thing.

            Also check out their repeated, often short-lived, attempts to make a high-level application deployment service (Elastic Beanstalk, OpsWorks, CodeCatalyst, App Runner…). Every few years somebody at AWS notices that, to new users, deploying an app with the raw services is like assembling the Lego Millennium Falcon with no instructions, and develops something new to “hide the complexity” by introducing a new set of concepts incompatible with the last attempt. And these things do get deprecated and shut down.

      1. 1

        Yegge’s takes are so great. I wish he’d write more of them.

        1. 3

          Sometimes the trick to high quality is to avoid high volume.

    3. 56

      This problem extends way beyond Google Cloud. It’s hard to recommend any Google product given the rate at which they sunset them.

      https://killedbygoogle.com/

      The arrogance involved is astounding too. I remember them justifying turning off their RSS feed for some or other product because they’d just killed Reader o_O

      1. 16

        Can’t agree more. I used to use Google Podcasts as well as Google Play Music heavily. I had to write my own Android app to replace that.

        However, for paid products like Google Cloud, I expected the treatment to be nicer.

        1. 9

          Personal experience: it really isn’t.

        2. 4

          It’s not. The breaking changes we were hit by in GCP have been just as abrupt and difficult as with their consumer products.

      2. 22

        They’ve just started the clock on fucking over anybody that uses the Google Photos API to access their own data: https://developers.google.com/photos/support/updates

        Can’t wait for them to take access to the Gmail API away as well. Google is trash.

      3. 5

        I’ve been a google workspace admin for 10 years now, and it’s been a pretty stable product. I wonder how long it’s got until they ruin it…

      4. 4

        I don’t think this is solid reasoning. Google’s enterprise products are treated very differently from consumer products. I understand the destruction of goodwill in one area spills over into another area. But the Cloud sunsetting policy is very “enterprise-y”. The lead times are at least a year, and I only recall them sunsetting features or small products, with defined migration paths (ie: Pub/Sub Lite -> Managed Service for Kafka).

        1. 12

          In the multiple times I’ve been hit by GCP breaking changes the upgrade path has been a lot of work, or to a significantly more expensive product.

        2. 9

          Just because there’s an “upgrade path” doesn’t mean there won’t be a lot of work on your end to do it.

          1. 4

            Indeed. Changing domain registrar isn’t hard but it is still work that one has to do carefully.

    4. 14

      Their pricing doesn’t seem that dependable either. In 2025 their alerts will go from costing $0 to $1.5 per policy (source).

      A friend works at a company that has to rewrite how their alerts are set up, or else this change would increase their monthly GCP bill by thousands of dollars. If GCP has forced you into this kind of rewrite position, why not rewrite onto a more reliable provider? I’m surprised at how okay Google seems to be with giving customers opportunities to churn like this.

      1. 4

        Yeah, I got that email as well a few months back. The impact is not as drastic for me but you can never trust Google Cloud to maintain any of its pricing structures.

    5. 11

      They really screwed themselves with the domain one. There were some mixed messages at the time, too, about whether it would affect Google Cloud Domains users, which I imagine reflected real confusion within Google’s management.

    6. 9

      It is hard to recommend [anything but AWS]

      sad but true for large cloud providers

      more niche cloud providers like cloudflare and fly.io and the like are making inroads, though

      1. 10

        AWS is reliable but too coarse compared to Google Cloud Platform. The engineering productivity cost is much higher with AWS than with GCP.

        1. 22

          With the understanding that I might just have been lucky, and / or working for a company seen as a more valuable customer … in a previous role I was a director of engineering for a (by Australian standards) large tech company. We had two teams run into difficulties, one with AWS, and another with GCP.

          • One team reached out to Amazon, and our AWS TAM wound up giving us some great advice around how we’d be better off using different AWS tech, and put us in touch with another AWS person who provided some deeper technical guidance. We wound up saving a tonne of money on our AWS bill with the new implementation, and delighting our internal customers.

          • The other team reached out to Google, and Google wouldn’t let a human speak to us until we could prove our advertising spend, and as it wasn’t high enough, we never got a straight answer to our question.

          Working with Amazon feels like working with a company that actually values its customers; working with Google feels like working with a company that would prefer to deal with its customers solely through APIs and black-box policies.

          1. 5

            working with Google feels like working with a company that would prefer to deal with its customers solely through APIs and black-box policies.

            Indeed. If you think you will need a human to get guidance, then GCP is an inferior option by a huge margin.

            1. 5

              Where in my experience, guidance can include “why is your product behaving in this manner?”.

        2. [Comment removed by author]

        3. 1

          What does “too coarse” mean?

          I’m not sure I believe you that the “engineering productivity cost” is higher with AWS: How exactly did you measure that?

          1. 5

            How exactly did you measure that?

            I have a Docker image (Python/Go/static HTML) that I want to deploy as a web server or a web service.

            1. How long will it take to set up a web service?
            2. How much extra work for an SSL cert for “.”?
            3. How much extra work to configure env vars?
            4. How much extra work to store API keys/secrets in the secret manager?

            GCP is far superior to AWS on this measure.
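
            To make that concrete, here is a rough sketch of the Cloud Run side (the project, service, and secret names below are placeholders, and the image is assumed to already be pushed):

              # deploy the image as a public web service with an env var
              gcloud run deploy my-service \
                --image us-central1-docker.pkg.dev/MY_PROJECT/containers/my-service \
                --region us-central1 \
                --allow-unauthenticated \
                --set-env-vars APP_ENV=production

              # store an API key in Secret Manager and expose it to the service
              # (the service account also needs the secretAccessor role)
              printf '%s' "not-a-real-key" | gcloud secrets create my-api-key --data-file=-
              gcloud run services update my-service \
                --region us-central1 \
                --set-secrets API_KEY=my-api-key:latest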

            1. 3

              I’m still confused. What do you have to do in AWS that you don’t have to do in GCP or is significantly easier? I have very little experience with GCP and a lot with AWS, so interested to learn more about how GCP compares.

              1. 3

                I have enough experience with both platforms; this is something you have to try yourself, and you will see the difference.

                GCP primitives are different and much better than AWS.

                1. 6

                  “Trust me bro” is a difficult sell; but I am here to back up the sentiment. There are simply too many little things that by themselves are not deal breakers at all - but make the experience much more frustrating.

                  GCP feels like engineers made a cloud platform for themselves with nearly infinite time and budget (and some idiots tried to bolt crud on the side, like Oracle, VMware and various AI stuff).

                  AWS feels like engineers were forced to write adjacently workable services without talking to each other and on an obscenely tight deadline - by people who didn’t really know what they wanted - and then accidentally made something everyone started to use.

                  I’m going to write my own blog post about this; I used to have a list of all my minor frustrations so I could flesh it out.

                  1. 3

                    I’m going to write my own blog post about this; I used to have a list of all my minor frustrations so I could flesh it out.

                    Thanks, I would love to link it.

                    1. 3

                      I’m still working on it - there are some missing points and I want to bring up a major point about cloud being good for prototyping - but here: https://blog.dijit.sh/gcp-the-only-good-cloud/

                      1. 1

                        Thanks for sharing.

              2. 1

                If it helps, I have about the same amount of professional experience with AWS and GCP. GCP is easier to work with, but an unreliable foundation.

            2. 2

              Correct me if I’m wrong, but you can do all of the above with Elastic Beanstalk, yes? Maybe ECS as well?

              The trickiest part would be using AWS Secrets Manager to store/fetch the keys, which has a friendly enough UI through their web console, or the CLI.

              You can definitely do all of this with EKS easily, but that requires k8s knowledge, which is a whole other can of worms.
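
              For the secrets piece specifically, the CLI flow is only a couple of commands (the secret name and value here are made up):

                # store the key once
                aws secretsmanager create-secret \
                  --name my-app/api-key \
                  --secret-string 'not-a-real-key'

                # fetch it at deploy time or from a startup script
                aws secretsmanager get-secret-value \
                  --secret-id my-app/api-key \
                  --query SecretString --output text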

            3. 2

              The thing you are describing is something I should only have needed to learn once for each cloud vendor and put in a script: The developer-time should amortise to basically zero for either platform as long as the cloud vendor doesn’t change anything.

              GCP however, likes to change things, so…

              1. 1

                The developer-time should amortise to basically zero for either platform as long as the cloud vendor doesn’t change anything.

                Yes and no. Some constructs you don’t even have to worry about on GCP, so, it is best for fast deployments. However, if you spend 500+ hours a year doing cloud work, then your point stands.

                1. 2

                  500 hours a year is only about five hours a week.

                  My application has run for over ten years. It paid for my house.

                  You’re damn right my point stands.

                  1. 1

                    Indeed, if you are doing 5 hours a week of cloud infra work, your application is likely a full-time job or equivalent. I do believe you made the right choice with AWS.

                    1. 1

                      your application is likely a full-time job

                      O_o
                      

                      Five hours a week is a full-time job?

                      You’ve got some strange measures friend…

                      1. 2

                        read the post you replied to again, it does not say that.

                      2. 2

                        Five hours a week is a full-time job?

                        No. If you are spending 5 hours a week on infrastructure, then your application would be worth spending 40 hours a week on. Or is infra the only component of your application?

            4. 1

              Can you do an actual comparison of the work? I’d be curious. Setting up a web service from a Docker image takes a few minutes. Env vars are no extra work. Secrets are trivial. A cert would take maybe 10 minutes?

              Altogether, someone new should be able to do this in maybe 30 minutes to an hour. Someone who’s done it before could likely get it done in 10 minutes or less, some of that being downtime while you wait for provisioning.

              1. 0

                I have done it for myself and I’m speaking from experience.

                1. 11

                  You’ve posted a lot in this thread to say, roughly, ‘they’re different, GCP is better, but I can’t tell you why, trust me’. This doesn’t add much to the discussion. It would help readers (especially people like me, who haven’t done this on either platform) if you could slow down a bit and explain the steps in AWS and the steps in GCP and why there’s more friction with the former.

                  1. 2

                    No, please don’t trust me. Here’s an exercise for yourself.

                    Assuming you control a domain domain1.com.

                    Deploy a dockerized web service (pull one from the Internet if you don’t know how to write one). Deploy it on gcp1.domain1.com, and deploy another on aws1.domain1.com. Compare how many steps it takes and how long it takes.

                    Here’s how I deploy it on GCP. I never open-sourced my AWS setup but I am happy to see a faster one.
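
                    Roughly, the GCP half of that exercise boils down to something like this (a sketch, not the linked setup; exact flags and supported regions vary):

                      # deploy the container as a public service
                      gcloud run deploy demo \
                        --image us-central1-docker.pkg.dev/MY_PROJECT/containers/demo \
                        --region us-central1 \
                        --allow-unauthenticated

                      # map it to gcp1.domain1.com, then add the DNS records gcloud prints
                      gcloud run domain-mappings create \
                        --service demo \
                        --domain gcp1.domain1.com \
                        --region us-central1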

                    1. 17

                      As I said, I have not used AWS or GCP so I have no idea how many steps it requires on either platform. I can’t judge whether your command is best practice for GCP and have no idea what the equivalent is on AWS (I did once do this on Azure and the final deploy step looked similar, from what I recall, but there was some setup to create the Azure Container thingy instance, and I can’t tell from your example whether that is not needed on GCP or whether you’ve simply done it already). If I tried to do it on AWS, I’d have no idea if I were doing the most efficient thing or some stupid thing that most novices would manage to avoid.

                      You have, apparently, done it on both. You are in a position to tell me what steps are needed on AWS but not on GCP. Or what abstractions are missing on AWS but are better on GCP. Someone from AWS might even read your message and improve AWS. But at the moment I just see that a thing on GCP is one visible step plus at least zero setup steps, whereas on AWS it is at least one step.

                      You’ve posted ten times so far in this story to say that GCP is better, but not articulated how or why it’s better. As someone with no experience with either, nothing you have said gives me any information to make that comparison. At least one of the following is true:

                      • GCP has more efficient flows than AWS.
                      • You are more familiar with GCP than AWS and so you are comparing an efficient flow with GCP that you’ve learned to use to an inefficient flow with AWS.
                      • You are paid by GCP to market for them (not very likely).

                      It sounds as if you believe the first is true and the second two are not but (again) as someone reading your posts who understands the problem but is not familiar with either of the alternatives in any useful level of detail, I cannot judge for myself from your posts.

                      If you wrote ‘GCP has this set of flows / abstractions that have no equivalent on AWS’ then someone familiar with AWS could say ‘actually, it has this, which is as good’ or ‘you are correct, this is missing and it’s annoying’. But when you write:

                      I have done it for myself and I’m speaking from experience.

                      That doesn’t help anyone reading the thread understand what is better or why.

                      1. 2

                        Given my experience with AWS, my vote is #2.

                        I find AWS to be better organized and documented, but much larger than any Google product I’ve seen. There is more to learn because it’s a large, modular system. And it’s trivial to do easy things the hard way.

                        I don’t have much direct experience with Google’s hosting but if any of their service products are similar, the only advantage is they do less, which means you can skimp on organization and documentation without too many problems.

                    2. [Comment removed by author]

      2. 5

        IMO, it’s hard to recommend AWS as well.

        1. 1

          So what do you recommend then? 🙂

          1. 5

            In terms of big players, I would still recommend GCP, but only because I mostly work with Kubernetes and it’s best there. Among the smaller players, Fly.io actually works well for me.

            1. 1

              I mostly work with Kubernetes and it’s best there.

              Why is it “best” there? I use EKS on aws and have had no issues with it…?

              1. 1

                in Kubernetes territory, Google is the elder. Also cleaner and more intuitive UI.

                1. 1

                  in Kubernetes territory, Google is the elder.

                  not shocking. aren’t they the original contributors?

                  Also cleaner and more intuitive UI.

                  I don’t touch EKS’s ui much/ever so I honestly don’t really care about that. Usually use aws via terraform/pulumi.

    7. 9

      I’ve used it in various high-stakes professional full-time jobs for going on 6 years and it has been a constant source of frustration that I haven’t seen with AWS.

      I generally agree that some of the general architecture is nice, but it simultaneously feels like they needlessly reinvent things, keep features gated behind alpha/production labels forever, and the support is downright terrible. On the support, you tend to pay a LOT of money for this, and the support organization has mostly been near-shored and is unfortunately comically bad.

      Relatedly, they have some really bright sales/solutions engineers, but this group often seems at odds with the product teams and it can feel messy being in the middle. It often feels like I work AT Google when working with GCP, dealing with all the drama and unwanted sociology of the company, which in effect means they are terrible at B2B.

    8. 5

      I am wondering what prevents Google Cloud from automatically migrating all images from GCR to Artifact Registry. We also spent some significant time on it, but it was mostly routine.

      1. 5

        Exactly. They could have just changed the underlying infrastructure. However, I believe they didn’t want to keep offering GCR’s old pricing.

    9. 5

      Google pulling this kind of crap has been known for years now. How come people still build stuff on top of Google’s cloud? “Because it’s kind of nice”?

      1. 2

        GCP is not “kind of nice”. It is genuinely better than AWS; for example, the engineering maintenance and ongoing costs are 10X lower in most cases due to better primitives.

        1. 4

          Do you have some examples? All the Google stuff I’ve worked with so far is mediocre at best.

          1. 2

            Google Cloud Run is one of the best Docker-based deployment systems in the market. It is far superior to AWS Fargate and Azure Containers.

            1. 4

              In what way is it superior? From the docs, I can’t tell what’s so great about it (except that it has a big free tier). It looks pretty similar to, say, Heroku back in the day perhaps with more annoying web UI navigation required.

              1. 2

                Heroku back in the day perhaps with more annoying web UI navigation required.

                Try deploying a web service on Fargate and on Cloud Run and you will see the difference. The fact that you are stuck on the UI when there is an excellent CLI tells me that you are taking a superficial view here.

                1. 2

                  Have you seen this? https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html

                    You do not need a UI, and using the eb CLI is trivial.
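
                    For reference, the eb flow in question is roughly this (app and environment names are placeholders):

                      # one-time setup in the project directory (Docker platform)
                      eb init -p docker my-app

                      # create an environment, set config, and deploy the current code
                      eb create my-app-env
                      eb setenv APP_ENV=production
                      eb deploy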

                  1. 1

                      See step 2 here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-configuration.html In GCP, you do gcloud auth login and it takes you to the web browser for SSO. There are several paper cuts like this with AWS. Again, once you are used to them, they don’t matter. But the GCP setup is much smoother and simpler for a neutral observer.
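
                      Concretely, the difference being described looks roughly like this (profile and project names are placeholders):

                        # GCP: browser-based SSO, then pick a project
                        gcloud auth login
                        gcloud config set project my-project

                        # AWS: either paste long-lived keys...
                        aws configure          # prompts for access key ID, secret key, region, output
                        # ...or set up IAM Identity Center / SSO, which is more steps
                        aws configure sso
                        aws sso login --profile my-profile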

                  2. 0

                    Yes the CLI for GCP is still better. It requires less work.

    10. 4

      That seems like a rather slim argument to support the title. GCP’s main attraction for $WORK is: we get a lot more compared to what we would get from AWS, for half the price. We’ve been with them for 10 years and I’m pretty sure that the eventual GCR shutdown is the first deprecation within GCP that’s had any effect on us at all, and while it’s annoying, it’s far from the most annoying thing I’ve dealt with this week. We just created some repos on GAR, changed URLs in relevant places, and updated a couple scripts to add the new domains to the list of things that get Google’s docker credential helper.
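
      For anyone doing the same migration, the credential-helper part is a one-liner per registry domain (the region below is a placeholder):

        # teach docker to authenticate against the Artifact Registry domain
        gcloud auth configure-docker us-central1-docker.pkg.dev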

      The higher pricing is pretty predictable from their process over the past year of restructuring storage pricing to take into account the cost of cross-region transfers. It’s not actually 10x as expensive. Looks like slightly less than 4x. But either way container images are a minuscule fraction of our cloud expense. And GAR does actually have useful new features (like remote repositories, IAM that’s actually for GAR instead of being tied to GCS, and the new resource types that mean not having to deal with Artifactory anymore).

      1. 1

        we get a lot more compared to what we would get from AWS, for half the price.

        That I can agree for sure. I just wish I knew that I won’t have to migrate projects over time. Work you not impacted at all by Google Domains changes?

        And GAR does actually have useful new features

        I have no problem with GAR. My objection is with GCP killing GCR.

    11. 3

      GCP is an amazing product ruined by terrible management and policy.

      Just look at how many startups they’ve maimed by randomly dropping quotas.

      They’re like the Stripe of cloud providers (randomly cutting off customers without explanation).

    12. 2

      I like that Google Cloud is easy to use, and has some nice modern features like OIDC authentication that are just newer than AWS’ method. Some very common things on AWS require writing custom Lambda functions which is error-prone and annoying, and GCP has that stuff built in. The Kubernetes support is top-notch, and the observability tooling is really good. It’s also substantially cheaper than AWS for most things.

      On the other hand, Google Cloud is REALLY rough around the edges. For example Cloud SQL will lose connections during its maintenance window. On AWS, VPC’s are a first-class networking concept – on Google they support the bare minimum of functionality. When you get into more esoteric stuff like key management, Google has almost no support or documentation, but AWS has a rich set of functionality.

      If you’re doing a startup, Google Cloud is a decent choice because it’s cheaper and you can work around the issues as you build. But for an established company I’d almost always recommend AWS.

    13. 2

      If one is fairly certain that one likes uncertainty, then using Google is a very good option. However, if one desires to have one’s website around longer than 18 months, then one should probably run away from Google.

      1. 1

        The entire online subsystem for The Division and The Division 2 is running on GCP and was launched circa 2016, with no major rewrites since launch.

        fwiw.

    14. 2

      I can’t wait to move my Google Workspace (which I use for personal email) away from Google. $7 a month for an email alias is kind of expensive for me. Can anybody suggest an email hosting platform? I don’t want to DIY and host the email server myself.

      1. 2

        Can anybody suggest an email hosting platform?

        We have discussed this several times including here and here.

      2. 2

        Can anybody suggest an email hosting platform?

        https://purelymail.com/

        Their web site makes it sound like the best thing about it is that it’s cheap, and it is cheap, but IMO the best thing about it is that it does exactly what I want with no fuss.

      3. 1

        I’ve used migadu.com for many happy years now. It’s reliable and cheap. I’m sure there are plenty of other good ones too.

    15. 2

      Seeing this post and the comments made me start to question my choice of using a cheap VPS and managing everything myself… I went to ChatGPT o1-preview asking “fly.io vs vultr”, then I googled “aws vs digitalocean” and skimmed through a bunch of Hacker News posts. Ultimately, I was unsatisfied with what I saw, so I appended “site:lobste.rs” to my Google query and found this lobste.rs post (archived article), which made me stop questioning my choice. Sure, I have to manually migrate Postgres, manually update FreeBSD, and set up load balancers myself, which is kind of a pain. But my SSL cert upgrade method is a 10-line bash script in a cron job instead of this monstrosity.
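
      For anyone curious, that kind of cron job can be as simple as this (assuming certbot and nginx; details vary by setup):

        #!/bin/sh
        # renew any certs that are close to expiry; certbot is a no-op otherwise
        certbot renew --quiet --deploy-hook "service nginx reload"
        # crontab entry: 17 3,15 * * * /usr/local/bin/renew-certs.sh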

      1. 1

        But my SSL cert upgrade method is a 10-line bash script in a cron job instead of this monstrosity

        Why use GKE? Google Cloud Run has automated this away. I am always amused by these two extreme viewpoints, that “Cloud = lambda functions” or “Cloud = Kubernetes”. Managed Docker (AWS Fargate, Google Cloud Run, Azure App Service) is the right balance between portability and engineering effort for most side projects.

        1. 1

          Thanks for the info. This is indeed a compelling use case for big cloud. Personally I use QEMU instead of Docker, which is also very portable and low-effort (probably a bit more portable and a bit more effort than Docker), so maybe it’s in my best interest to target Docker as well. The main thing that gives me pause is the need to use S3/GCS, which 1. adds complexity to the codebase, 2. is not portable, and 3. costs $20/TB/mo for the standard tier, which is 4x as much as buyvm’s offering. But maybe the tradeoffs are worth it.

          1. 1

            Personally I use QEMU instead of Docker, which is also very portable and low-effort (probably a bit more portable and a bit more effort than Docker)

            I can’t recommend Docker enough. It is the best packaging system that has come around in the past decade; here’s a basic tutorial. The best thing about Docker is that it forces you to separate code, persistent data, cache, secrets, and environment into separate chunks. So: a bit more effort upfront, but a much more maintainable project over the long run.
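
            As an illustration of that separation, a typical run ends up looking something like this (paths and names are placeholders):

              # code is baked into the image; everything else is injected at run time:
              # persistent data and cache as volumes, environment from an env file,
              # secrets mounted read-only
              docker run -d --name my-app \
                -v /srv/my-app/data:/var/lib/my-app \
                -v /srv/my-app/cache:/var/cache/my-app \
                --env-file /srv/my-app/production.env \
                -v /srv/my-app/secrets:/run/secrets:ro \
                -p 8080:8080 \
                my-app:latest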

            1. 1

              Yeah, I am very familiar with Docker; I just prefer QEMU. The reason I am compelled by Docker now is that it’s a first-class citizen on the aforementioned cloud providers.

    16. 2

      At my current role, before I arrived they had just finished migrating from AWS to GCP, and I’m not really sure why.

    17. 0

      superior product

      I think that we have different definitions of superior product.

      1. 0

        Mine is superior developer experience and thoughtful primitives (e.g., logging into the gcloud CLI is a breeze; logging into the AWS CLI is a PITA). What’s yours?