> There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble. Thus, effectively, OpenAI is to this decade’s generative-AI revolution what Netscape was to the 1990s’ internet revolution. The revolution is real, but it’s ultimately going to be a commodity technology layer, not the foundation of a defensible proprietary moat. In 1995 investors mistakenly thought investing in Netscape was a way to bet on the future of the open internet and the World Wide Web in particular.
OpenAI has a short-ish window of opportunity to figure out how to build a moat.
"Trying to spend more" is not a moat, because the largest US and Chinese tech companies can always outspend OpenAI.
The clock is ticking.
bruce511 2 days ago [-]
There is no technical moat, but that doesn't mean there isn't a moat.
Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.
There's no technical moat around online sales. And lots of companies sell online. But Amazon is still the biggest (by a long long way) (at least in the US). Their "moat" is in public minds here.
Google is a similar story. As is Facebook. Yes, the details change, but the basic path is well trodden. Uber? Well, the jury's still out there.
Will OpenAI be the next Amazon? Will it be the next IBM? We don't know, but people are pouring billions in to find out.
saurik 2 days ago [-]
A couple other comments touch on the point I want to make, but I feel they don't nail it hard enough: if today you told me you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money, you'd still have no products... as a user I'd have no reason to go there, but then without users there is no reason to sell there, so you have a circular bootstrapping issue. That is a moat.
This is very different from OpenAI: if you show me a product that works just as well as ChatGPT but costs less--or which costs the same but works a bit better--I would use it immediately. Hell: people on this website routinely talk about using multiple such services and debate which one is better for various purposes. They kind of want to try to make a moat out of their tools feature, but that is off the path of how most users use the product, and so isn't a useful defense yet.
cm2187 2 days ago [-]
Amazon as in AWS is possibly a better analogy. What AWS sells is mostly commodity. Their moat is the cost of integration. Once a company has developed all its system around AWS, they are locked in, just because of the integration cost of switching. OpenAI only sells a single component, so the lock in is weaker. But once you have a business a company depends on which has been tested and relies on OpenAI, I can see them thinking twice about switching. Right now, I doubt we are near that stage, so my vote is on no moat yet.
dartos 2 days ago [-]
Many cloud-native companies have realized this and are actually moving off of public clouds.
It’s slow and painful, but the expense is driving some customers away.
esafak 1 days ago [-]
I think any decent company would be using LLMs through an interface, so they can swap out providers, same as any API.
cm2187 1 days ago [-]
That's fair, but different LLMs behave differently, so you would have to redo your testing from scratch if you were to swap the model. I think that would be the primary problem.
esafak 1 days ago [-]
Testing for LLMs is an evolving practice, but you need to have tests even if you stick with one provider; otherwise you won't be able to swap out models safely within that provider either.
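A minimal sketch of the interface idea discussed above (names and shapes here are illustrative, not any real SDK's API): code against one small surface, wrap each provider behind it, and keep a fixed eval set so any swap can be re-validated.

```python
from dataclasses import dataclass
from typing import Protocol


class LLMClient(Protocol):
    """The one surface the application codes against; each provider SDK gets wrapped to fit it."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class CannedClient:
    """Stand-in for a real provider wrapper (OpenAI, Anthropic, ...) for offline testing."""

    answers: dict

    def complete(self, prompt: str) -> str:
        return self.answers.get(prompt, "")


def run_eval(client: LLMClient, cases: dict) -> float:
    """Score a client against a fixed eval set; rerun whenever you swap provider or model."""
    hits = sum(client.complete(q) == expected for q, expected in cases.items())
    return hits / len(cases)


# Swapping providers is then a one-line change at the construction site.
cases = {"2+2=": "4", "Capital of France?": "Paris"}
provider_a = CannedClient({"2+2=": "4", "Capital of France?": "Paris"})
provider_b = CannedClient({"2+2=": "4"})
```

The eval harness, not the interface, is what makes the swap safe: different models behave differently behind the same signature, which is exactly the retesting problem raised above.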
sdesol 2 days ago [-]
> if today you told me you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money, you'd still have no products
It is widely understood that you really can't compete with "as good as". People won't leave Google, Facebook, etc. if you can only provide a service as good as, because the effort required to move would not be worth it.
> if you show me a product that works just as well as ChatGPT but costs less--or which costs the same but works a bit better--I would use it
This is why I believe LLMs will become a commodity product. The friction to leave one service is nowhere near as great as leaving a social network. LLMs in their current state are very much interchangeable. OpenAI will need a technological breakthrough in reliability and/or cost to get people to realize that leaving OpenAI makes no sense.
lupusreal 2 days ago [-]
> It is widely understood that you really can't compete with "as good as".
Sure you can, that's why there are hundreds if not thousands of brands of gas station. The companies you list are unusual exceptions, not the way things usually work.
terminalgravity 2 days ago [-]
I'm not sure a gas station analogy really works. I use a gas station out of convenience (i.e. it's on my route) and will only go out of my way for a significant difference in price. This means I go to the same gas stations even when there are others that are “as good as” around, just because it's convenient for me. Similarly, if I already set up an account with Amazon and currently use Amazon, I won't move to an “as good as” competitor, because it's an inconvenience to set up a new account, add billing info, add my address, etc… for no real improvement.
bawolff 2 days ago [-]
Amazon (the retail business not AWS) does seem to have a pretty big moat.
For starters they have delivery down. In major cities you can get stuff delivered in hours. That is crazy and hard to replicate.
They have a huge inventory/marketplace. Basically any product is available. That is very difficult to replicate.
dartos 2 days ago [-]
They have their own fleet of planes for shipping.
Amazon's vertical integration is their moat.
bruce511 2 days ago [-]
I feel like you (and others) are saying what I'm saying.
I said:
>> There is no technical moat, but that doesn't mean there isn't a moat.
Meaning that just because the moat is not technical doesn't mean it doesn't exist.
Clearly Amazon, Google, Facebook etc have moats, but they are not "better software". They found other things to act as the moat (distribution, branding, network effects).
OpenAI will need to find a different moat than just software. And I agree with all the people in this part of the thread driving that point home.
swiftcoder 2 days ago [-]
Moats don't have to be software. Amazon's physical distribution chain is absolutely a moat - trying to replicate their efficiency at marshalling physical items from A to B is a daunting problem for new entrants in the online retail game.
pixelsort 2 days ago [-]
They have been monitoring their GPT Store for emergent killer applications with a user base worth targeting. Zuckerberg's playbook. Nothing yet, because they've been too short-sighted to implement custom tokens and unbounded UIs.
svnt 2 days ago [-]
Amazon functions as a marketplace dynamic, which is defensible if done right, as they have shown.
OpenAI right now is some novel combination of a worker bee and a queryable encyclopedia. If they are trying to make a marketplace argument for this, it would be based on their data sources, which may have a similar sort of first-mover advantage as a marketplace as those sources get closed off and become more expensive (see e.g. the Reddit and Twitter API changes), except that much of that data ages out, in a way that sellers and buyers in a marketplace do not.
The other big difference with a marketplace is constrained attention on the sell/fulfillment side. Data brokers do not have this constraint — data is easier to self-distribute and infinitely replicable.
htrp 2 days ago [-]
>you had a website that was 100% technologically identical to Amazon, but you were willing to make slightly less money
You've basically described Temu
serjester 2 days ago [-]
Look at Android vs Apple phones in the US. HN tends to really underestimate the impact of branding and product experience. It’s hard to argue anyone’s close to OpenAI on that front (currently).
Not to mention they’ve created a pretty formidable enterprise sales function. That’s the real money maker long term and outside of Google, it’s hard to imagine any of the current players outcompeting OpenAI.
bsimpson 2 days ago [-]
Dropbox and Slack are examples of another possible outcome: capture the early adopters and stay a player in the space, but still have your lunch eaten by big tech suites that ship similar products.
tw1984 2 days ago [-]
this is probably how OpenAI is going to end up. there is no clear tech barrier that can't be crossed by competitors. openai came up with all sorts of cool stuff, but almost all of it has seen a peer competitor in just months.
the recent release of deepseek v3 is a good example: an o1-level model trained for under 6 million USD, it pretty much beat openai by a large margin.
stavros 2 days ago [-]
Where are you getting that Deepseek is at the level of o1? In my experience, it's not even as good as Claude.
esafak 1 days ago [-]
Even v3?
edit: Fair enough. I'm fishing for opinions too.
stavros 1 days ago [-]
My testing has been very limited, so I don't want to opine too much. If anyone has a differing opinion based on more testing, please share, I'm interested!
tomjuggler 1 days ago [-]
So I've been using Deepseek for 3 months with the Aider coding assistant. Look up the "Aider LLM leaderboard" for proper test results if you like; in my experience, Deepseek V3 is just as good as Claude at less than 1/10th the price. I can't speak to o1; it's just too expensive to be worth it.
OpenAI is going to be beaten on price, wait and see.
stavros 1 days ago [-]
Did Aider integrate v3 in its benchmarks? I checked yesterday and I didn't see it...
EDIT: Oh it did, wow, and it's better than Claude! Fantastic, this is great news, thank you!
sirspacey 15 hours ago [-]
That’s how it is today, but that was also the case at the birth of e-commerce. Amazon was a large eCom store but many others were successful and switching for price was common. Not so much anymore.
Defaults are powerful over time.
OJFord 2 days ago [-]
I agree it's a bit weaker, but you're still paying $20 + tax for ChatGPT on a monthly subscription (or more). You could switch next month, you might regret it and switch back the month after. You might anticipate that faff and not switch to begin with.
(Sure you might say I'll subscribe to both, $20, $40, it's no big deal - but the masses won't, people already agonise over and share (I do too!) video streaming services which are typically cheaper.)
billylb42 2 days ago [-]
Amazon retail is a marketplace with marketplace dynamics, which is what you are describing. They are connecting buyers and sellers. OpenAI is a SaaS company; you can't compare them.
More interesting to your thread is how Craigslist supplanted print classifieds, and was then challenged if not supplanted by Facebook Marketplace. Both incumbents had significantly better marketplace dynamics prior to being overtaken.
nothercastle 1 days ago [-]
If Walmart made a slightly better website you wouldn’t shop there? Cause that’s really all that’s holding me back
ascorbic 2 days ago [-]
Why would they have no products? While Amazon Marketplace is important, they'd still have the greatest selection of products if they only had first-party sales. It's a bad analogy. eBay is a better example, as purely a marketplace.
mnky9800n 2 days ago [-]
Perplexity costs less and lets you use more models.
guerrilla 2 days ago [-]
Did we forget it's called the network effect?
netdevphoenix 2 days ago [-]
Missing the point though. Amazon isn't Amazon because of its tech; speed, reliability, etc. don't matter as much as: inventory (tons of things you can only find there), delivery speed (you can reliably get 99% of things in a week or less, and some items get delivered in HOURS), and customer service (you are right by default; you will be refunded and get free delivery if you encounter issues). That's the ultimate killer. If a competitor managed to do this, a little marketing to get installed on the customer base's phones would definitely eat Amazon's lunch. Temu has shown big strides, but its ethical problems will prevent it from becoming a true threat. A local Temu-like competitor would be a formidable adversary.
htrp 2 days ago [-]
> Temu has shown big strides but its ethical problems will prevent them from becoming a true threat.
Does the average Temu user care about the company's ethical problems? Does the average Amazon user?
cyanydeez 1 days ago [-]
You spelled monopoly weirdly.
jsjohnst 1 days ago [-]
> Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year.
Apparently old enough to forget the details. I highly recommend refreshing your memory on the topic so you don’t sound so foolish.
1. Amazon had a very minimal amount of VC funding (less than $100M, pretty sure less than $10M)
2. They IPO’d in 1997 and that still brought in less than $100M to them.
3. They intentionally kept the company at near profitability, instead deciding to invest it into growth for the first 4-5yrs as a public company. It’s not that they were literally burning money like Uber.
4. As further proof they could’ve been profitable sooner if they wanted, they intentionally showed a profit in ~2001 following the dotcom crash.
Edit: seems the only VC investment pre-IPO was KP’s $8M. Combine that with the seed round they raised from individual investors and that comes in under $10M like I remembered.
morgante 2 days ago [-]
Amazon has scale economics, branding power, and process power. It would take years to fully rebuild Amazon from scratch even given unlimited money.
Right now, OpenAI's brand is actually probably its strongest "moat" and that is probably only there because Google fumbled Bard so badly.
genman 2 days ago [-]
This shows that it's not that easy to get the AI right even with the sizable funding available. OpenAI's moat is its actual capacity to provide and develop better solutions.
ripped_britches 2 days ago [-]
This is an odd thing to say.
Facebook has an enormous network effect.
Google is the lynchpin of brokering digital ads to the point of being a monopoly.
Someone else mentioned Amazon's massive distribution network.
chii 2 days ago [-]
> Their "moat" is in public minds here.
no, the amazon moat is scale and efficiency, which lead to network effects. The chinese competitors are reaching similar scales, so the moat isn't insurmountable - just not for the average mom-and-pop business.
dewitt 2 days ago [-]
> “Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.”
Not to take away from the rest of your points, but I thought Amazon only raised $8m in 1995 before their IPO in 1997. Very little venture capital by today's standards.
sangnoir 2 days ago [-]
That caught my eye too, because VCs never poured "billions" into any one company pre-dot-com bust. Unicorns were notable for being valued at $1B or more, but never raised that much in funding, and this only happened much later, a decade and a half after Amazon's IPO - Amazon itself was never a unicorn; its post-IPO capitalization was $300M.
Amazon is famous for making losses after being publicly listed. Also, it was remiss of grandparent to not note that Amazon's losses were intentional for the sake of growth. OpenAI has no such excuse: their losses are just so that they stay in the game; if they attempted to turn profitable today, they'd be insolvent within months.
csomar 2 days ago [-]
> There is no technical moat, but that doesn't mean there isn't a moat.
The moat is actually huge (billions of $$). What is happening is that there are people/corps/governments that are willing to burn this kind of money on compute and then give you the open weight model free of charge (or maybe with very permissive and lax licensing terms).
If it wasn't for that, there would be roughly three players in the market (OpenAI, Anthropic, and recently Google).
robertlagrant 1 days ago [-]
Yeah, that's my thought as well.
chubot 2 days ago [-]
Amazon does have a technical moat - fast shipping and warehouse operations. It’s not something that Walmart or Target can replicate easily.
Retail margins are razor thin, so the pennies of efficiency add up to a moat
lotsofpulp 2 days ago [-]
Walmart and Target have tons of warehouse expertise. Amazon’s moat is their software, where they have a far longer head start and they can afford to pay their employees due to AWS and other services’ high margins, as well as executing well on growth resulting in being able to compensate with equity.
Walmart and Target were far better than Amazon at logistics at the beginning, but they couldn’t execute (or didn’t focus on) software development as well as Amazon.
Meanwhile, Amazon figured out how to execute logistics just as well as or better than the incumbents, giving them the edge to overtake them.
chubot 2 days ago [-]
Walmart and Target already had warehouses, but the end-to-end system you need for 2-day, 1-day, same day shipping, which includes warehouses/trucks/drivers, is not something they had
They were built for people to come to their store, and the website was a second class citizen for a long time.
But that was Amazon's bread and butter. They built that fast-shipping moat as a pretty established company, and the big retailers were caught off guard
scarface_74 2 days ago [-]
Walmart and Target can get truckloads of items to a relatively few stores. Amazon can get items to people’s houses
lotsofpulp 2 days ago [-]
Right, which Amazon originally accomplished with huge assistance from expensive contracts with UPS, but they used software (and new hardware) and mobile internet technology to reduce their delivery costs, figuring out how to incentivize cheaper independent contractors to deliver their packages rather than expensive UPS union employees.
scarface_74 2 days ago [-]
They also built warehouses closer to the customers and bought Kiva to automate their warehouses.
sausagefeet 2 days ago [-]
Walmart has been a tech company since the 80s. The way Walmart got so big is that they created a literal network between stores so they could do logistics at scale. They had the largest database at the time.
Walmart is still the largest company in the world by revenue, with Amazon at its heels, with Amazon's profit beating out Walmart's.
A lot of this thread, I think, is just fantasy land that Amazon is somehow:
1. Destroying Walmart and Target in a way they can't compete.
2. Is more tech savvy than Walmart and Target.
C'mon, read the history of Walmart, it's who put technology into retail.
antics 2 days ago [-]
Just to emphasize this point: Walmart was to the IT revolution what Amazon is to the Internet revolution. They were among the first in the sector to move beyond paper and industrialize IT for operations, which allowed them to scale way faster and way more efficiently than their competition. Walmart's most powerful executives were IT executives, and many of them went on to have very decorated careers, e.g., Kevin Turner was the CIO immediately prior to becoming COO at Microsoft.
Walmart is not an Internet company. It is definitely a tech company. It's just that its tech is no longer super cool.
lotsofpulp 2 days ago [-]
Good point that Walmart got its edge against incumbents by incorporating advanced technology into their operations.
But for whatever reason, they took their foot off the pedal, and allowed Amazon to use the next step (networking technology and internet) to gain an edge over the now incumbent Walmart.
> 2. Is more tech savvy than Walmart and Target.
The market (via market cap) clearly thinks Amazon has lots more potential than its competitors, and I assume it is because investors think Amazon will be more successful using technology to advance.
raincole 2 days ago [-]
Facebook and ChatGPT are the opposite kinds of products.
mring33621 2 days ago [-]
Amazon's moat-ish stickiness:
1) Almost the best price, almost all the time
2) Reliably fast delivery
3) Reliably easy returns
4) Prime memberships
mring33621 2 days ago [-]
To follow on, I think OpenAI should split into 2:
1) B2B: Reliable, enterprise level AI-AAS
2) B2C: New social network where people and AIs can have fun together
OpenAI has a good brand name RN. Maybe pursuing further breakthroughs isn't in the cards, but they could still be huge with the start that they have.
benterix 2 days ago [-]
It is not a good analogy because services like Facebook, Amazon, Instagram depend on a critical mass of users (sellers, content sharers, creators, etc.). With SaaS this is not a crucial component as the service is generated by software, not other users.
conartist6 2 days ago [-]
Amazon built a bit of a moat too: for example I happen to know that they own a patent on the photography setup that allows them to capture consistent product photos at scale.
huijzer 2 days ago [-]
> Their "moat" is in public minds here.
Same for OpenAI. Anytime I talk to young people who are not programmers, they know about ChatGPT and not much else. They've never heard of Llama, nor do they know what an LLM is.
umeshunni 2 days ago [-]
You could have said the same about Netscape in 1997. No one knew what IE was (and if they did, thought it was inferior), and certainly no one knew what Chrome was (it didn't exist). Yet the browser market eventually didn't matter, got commoditized, and what mattered were the applications built on top of it.
OJFord 2 days ago [-]
Surely by that analogy though ChatGPT is AskJeeves or something, not a browser; GPT3/4/o1/o3 or whatever is the browser.
gloryjulio 2 days ago [-]
> There's no technical moat around online sales. And lots of companies sell online. But Amazon is still the biggest (by a long long way) (at least in the US). Their "moat" is in public minds here.
I don't think that's true. I think it's actually the opposite. Global physical logistics is way harder to scale than software. That's Amazon's moat.
scarface_74 2 days ago [-]
Amazon spent those billions on warehouses, servers, its own delivery network, etc.
Anyone can sell online. But not just anyone has advantages like same-day and next-day shipping.
DoctorOetker 2 days ago [-]
Where is the actual full statement and call for funds by the board?
noch 2 days ago [-]
> There is no technical moat, but that doesn't mean there isn't a moat.
Gruber writes:
"
My take on OpenAI is that both of the following are true:
OpenAI currently offers, by far, the best product experience of any AI chatbot assistant.
There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble.
"
It's amusing to me that he seems to think that OpenAI (or xAI or DeepSeek or DeepMind) is in the business of building "chatbots".
The prize is the ability to manufacture intelligence.
How much risk investors are willing to undertake for this prize is evident from their investments, after all, these investors all lived through prior business cycles and bubbles and have the institutional knowledge to know what they're getting into, financially.
How much would you invest for a given probability that the company you invest in will be able to manufacture intelligence at scale in 10 years?
swiftcoder 2 days ago [-]
> How much would you invest for a given probability that the company you invest in will be able to manufacture intelligence at scale in 10 years?
What's the expected return on investment for "intelligence"? This is extremely hard to quantify, and if you listen to the AI-doomer folks, potentially an extremely negative return.
noch 2 days ago [-]
> What's the expected return on investment for "intelligence"? This is extremely hard to quantify […] if you listen to the AI-doomer folks, potentially an extremely negative return.
Indeed. And that asymmetry is what makes a market: people who can more accurately quantify the value or risk of stuff are the ones who win.
If it were easy then we'd all invest in the nearest AI startup or short the entire market and 100x our net worth essentially overnight.
ben_w 2 days ago [-]
That logic applies to AI-cynics rather than AI-doomers — the latter are metaphorically the equivalent of warning about CO2-induced warming causing loss of ice caps and consequent sea level rises and the loss of tens of trillions of dollars of real estate as coastal cities are destroyed… in 1896*, when it was already possible to predict, but we were a long way from both the danger and the zeitgeist to care.
But only metaphorically the equivalent, as the maximum downside is much worse than that.
> But only metaphorically the equivalent, as the maximum downside is much worse than that.
Maybe I'm a glass-half-full sort of guy, but everyone dying because we failed to reverse man-made climate change doesn't seem strictly better than everyone dying due to rogue AI
mrbungie 2 days ago [-]
Everyone dying from a rogue AI would be stupid and embarrassing: we used resources that would've been better spent fighting climate change, and ended up being killed by a hallucinating paperclip maximizer that came from said resources.
Stupid squared: we die because we gave the AI the order to reverse climate change xD.
ben_w 2 days ago [-]
Given the assumption that climate change would kill literally everyone, I would agree.
But also: I think it extremely unlikely for climate change to do that, even if extreme enough to lead to socioeconomic collapse and a maximal nuclear war.
Also also, I think there are plenty of "not literally everyone" risks from AI that will prevent us from getting to the "really literally everyone" scenarios.
So I kinda agree with you anyway — the doomers thinking I'm unreasonably optimistic, e/acc types think I'm unreasonably pessimistic.
noch 2 days ago [-]
> That logic applies for AI-cynics rather than AI-doomers
Fwiw, I don't believe that there are any AI doomers. I've hung out in their forums for several years and watched all their lectures and debates and bookmarked all their arguments with strangers on X and read all their articles and …
They talk of bombing datacentres, and how their children are in extreme danger within a decade or how in 2 decades, the entire earth and everything on it will have been consumed for material or, best case, in 2000 years, the entire observable universe will have been consumed for energy.
The doomers have also been funded to the tune of half a billion dollars and counting.
If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive war chest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who was in the group of people coming to murder your child in a few years?
But the true doomer would have to be the ultimate nihilist, and he would simply take himself off the map because there's no point in living.
ben_w 2 days ago [-]
> or, best case, in 2000 years, the entire observable universe will have been consumed for energy
You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
> The doomers have also been funded to the tune of half a billion dollars and counting.
> If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
The political capital to ban it worldwide and enforce the ban globally with airstrikes — what Yudkowsky talked about was "bombing" in the sense of a B2, not Ted Kaczynski — is incompatible with direct action of that kind.
And that's even if such direct action worked. They're familiar with the luddites breaking looms, and look how well that worked at stopping the industrialisation of that field. Or the communist revolutions, promising a great future, actually taking over a few governments, but it didn't actually deliver the promised utopia. Even more recently, I've not heard even one person suggest that the American healthcare system might actually change as a result of that CEO getting shot recently.
But also, you have a bad sense of scale to think that "half a billion dollars" would be enough for direct attacks. Police forces get to arrest people for relatively little because "you and whose army" has an obvious answer. The 9/11 attacks may have killed a lot of people on the cheap, but most were physically in the same location, not distributed between several in different countries: USA (obviously), Switzerland (including OpenAI, Google), UK (Google, Apple, I think Stability AI), Canada (Stability AI, from their jobs page), China (including Alibaba and at least 43 others), and who knows where all the remote workers are.
Doing what you hypothesise about would require a huge, global, conspiracy — not only exceeding what Al Qaida was capable of, but significantly in excess of what's available to either the Russian or Ukrainian governments in their current war.
Also:
> After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
You presume they know. They don't, and they can't, because some of the people who will soon begin working on AI have not yet even finished their degrees.
If you take Altman's timeline of "thousands of days", plural, then some will not yet have even gotten as far as deciding which degree to study.
noch 2 days ago [-]
I somehow accidentally made you think that I was trying to have a debate about doomers, but I wasn't, which is why I prefixed it with "fwiw" (meaning for-what-it's-worth; I'm a random on the internet, so my words aren't worth anything, certainly not worth debating at length). Sorry if I misrepresented my position. To be clear, I have no intense intellectual or emotional investment in doomer ideas nor in criticism of doomer ideas.
Anyway,
> You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
Here's what Arthur Breitman wrote[^0] so you can take it up with him, not me:
"
1) [Energy] on planet is more valuable because more immediately accessible.
2) Humans can build AI that can use energy off-planet so, by extension, we are potential consumers of those resources.
3) The total power of all the stars of the observable universe is about 2 × 10^49 W. We consume about 2 × 10^13 W (excluding all biomass solar consumption!). If consumption increases by just 4% a year, there's room for only about 2000 years of growth.
"
About funding:
>> The doomers have also been funded to the tune of half a billion dollars and counting.
> I've never heard such a claim. LessWrong.com has funding more like a few million
"
A young nonprofit [The Future of Life Institute] pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations.
"
Will it bring untold wealth to its masters, or will it slip its leash and seek its own agenda?
Once you have an AI that can actually write code, what will it be able to do with its own source? How much better would OpenAI be with a superintelligence looking for efficiencies and improvements?
What will the superintelligence (and/or its masters) do to build that moat and secure its position?
n_ary 2 days ago [-]
> How much risk investors are willing to undertake for this prize is evident from their investments, after all, these investors all lived through prior business cycles and bubbles and have the institutional knowledge to know what they're getting into, financially.
There are not that many unicorns these days, so anyone who missed out on the last unicorn decade is now in immense FOMO and willing to bet big. Besides, AGI is considered (own opinion) a personal Skynet (the wet dream of every nation's military) that will do your bidding, hence everyone wants a piece of that pie. Also, when the bigCos (M$/Google/Meta) are willing to bet on it, the topic becomes much more interesting and gets an invisible seal of approval from technically savvy corps; the previous scammy cryptocurrency gold rush had no bigCo participation (to the best of my knowledge), but GenAI is a full game with all of them in.
lodovic 2 days ago [-]
Part of the risk is the possibility that a few key employees find a much more profitable business model and leave OpenAI, while the early investors are left holding the bag. This seems to be a recurring theme in the tech world.
noch 2 days ago [-]
> Part of the risk is the possibility that a few key employees find a much more profitable business model and leave OpenAI,
The fact that you can state this risk means that market participants already know this risk and account for it in their investment model. Usually employees are given stock options (or something similar to a vesting instrument) to align them with the company, that is, they lose[^1] significant wealth if they leave the company. In the case of OpenAI: "PPUs vest evenly over 4 years (25% per year). Unlike stock options, employees do not need to purchase PPUs […] PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years."[^0]
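As a toy illustration of why that structure discourages leaving, here is a sketch of the quoted schedule (annual 25% cliffs plus the 2-year sale lock; the function names and the simplification to whole-year cliffs are mine, not OpenAI's actual terms):

```python
def vested_fraction(years_at_company: float) -> float:
    """Fraction of a 4-year, 25%-per-year PPU grant that has vested.

    Toy model of the schedule quoted above: vesting hits in 25%
    increments at each anniversary, fully vested after year 4.
    """
    full_years = int(years_at_company)
    return min(full_years, 4) * 0.25

def sellable(years_at_company: float, liquidation_event: bool) -> float:
    """Fraction actually sellable: vested units, gated by the 2-year lock
    on sales and by the need for a liquidation event (PPUs aren't public stock)."""
    if not liquidation_event or years_at_company < 2:
        return 0.0
    return vested_fraction(years_at_company)

# A hire leaving after 18 months walks away with nothing sellable,
# even though 25% of the grant has nominally vested.
print(vested_fraction(1.5), sellable(1.5, liquidation_event=True))  # 0.25 0.0
```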
> Amazon might be a good analogy here. I'm old enough to remember when Amazon absorbed billions of VC money, making losses year over year. Every day there was some new article about how insane it was.
When was this true? Amazon was founded in Jul 1994 and was a publicly listed company by May 1997. I highly doubt Amazon absorbed billions of dollars of VC money in less than 3 years in the mid-1990s.
As far as I can tell, they were roughly break-even until Amazon Web Services started raking it in.
buster 2 days ago [-]
I don't know about VC money, but Amazon was well known for spending all its revenue on growth and, to my understanding, no one understood why. Why buy stock in a company that doesn't make a profit?
Nowadays that's not unusual, but Jeff Bezos was laughed at for it. I think even on TV (some late-night show in the 90s).
Talk show segments are pre-planned to include "funny" quips and serve as marketing for the guests.
I was only a teenager, but I assume there have been lots of businesses throughout history that took more than 5 years to become profitable.
The evidence is that investors were buying shares valuing it in the billions. Obviously, this was 1999 and approaching peak bubble, but investing in a business for multiple years and waiting for it to earn a profit was not an alien idea.
I especially doubt it was alien to a mega-successful celebrity, so I would bet Jay Leno is 100% lying about "not understanding" in this quote; it is purely a setup for Bezos to respond and promote his business.
> “Here’s the thing I don’t understand, the company is worth billions and every time I pick up the paper each year it loses more money than it lost the year before,” says the seasoned talk show presenter, with the audience erupting into laughter off screen.
svnt 2 days ago [-]
They were run as a very unprofitable publicly traded company for a very long time; the parent probably just used "VC money" as rough metonymy for investor money generally.
tehjoker 2 days ago [-]
Amazon's moat is their logistics network, comprehensive catalogue, prices, and value adds (like Prime Video).
toddmorey 2 days ago [-]
Genuinely curious if anyone has ideas for how an LLM provider could moat AI. They feel interchangeable, like bandwidth providers, and it seems like it will be a never-ending game of leapfrog. I thought perhaps we'd end up with a small set of top players just based on the scale of investment, but now I've also seen impressive gains from much smaller companies & models, too.
scarmig 2 days ago [-]
1) Push for regulations around data provenance. If you train on anything, you have to prove that you had the rights to train on it. This would kill all but the largest providers in the USA, though China would still pose a problem. You could work around that bit by making businesses and consumers liable for usage of models that don't have proof of provenance.
2) If you had some secret algorithm that substantially outperformed everyone, you could win if you prevented leakage. This runs into the issue that two people can keep a secret, but three cannot. Eventually it'll leak.
3) Keep costs exceptionally low, sell at cost (or for free), and flood the market with that, which you use to enhance other revenue streams and make it unprofitable for other companies to compete and unappealing as a target for investors. To do this, you have to be a large company with existing revenue streams, highly efficient infrastructure, piles of money to burn while competitors burn through their smaller piles of money, and the ability to get something of value from giving something out for free.
sdesol 2 days ago [-]
> China would still pose a problem
Not if regulation prohibits LLMs from China, which isn't that far fetched to be honest.
I think LLMs will turn into a commodity product, and if you want to dominate with a commodity product, you need to provide twice the value at half the cost. OpenAI will need a breakthrough in reliability and/or inference cost to really create a moat.
callc 2 days ago [-]
If US regulation prohibits China from training LLMs? How?
If you mean trying to stop GPUs getting to China, the US has already tried that with specific GPU models, but China still gets them.
It seems hard or impossible to do, even if the US and the CCP were both trying to stop Chinese citizens and companies from doing LLM work.
sdesol 2 days ago [-]
Not prohibit training, but make it illegal for US companies to embed, use, or distribute LLMs created in China. Basically, the general consumer would need to go through the effort of seeking out the LLM they want, which we know only a small fraction of people will do.
seanmcdirmid 2 days ago [-]
That assumes that only the US is interested in using LLMs commercially, which isn't really true. Even if you can get America to sanction Chinese LLM use, you aren't going to get American allies to go along with that, let alone everyone else.
China's biggest challenge ATM is that they do not yet economically produce the GPUs and RAM needed to train and use big models. They are still behind in semiconductors, maybe 10 or 20 years (they can make fast chips, they can make cheap chips, they can't yet make fast cheap chips).
sdesol 2 days ago [-]
Canada would be reluctant. So would Europe, South Asia and so forth. The biggest hurdle for China is the CCP. It is one thing to use physical products from China but relying on the CCP for knowledge may be a step too far for many nations.
seanmcdirmid 2 days ago [-]
Most people won't care if the products are useful. Chinese EVs, Chinese HSR, Chinese industrial robots, Chinese power tech, they are already selling. LLM isn't just a chatbot, it could be a medical device to help in areas without sufficiently trained doctors, for example.
sdesol 2 days ago [-]
Most people may not care but that doesn't matter if the government cares. I will not be surprised if countries restrict LLMs to trusted countries in the future. Unless there is a regime change in China, seeing it adopted in other countries may be an issue.
seanmcdirmid 2 days ago [-]
It is a very large world, though, and nationalism is more of an American shtick at the moment. It is totally possible that countries will have to decide whether to trade with China or the USA (if the US puts down an effective embargo), but then it really depends on what America offers vs. China, and I don't think that is a great proposition for us.
majormajor 2 days ago [-]
Nationalism is not merely having a moment as an "American shtick", though I don't know how widespread it is in non-Western developing countries. It certainly might not be so much there.
The real possibility exists that it would be better to be an independent second-place technology center (or third place, etc.) than a pure consumer of someone else's integrated tech stack.
China decided that a long time ago for the consumer web. Europe is considering similar things more than ever before. The US is considering it with TikTok.
It's not hard to see that expanding. It's hard to claim that forcing local development of tech was a failure for China.
Short of a breakthrough that means the everyday person no longer has to work, why would I rather have a "better" but wholly-foreign-country-owned, not-contributing-anything-to-the-economy-I-participate-in-daily LLM or image generator or what-have-you vs a locally-sourced "good enough" one?
sdesol 2 days ago [-]
We are definitely heading into uncharted territory. It is one thing to use Chinese EVs, but relying on a knowledge system that will be censored (not that other countries' systems won't be) and trained in a way that may not align with a nation's beliefs is a whole different matter.
tw1984 2 days ago [-]
> using a knowledge system that will be censored (not that other countries won't be censored) and trained in a way that may not align with a nation's beliefs is a whole different matter.
this is the exact same story told to the public when Google was kicked out of China. you are just 15 years late for the party.
chii 2 days ago [-]
> They are still behind in semiconductors, maybe 10 or 20 years
I don't believe they're as far behind as many analyses deem. In fact, making it illegal to export Western chips to China only causes necessity, the mother of invention, to press harder and make it work.
seanmcdirmid 1 days ago [-]
They will definitely throw more resources at it, but without even older equipment from the west, they have a bigger hill to climb as well. There are lots of material engineering secrets that they have to uncover before they get there, so that’s just my estimate of what they need to do it. I definitely could be wrong though, we’ll see.
tw1984 2 days ago [-]
> China's biggest challenge ATM is that they do not yet economically produce the GPUs and RAM needed to train and use big models. They are still behind in semiconductors, maybe 10 or 20 years (they can make fast chips, they can make cheap chips, they can't yet make fast cheap chips).
You don't need the most efficient chips to train LLMs. Those much slower chips (e.g. those made by Huawei) will probably take longer for training, and they waste more electricity and space, but so what?
seanmcdirmid 1 days ago [-]
Economy isn’t efficiency, China has a yield problem on the chips they need, and that reduces their progress.
int_19h 2 days ago [-]
And lest we forget, China is not bothered by building as many nuclear power plants as it needs.
tw1984 2 days ago [-]
Because China is investing heavily in all sorts of renewable energy; its annual investment is more than the US and EU totals combined.
"China is set to account for the largest share of clean energy investment in 2024 with an estimated $675 billion, while Europe is set to account for $370 billion and the United States $315 billion."
Making something illegal isn't gonna work if there's clearly value in doing it and it isn't actually harmful. Regulatory unfairness is easily discerned.
Look at how competitive Chinese EVs are; no amount of tariffs is gonna stop them from dominating the market. Even if Americans keep their own market from being dominated, none of their allies will be able to do the same.
sdesol 2 days ago [-]
Like I've said before, this isn't a physical good that is being sold. An LLM is knowledge, and many governments and people are going to be concerned with how, and by whom, it is packaged. Using LLMs created in China will mean certain events in history are omitted (which will not be exclusive to China); how the LLM responds is dictated by its training, and so forth.
LLMs will become the ultimate propaganda tool (if they aren't already), and I don't see why governments wouldn't want to have full control over them.
manbart 2 days ago [-]
I'm pretty sure 3) is Meta's strategy currently
shalmanese 2 days ago [-]
In the early days of Google, people believed there could be absolutely no moat in search because competition was just "one click away" and even Google believed this and deeply internalized this into their culture of technological dominance as the only path to survival.
At the beginning of ride sharing, people believed there was absolutely no geographical moat and all riders were just one cheaper ride from switching so better capitalized incumbents could just win a new area by showering the city with discounts. It took Uber billions of dollars to figure out the moats were actually nigh insurmountable as a challenger brand in many countries.
Honestly, with AI, I just instinctively reach for ChatGPT and haven't even bothered trying any of the others because the results I get from OAI are "good enough". If enough other people are like me, OAI gets orders of magnitude more query volume than the other general-purpose LLMs, and they can use that data to tweak their algorithms better than anyone else.
Also, with current LLMs, the long-term user experience is pretty similar to the first-time user experience, but that seems set to change in the next few generations. I want my LLM to understand, over time, the style I prefer to be communicated in, learn what media I'm consuming so it knows which references I understand vs. those I don't, etc. Getting a brand-new LLM familiar enough with me to feel like a long-established one might be an arduous enough task that people rarely switch.
ksec 2 days ago [-]
>LLM familiar enough to me.....
The problem with ChatGPT is that they don't own any platform. Out of the 3 billion Android + ChromeOS users and ~1.5B iOS + Mac users, they have zero. Their only partner is Microsoft, with 1.5B Windows PCs. Considering that a lot of people only do work on a Windows PC, I would argue that personalisation comes more from the smartphone than the PC, which means Apple and Google hold the keys.
chrisvalleybay 2 days ago [-]
There's still one thing missing here: the browser. I do not agree with Gruber's analogy that the LLM is the browser; the interface to the LLM is the browser. We have seen some attempts at creating good browsers for LLMs, but we do not have the Netscape, IE/Edge, Chrome, FF, or Brave of LLMs yet. Once we do, you will very easily be jumping between these models, even letting the browser pick a model for you based on the type of question.
Also companies will be (and are) bundling these subscriptions for you, like Raycast AI, where you pay one monthly sum and get access to «all major models».
sumedh 2 days ago [-]
> The interface to the LLM is the browser
That is one of the reasons why ChatGPT has a desktop app: so that users can directly interact with it and give it access to their files/apps as well.
chrisvalleybay 22 hours ago [-]
But it isn't a browser, because it interfaces with only one single LLM. You need to have multiple models there (like visiting websites).
l33tman 2 days ago [-]
They all work the same, and each has its own pros and cons for each model they launch. Even the APIs are generic. It's a bit more difficult to lock in 3rd-party partners using your API when your API literally is English. It's going to be a race to the bottom where the value in LLMs is the underlying value of the GPU time they run on plus a few % markup.
barnabyjones 2 days ago [-]
It is really unbelievable how much money companies will spend to avoid talking to and thoroughly understanding their users. OpenAI could probably learn a lot from interviewing 200 random people every 6 months and seeing what they use and why, but my guess is they would consider that frivolous.
Jabrov 2 days ago [-]
UXR is a thing that all large companies invest in
ChadNauseam 2 days ago [-]
what makes you think that they don’t?
btilly 2 days ago [-]
Three ideas. All tried, none worked yet.
First, get government regulation on your side. OpenAI has already looked for this, including Sam Altman testifying to Congress about the dangers of AI, but didn't get the regulations that they wanted.
Second, put the cost of competing out of reach. Build a large enough and good enough model that nobody else can afford to build a competitor. Unfortunately a few big competitors keep on spending similar sums. And much cheaper sums are good enough for many purposes.
Third, get a new idea that isn't public. For instance, one on how to better handle complex goal-directed behavior. OpenAI has been trying, but has failed to come up with the right bright idea.
phodo 2 days ago [-]
One view is that it's not first-mover but first-arriver advantage. Whoever gets to AgI (the fabled city of gold, or silver? Ag pun intended) will achieve exponential gains from that point forward, and that serves as the moat in the limit. So you can think of it as buying a delayed moat, with an option price equivalent to the investment required until you get to that point in time. Either you believe in that view or you don't. It's more of an emotional/philosophical investment thesis, with a low probability of occurrence but a massive expected value. Meanwhile, consumers and the world benefit.
nick3443 2 days ago [-]
What if the AGI takes an entire data center to process a few tokens per second? Is there still a first-arriver advantage? It seems like the first to make it cheaper than an equivalent-cost employee (fully loaded, incl. hiring and training) will begin to see an advantage.
phodo 1 days ago [-]
Good point. It’s 2 conditions and both have to be true :
- Arrive first
- Use that first arrival to innovate with your new AGI pet / overlord to stay exponentially ahead
usrusr 2 days ago [-]
What if the next one to get there produces a similar service for 5% less? Race to the bottom.
And would an AI that is tied to some interface that provides lock-in even qualify to be called general? I have trouble putting my finger on it, but AGI and lock-in cause a strong dissonance in my brain. Would AGI perhaps strictly imply commodity (assuming that more than one supplier exists)?
scarmig 2 days ago [-]
Depending on how powerful your model is, a few tokens per second per data center would still be extraordinarily valuable. It's not out of the realm of possibility that a next generation super intelligence could be trained with a couple hundred lines of pytorch. If that's the case, a couple tokens per second per data center is a steal.
Scaevolus 2 days ago [-]
Exponential gains from AGI require recursive self-improvement and the compute headroom to realize them. It's unclear if current LLM architectures make either of those possible.
esafak 1 days ago [-]
People need to stop talking about "exponential" gains; these models don't even have the ability to improve themselves, let alone at this or that rate. And who wants them to be able to train themselves while being connected to the Internet anyway? I sure don't. All it takes for major disruption is superhuman ability at subhuman prices.
jhanschoo 2 days ago [-]
What does AGI even mean in this case? If progress toward more capable and more cost-effective agents is incremental, I don't see a defensible moat. (You can maintain a moat given continued outpaced investment, but following remains more cost-effective)
aoeusnth1 2 days ago [-]
Since we're talking about the economic impact here, AGI(X) could be defined as being able to do X% of white collar jobs independently with about as much oversight as a human worker would need.
The exponential gains would come from increasing penetration into existing labor forces and industrial applications. The first arriver would have an advantage in being the first to be profitably and practically applicable to whatever domain it's used in.
jhanschoo 2 days ago [-]
Why would the gains be exponential? Assume that X "first arrival" develops a model with a certain rnd investment, and Y arrives next year with investment that's an order of magnitude less costly by following, and there's a simple enough switchover for customers. That's what's meant by no defensible moat; a counterexample is Google up to 2022 where for more than a decade nothing else came close in value prop. Maybe X now has an even better model with more investment, but Y is good enough and can charge way less even if their models are less cost-effective.
jpc0 2 days ago [-]
> ... Google up to 2022 where for more than a decade nothing else came close in value prop. Maybe X now has an even better model with more investment ...
I was very confused at this point because I haven't really seen X as a competitor to Google's ad business, at least not in investment and value prop... Then I saw you were using X as a variable...
Jensson 2 days ago [-]
> The exponential gains would come from increasing penetration into existing labor forces and industrial applications
Only if they are much cheaper than the equivalent work done by humans, but likely the first AGI will be way more expensive than humans.
cs702 2 days ago [-]
Yes, "first to arrive at AGI" could indeed become a moat, if OpenAI can get there before the clock runs out. In fact, that's what's driving all the spending.
dartos 2 days ago [-]
None of that would matter if they could find the holy grail though.
throwup238 2 days ago [-]
Every time a new model comes out I ask it to locate El Dorado or Shangri-La for me. That’s my criteria for AGI/ASI.
Alas I am still without my mythical city of gold.
selimthegrim 2 days ago [-]
Somebody is going to write Wizard of Oz for all this and I’m for it.
BlueTemplar 2 days ago [-]
Who needs a most if a curtain is good enough ?
selimthegrim 12 hours ago [-]
Moat?
lolinder 2 days ago [-]
Regulatory capture. Persuade governments that AI is so dangerous that it must be regulated, then help write those regulations to ensure no one else can follow you up the ladder.
That's half the point of OpenAI's game of pretending each new thing they make is too dangerous to release. It's half directed at investors to build hype, half at government officials to build fear.
esafak 1 days ago [-]
If Elon Musk's pals regulate away OpenAI because OpenAI declared its own technology too dangerous, that would be an ironic turn.
minimaxir 2 days ago [-]
One way to build a technical moat is to build services which encourage lock-in, and therefore make it hard to switch to a competitor. Some of OpenAI's product releases help facilitate that: the Assistant API creates persistent threads that cannot be exported, and their File Search APIs build vector stores that cannot be exported.
brookst 2 days ago [-]
Create value at a higher layer and depend on tight integration to generate revenue and stickiness at both layers. See: Windows + MS Office.
They don’t need to make a moat for AI, they need to make a moat for the OpenAI business, which they have a lot of flexibility to refactor and shape.
CamperBob2 2 days ago [-]
Genuinely curious if anyone has ideas how an LLM provider could moat AI.
Patents. OpenAI already has a head start in the game of filing patents with obvious (to everybody except USPTO examiners), hard-to-avoid claims. E.g.: https://patents.google.com/patent/US12008341B2
2 days ago [-]
Jyaif 2 days ago [-]
> curious if anyone has ideas how an LLM provider could moat AI
By knowing a lot about me: the details of my relationships, my interests, my work. The LLM would then be able to function better for me than the other LLMs. OpenAI has already made steps in that direction by learning facts about you.
By offering services only possible through integration with other industries, like restaurants, banks, etc. This takes years to do, and other companies will take years to catch up, especially if you set up exclusivity clauses. There are lots of ways to slow down your competitors when you are the first to do something.
chrisvalleybay 2 days ago [-]
It is better to leave this up to the «LLM browser» than to the LLM itself, both for privacy and for portability.
seanmcdirmid 2 days ago [-]
Create closed source models that are much better than the other ones, don't publish your tricks to obtaining your results, and then lock them down in a box that can't be tampered with? I hope it doesn't go that way.
Alternatively, a model that takes a year and the output of a nuclear power plant to train (and then you can tell them about your tricks, since they aren't very reproducible).
marcyb5st 2 days ago [-]
An algorithmic breakthrough IMHO. If someone finds out either how to get 10x performance per parameter or how to have a model actually model real causality (to some degree) they will have a moat.
Also, I suspect that the next breakthrough will be kept under wraps and no papers will be published explaining it.
andy_ppp 2 days ago [-]
The people working on it will still be allowed to move companies and people talk to each other informally. These Chinese groups working on this with far fewer GPUs appear to be getting 10x results tbh. Maybe they have more GPUs than claimed but we’ll see.
mmaunder 2 days ago [-]
DeepSeek's recent progress is a case in point: V3 reportedly cost only $5.5M to train.
wmf 2 days ago [-]
You'd have to have either a breakthrough algorithm (that you keep secret forever) or proprietary training data.
malux85 2 days ago [-]
Legislation, if you can’t compete on merit then regulate.
adamnemecek 2 days ago [-]
Spend money developing proprietary approaches to ML.
n_ary 2 days ago [-]
I believe their brand, "ChatGPT", is the moat. Here in the EU, every news outlet and random strangers on the street who don't understand anything beyond TikTok also know that AI = ChatGPT. The term is so synonymous with AI that I am willing to bet when someone tells someone else about AI, they say ChatGPT, and the new person will search for and use ChatGPT without any knowledge of Claude/Gemini/Llama et al. This is the same phenomenon as web search now being = Google, despite other prominent options existing.
The competitors are mostly too narrow (programming/workflow/translation etc.) and not interesting.
otterley 1 days ago [-]
A brand isn't much of a moat: MySpace's brand was a moat for a few years. Then Facebook came and ate their lunch.
florakel 2 days ago [-]
Why is everyone so sure that technology can’t be the moat? Why are you convinced that all AI systems will be very similar in performance and cost, hence be interchangeable?
Isn’t Google the perfect example that you can create a moat by technological leadership in a nascent space? When Google came along it was 10X better than any alternative and they grabbed the whole market. Now their brand and market position make it almost impossible to compete.
I guess the bet is that one of the AI companies achieves a huge breakthrough and a 10X value creation compared to its competitors…
esafak 1 days ago [-]
ChatGPT isn't ten times better than Gemini or Claude, is it? Even if they magically released such a model, the competition would quickly catch up. The competition has similar or better resources.
amelius 2 days ago [-]
Maybe because for 99.9% of users (usecases) what today's llm technology offers is already good enough?
Or maybe nvidia has the moat. Or silicon fabs have it.
neals 2 days ago [-]
I get the moat idea. But are there really any moats? What's a good moat these days anyway? Isn't being first and "really good" fine as well?
ninth_ant 2 days ago [-]
The Network effect, and cultural mind share are two pretty effective moats.
Meta and X have proven surprisingly resilient in the face of pretty overwhelming negative sentiment. Google maintains monopoly status in web search and in browsers despite not being remarkably better than the competition. Microsoft remains overwhelmingly dominant in the OS market despite having a deeply flawed product. Amazon sells well despite a proliferation of fake reviews and products. Netflix thrives even while cutting back sharply on product quality. Valve has a near-stranglehold on PC game distribution despite its tech stack being trivially replicable. The list goes on.
nemomarx 2 days ago [-]
To be fair despite Valve's tech stack being easy to replicate, their actual competition mostly hasn't replicated their feature set in full? Epic took a while to ramp up to "shopping carts" despite having a pretty large funding model, still doesn't have little gimmicks like trading cards and chat stickers, w/e. That's not really moat but it seems like the competition doesn't want to invest to exact parity.
(And a lot of stores like Ubisofts or EA's were very feature lite tbh.)
MoreMoore 2 days ago [-]
I have no idea how the other commenter could come to the conclusion that Steam is trivially replaceable.
jay_kyburz 2 days ago [-]
Yeah, on face value, Epic looks like they want to compete with Steam, but the developers of the Epic Game store are really phoning it in.
They have had years and are still not even close, on both the consumer side and the developer side.
skydhash 2 days ago [-]
Almost all of these are either free or a marketplace (which is free to access). Only Windows is technically not free, but it comes free with the hardware you buy. And they all arrived at a time when there was no real competition. It's very hard to beat a free product.
cloverich 2 days ago [-]
Good point. Thinking... Facebook's infrastructure is enormously expensive to run, but they manage to offer it for free. And ChatGPT can place ads as easily as Google. So OpenAI needs to make it cheap to run, then add ads, then victory.
People citing their current high prices would be right, but human brains are smarter than ChatGPT and vastly more energy efficient, so we know it's possible.
Does this oversimplify?
MoreMoore 2 days ago [-]
Steam isn't trivially replaceable. What are you talking about? No other platform matches them on feature set.
Amazon also has a significant advantage in the logistics network that underpins its entire business across the globe and that nobody else can match.
You're also wrong about how Google maintains its monopoly, or Microsoft.
All I see is bias and an unwillingness to understand, well, any of the relevant topics.
kridsdale1 2 days ago [-]
Network effects are a moat.
A vertically integrated system that people depend on with non-portable integrations is a moat.
Regulatory Capture is a moat.
trhway 2 days ago [-]
What is the moat of Google Search? To me, the LLM is the first and only disruptor of Search so far.
andrewflnr 2 days ago [-]
It used to be their high search quality, backed by difficult-to-replicate technology. Now they don't have that.
mcphage 2 days ago [-]
Well, Google’s willingness to pay potential competitors tens of billions of dollars per year to disincentivize developing a competing search engine is kind of a moat.
asadotzler 2 days ago [-]
Their moat is lock-in across services now. People are "Google" users not just "web search" users.
2 days ago [-]
brookst 2 days ago [-]
Google’s moat is eroding (filling?), but over the years it shifted from best product/tech to best brand.
Ekaros 2 days ago [-]
Google is still good enough for most searches: say, finding the web page of a restaurant or some cursory information on a popular topic.
I suppose a competitor would have to be really good in those moments when most users need something better.
So it is the brand and familiarity. It would need to get really bad even on most basic things to be replaced.
discordance 2 days ago [-]
muscle memory
GolfPopper 2 days ago [-]
Here's an idea for a moat: "You know all the ethically extremely questionable grabbing of personal data and creative work we did to build our LLM? Yeah, that's illegal now. For anyone else to do, that is."
otterley 1 days ago [-]
Copyright is an outstanding moat. Think Disney.
mrtksn 2 days ago [-]
> In 1995 investors mistakenly thought investing in Netscape was a way to bet on the future
Maybe they were just too early, later on it turned out that the browser is indeed a very valuable and financially sound investment. For Google at least.
So having a dominant market share can indeed be valuable even if the underlying tech is not exactly unobtainable by others.
rchaud 1 days ago [-]
A browser would be worthless to Google without DoubleClick ad network monopoly, which they acquired years before Chrome. Netscape tried to charge for their browser and stopped because Microsoft was giving away Internet Explorer free with every Windows PC.
fmap 2 days ago [-]
Adding to this, inference is getting cheaper and more efficient all the time. The investment bubble is probably the biggest reason why inference hardware is so expensive at the moment and why startups in this sector are only targeting large scale applications.
Once this bubble bursts, local inference will become even more affordable than it already is. There is no way that there will be a moat around running models as a service.
---
Similarly, there probably won't be a "data moat". The whole point of large foundation models is that they are great priors. You need relatively few examples to fine tune an LLM or diffusion model to get it to do what you want. So long as someone releases up to date foundation models there is no moat here either.
yalogin 1 days ago [-]
This is what I have been asking people in the know too. It seems like developing this is straightforward. So I'm not sure why OpenAI has a lead. And with how fast things are moving, the gap should be instantly closed.
chii 2 days ago [-]
> OpenAI has a short-ish window of opportunity to figure out how to build a moat.
and they are probably going to go with regulatory moat over technical moat.
BeefWellington 2 days ago [-]
> OpenAI has a short-ish window of opportunity to figure out how to build a moat.
See the scramble to make themselves arbiters of "good AI" with regulators around the world. It's their only real hope but I think the cat's already out of the bag.
boh 1 days ago [-]
Honestly the more I use ChatGPT, the more I see of their "moat-ish" opportunities. While the LLM itself may be ultimately reproducible, the user experience needs a UI/UX wrapper for most users. The future is not user exposure to "raw" LLM interactions but rather a complex workflow whereby a question goes through several iterations of agents, deterministic computation using "traditional" processing, and figuring out when to use what and have it all be seamless. ChatGPT is already doing this but it's still far from where it needs to be. It's totally possible for a company to dominate the market if they're able to orchestrate a streamlined user experience. Whether that will ultimately be OpenAI is an open question.
Nasrudith 2 days ago [-]
I guess that explains all of the 'our product is an existential threat' chuunibyou bullshit. Trying to get a regulatory moat instead.
cyanydeez 1 days ago [-]
...or they could be Wikipedia.
But greed is good!
numpy-thagoras 2 days ago [-]
OpenAI's engineers: brilliant, A+, A* even.
OpenAI's C-suite: Well, they earned the C, but it was a letter grade.
What a profoundly unimaginative strategy. No matter the industry, a large-scale diversion of resources towards a moonshot goal will likely get you to that goal. They haven't made an argument as to why we should do that just for them, especially with all of the other alternatives.
And no, advertising your previously secret testing models (e.g. o3) as if they were market competitors is not how to prove they should have our money.
abc-1 2 days ago [-]
> OpenAI's engineers: brilliant, A+, A* even.
Then why do they look about average next to Anthropic, Mistral, DeepSeek, and many others of that cohort, despite having 100x the resources?
numpy-thagoras 1 days ago [-]
Praising one doesn't mean condemning another.
ryanwaggoner 2 days ago [-]
”They haven’t made an argument as to why we should do that just for them…”
What “we”? They’ve already raised billions, and I suspect they’re about to succeed at raising tens of billions more, despite the skepticism of random HN users.
numpy-thagoras 1 days ago [-]
Tens of billions? Sama has been talking about how their ask can be for over a trillion dollars invested. What "we"? That trillion is going to come out of our pockets one way or another, either as costs passed down, as government handouts from our tax dollars, as redirected investments that dry up opportunities for SWEs elsewhere, etc.
myhf 2 days ago [-]
Unimaginable Sums of Money Are All You Need
The Unreasonable Effectiveness of Huge Sums of Money in the Natural Sciences
Unimaginable Sums of Money Considered Harmful
SignalsDoctor 1 days ago [-]
The Subtle Art of Not Giving a F*ck: A Counterintuitive Approach to Unimaginable Sums of Money
seydor 2 days ago [-]
All we need is unreasonably effective paper titles
theideaofcoffee 2 days ago [-]
How I Learned to Stop Worrying and Love Unimaginable Sums of Money
yobbo 2 days ago [-]
Unimaginable Sums of Money: For Fun And Profit
ghshephard 2 days ago [-]
In all fairness regarding his comment about Netscape: anyone who invested in Netscape at the IPO (and certainly before the IPO) at $2.9B made a ton of money on the Internet. On the last day of trading, after the AOL exit 4 years later, it was worth (cash, stock, etc.) around $10B. Thank Mike Homer for pushing that one through, and Jim Barksdale for being savvy enough to recognize it as the right play.
asadotzler 2 days ago [-]
Also, they started from scratch. They didn't take NCSA's Mosaic public; they left the university, brought in some established players, created a new company, and re-wrote the entire browser from scratch. SamA wants to take everything the non-profit created and just make it for-profit. If he wants to do that, he can create a new company, lure away the talent, start from scratch, and run from there. This "it's just like Mosaic -> Netscape" line is total bullshit, and either Gruber doesn't remember like I do or he's intentionally misleading for some other goal.
MndlshnDscpl 2 days ago [-]
Agreed. I think originally, it would have been more accurate to say IE was based on Mosaic. If I recall correctly, Microsoft bought out Spyglass Mosaic to base IE on, and that browser had been licensed from the NCSA. Netscape, on the other hand, had originally been Mosaic Communications (anyone remember home.mcom.com?) and changed the name when they did the clean rewrite. I think the name Mozilla came about because they were looking for a 'Mosaic Killer' or something along those lines. Memories are kind of fuzzy so I'm sure someone on here has a better recollection.
asadotzler 2 days ago [-]
Yes. Microsoft licensed a version of Mosaic from Spyglass, who got a license from NCSA. The name change had nothing to do with the re-write. They never used Mosaic except in the name, and were facing a suit from NCSA or the university (I can't remember) so they changed it. You are correct about Mozilla. I worked for Mozilla for 25 years and can attest to that.
Microsoft didn't buy it from Spyglass, they licensed it and agreed to pay royalties per copy. Then they gave it away for free to avoid paying royalties [0] while illegally sabotaging Netscape's business at the same time.
This is why I don't think LLMs will lead anywhere other than the present: they are very expensive. If the top LLM company in the world can't bring those costs down in a sensible amount of time, that tells you everything you need to know about the future of LLMs. Rest assured, once OpenAI goes down (and it really sounds like they are way past the peak), valuations for other LLM companies will follow. And once they can't raise funding, none of them aside from those backed by big conglomerates (see Google's Gemini and Facebook's Llama) will survive.
The long-term future of LLMs sure looks like the LLMs themselves will be commodities, and the real value will lie in how those LLMs are used to deliver products.
bravura 2 days ago [-]
People keep saying stuff like this, but I don't believe it: "There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble."
AI progress is driven by strong, valuable data. Detailed, important conversations with a chatbot are much more valuable than quick search queries. As LLMs extend past web UIs, there is even more interaction data to capture and learn from.
The company that captures the most human-AI interaction data will have a TREMENDOUS moat.
staticman2 2 days ago [-]
Is conversation data really that valuable?
If I have the LLM translate a text from French to English... what is there to learn from that? Maybe the translation is great maybe it's awful, but there's no "correct" translation to evaluate the LLM against to improve it.
If I ask the chatbot for working code and it can't provide it, again, there's no "correct" code to train it against found in the conversation.
If I ask an LLM to interpret a bible passage, whether it does a good job or a terrible job there's no "correct" answer that the provider has to use as the gold standard, just the noise of people chatting with arbitrary answers.
motoxpro 2 days ago [-]
When will this come to pass? OpenAI has many orders of magnitude more conversational data, and Anthropic just keeps catching up. Until there is some evidence (OpenAI winning, or Google winning, rather than open source catching up), I don't believe this is true.
ijustlovemath 2 days ago [-]
What do you think they've been building these multimodal models with? If I had Google application logs from the past few decades I would absolutely be transforming them and loading them into my training dataset. I would not be surprised if Google/Meta/Msft are doing this already.
When the big companies say they're running out of data, I think they mean it literally. They have hoovered up everything external and internal and are now facing the overwhelming mediocrity that synthetic data provides.
skydhash 2 days ago [-]
Digital data are only a tiny part of the influx of information that people interact with. It's the same error platforms made with releasing movies straight to their services. Yes, people watch more movies at home, but going to cinemas is a whole experience that encompasses more than just watching a movie. Yes, books are great, but traveling and tutoring is much more impactful.
ijustlovemath 2 days ago [-]
Sorry, what does that have to do with an OpenAI tech moat?
skydhash 2 days ago [-]
>>> The company that captures the most human-AI interaction data will have a TREMENDOUS moat.
>> When the big companies say they're running out of data, I think they mean it literally. They have hoovered up everything external and internal and are now facing the overwhelming mediocrity that synthetic data provides.
> Digital data are only a tiny part of the influx of information that people interact with.
ijustlovemath 2 days ago [-]
I'm not sure how you'd get that non-digital data, though. Fundamentally that sounds like a process that doesn't scale to the level they need. Can you explain more?
skydhash 2 days ago [-]
Sorry, I wasn't clear enough. I'm saying that for most problems, a lot of the relevant data is not digital. I'm a software developer, and most of the time the task is to translate some real-world process into a digital equivalent. But most of the time, you lose the richness of interactions to gain repeatability, correctness, speed,...
So what people bother writing down is just a pale reflection of what has been; the reader has to rely on his experience and imagination to recreate it. If we take drawing for example, you may read all the books on the subject, but you still have to practice to properly internalize that knowledge. Same with music, or even pure science (the axioms you start with are grounded in reality).
I believe LLMs are great at extracting patterns from written text and other forms of notation. They may even be good at translating between them. But as anyone who is polyglot may attest, literal translation is often inadequate because a lot of terms are not equivalent. Without experiencing the full semantic meaning of both, you'll always be at risk of being confusing.
With traditional software, we were the ones providing meaning so that different tools could interact with each other (when I click this icon, a page will be printed out). LLMs are mostly translation machines, with just a thin veneer of syntax rules and term relationships but no actual meaning, because of all the information they lack.
ijustlovemath 2 days ago [-]
I actually think LLMs power comes as a result of their deep semantic understanding. For example, embeddings of gendered language, like "king" and "queen," have a very similar vector difference to "man" and "woman". This is true across all sorts of concepts if you really dive into the embeddings. That doesn't come without semantic understanding.
As another example, LLMs are kind of magical when it comes to what I'd call "bad memory spelunking". Is there a video game, book, or movie from your childhood, which you only have some vague fragments of, which you'd like to rediscover? Format those fragments into a request for a list of candidates, and if your description contains just enough detail, you will activate that semantic understanding to uncover what you were looking for.
I'd encourage you to check out 3blue1brown's LLM series for more on this!
I think it's true they lack a lot of information and understanding, and that they probably won't get better without more data, which we are running out of. That's sort of the point I was originally trying to make.
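The "king"/"queen" vector arithmetic mentioned a few comments up can be illustrated with toy vectors. The numbers below are made up for illustration; real model embeddings have hundreds of dimensions:

```python
import math

# Hand-picked 3-d "embeddings" chosen so the gender offset is the same
# for (king, queen) and (man, woman). Purely illustrative values.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.7],
    "man":   [0.1, 0.8, 0.1],
    "woman": [0.1, 0.2, 0.7],
}

def sub(a, b):
    # element-wise vector difference
    return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    # cosine similarity: 1.0 means the vectors point the same way
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# king - queen and man - woman point in nearly the same direction,
# which is what "similar vector difference" means in the comment above.
d1 = sub(emb["king"], emb["queen"])
d2 = sub(emb["man"], emb["woman"])
print(round(cosine(d1, d2), 3))  # → 1.0 for these toy values
```

In real embeddings the similarity is high but not exactly 1.0; the toy values here are constructed so the offsets match perfectly.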
mrbungie 2 days ago [-]
Data is becoming a commodity in this regard. That can't be really their moat when Google, Anthropic, etc are publishing similar products.
asadotzler 2 days ago [-]
Well, once the powerful steal it all, of course it becomes commoditized. Had they been required to abide by the law, something that few Silicon Valley VC-backed companies really worry about, this kind of "information" would not be a commodity; it'd be a lucrative property. But Silicon Valley doesn't give a fuck about anyone, so they just stole it all and now it's basically worthless as a result.
robertlagrant 1 days ago [-]
> something that few Silicon Valley VC-backed companies really worry about
Is that actually true?
paxys 2 days ago [-]
Yup. We have seen time and again that companies don't need a "technical moat" to stay in the lead. First mover advantage and a 1-2 year head start is always enough. Of course they also need to keep their foot on the gas and not let their product get overtaken. With all the talent OpenAI has I'm pretty sure they will manage.
niemandhier 2 days ago [-]
Investors observed the pattern that the largest platform eats the market and bet on openAI moving along the same paths.
I disagree: that phenomenon is tied to the social-network-like dynamics we saw with WhatsApp and Facebook, and to the aggregator business model of Amazon and Google.
Mathematically we can describe the process by which these monopolies form as nodes joining graphs:
Each node selects graphs to connect to in such a way that the probability is proportional to the number of nodes in the graph.
Sure Amazon and google feature two types of nodes, but the observation still fits:
Selling on Amazon makes sense if there are many customers, buying on Amazon makes sense if there are many offers.
OpenAI's business does not have this feature; it does not intrinsically get more attractive by having more users.
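The node-joining process described above is preferential attachment, and a short simulation (with made-up parameters) shows why it tends to produce one dominant platform:

```python
import random

# Preferential attachment: each newcomer joins an existing network with
# probability proportional to that network's current size.
def simulate(networks=5, newcomers=10_000, seed=42):
    rng = random.Random(seed)
    sizes = [1] * networks  # every network starts with a single node
    for _ in range(newcomers):
        # random.choices weights the selection by current size
        chosen = rng.choices(range(networks), weights=sizes)[0]
        sizes[chosen] += 1
    return sorted(sizes, reverse=True)

sizes = simulate()
# The early leader keeps attracting a disproportionate share of newcomers,
# so the final distribution is heavily skewed toward one network.
print(sizes)
```

The point of the sketch is the skew, not the exact numbers: rerunning with different seeds still leaves one network far ahead of the rest, which is the "largest platform eats the market" pattern.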
ryao 2 days ago [-]
I once had a colleague that told me in order to make a small fortune, all you need to do is begin with a large fortune.
sumedh 2 days ago [-]
Was your colleague Richard Branson?
ryao 1 days ago [-]
No.
Gys 2 days ago [-]
A perfect fit for Softbank. I do not understand why a 'dream deal' between them has not happened yet?
epolanski 2 days ago [-]
I think their vision is as it should be ambitious, but I don't believe they can gain any real technical moat.
Models are commodities. Even if OpenAI goes through another major breakthrough, nothing can stop some of their employees from moving to other companies, or founding their own, and replicating or bettering OpenAI's results.
In fairness I realize that I don't use any of OpenAI's models. There are better alternatives for coding, translating or alternatives that are simply faster or cheaper (Gemini) or more open.
brookst 2 days ago [-]
Technical moats are the worst ones. The absolute best you can hope for is patents, but those take years to issue and then you have to fight after the fact. Trade secrets are nice but hard to defend.
The best moats are scale (Walmart), brand (“Google” means search), network effects (Facebook, TikTok). None of those are perfect but all are better than just having better tech.
otherme123 2 days ago [-]
Facebook and TikTok could collapse in under a year; that has happened before. Network effects seem very brittle: AIM and Fotolog don't exist anymore, and they were huge. MySpace is a shadow of what it was.
OTOH, WD-40 has had a technical moat since 1953, without a patent. There are a number of companies who rely on a technical moat mixed with an excellent technical image: Makita or Milwaukee Tool come to mind. You can also have a company based on a technology moat that theoretically shouldn't exist (patents expired, commodity products), but it does (Loctite, 3M).
janice1999 2 days ago [-]
Where is all the money going? Nvidia hardware?
KennyBlanken 2 days ago [-]
That and power. The GB200 needs 1000W per GPU.
Eight racks of GB200's? You're talking about 1 megawatt of power. Well over that if you include PSU losses, cooling, and all the networking, storage, memory...
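The arithmetic roughly checks out, assuming a GB200 NVL72 configuration with 72 GPUs per rack (an assumption; the comment doesn't specify the rack layout):

```python
# Back-of-envelope power for eight GB200 racks.
# Assumption: NVL72 configuration, 72 GPUs per rack, ~1 kW per GPU.
gpus_per_rack = 72
watts_per_gpu = 1_000
racks = 8

gpu_watts = racks * gpus_per_rack * watts_per_gpu
print(gpu_watts)  # 576000 W for the GPUs alone

# A full NVL72 rack (CPUs, networking, cooling, PSU losses included) is
# commonly quoted at roughly 120 kW, so eight racks land near a megawatt.
total_watts = racks * 120_000
print(total_watts)  # 960000 W, i.e. ~1 MW
```

The 120 kW/rack figure is a widely cited ballpark rather than a spec from the comment, but it is consistent with the "1 megawatt for eight racks" claim above.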
smallmancontrov 2 days ago [-]
Spatulas in Jensen's decorative jar.
jazzyjackson 2 days ago [-]
IP Lawyers and Nuclear Lobbyists, presumably
esafak 1 days ago [-]
If we get nuclear fusion and better GPUs as a byproduct of enabling LLMs, great. We haven't had this kind of dynamic since the heft of Windows pushed Intel to deliver faster processors.
tempodox 2 days ago [-]
Not to mention all the spin doctors they need to conjure up a regulatory moat in absence of a technical one.
rsrsrs86 2 days ago [-]
And compute.
jgalt212 2 days ago [-]
Yes, but the most expensive CPU doesn't cost $40K.
tiffanyh 2 days ago [-]
> defensible moat … investors mistakenly thought investing in Netscape was a good way to bet on the future
Yet Chrome for Google did help create a moat.
A moat that's so strong the DoJ is investigating whether Chrome should be a forced divestiture from Google/Alphabet.
Note: I do generally agree with the article, but this also shows why you shouldn’t use analogies to reason.
LittleTimothy 2 days ago [-]
I don't think you're analyzing this right. The web browser itself has no defensible moat. The reason that Chrome is winning is because Google has a massive money printer in the basement run by 3 guys and a hungry Alsatian (trained to bite anyone who messes with the money printer), and then 100,000 smart engineers desperately trying to find some application to throw that money at.
The result is Chrome. Google invests in it because they don't want any intermediation between them and eyeballs; it's a strategic play, not necessarily a direct money maker. So that's why they do it. But there is no moat: they have the number 1 browser because they spend enormous sums of money hiring top engineers to build the number 1 browser.
And this has been true of the browser for a long time, Internet Explorer won because they forced people to use it and used a tonne of anti-competitive practices to screw the competition. Firefox still managed to disrupt that simply by building a better product. This is the clear evidence there is no moat - the castle has been stormed repeatedly.
alexey-salmin 2 days ago [-]
The moat for Chrome is "it's free because it's subsidized by Google's search monopoly". Before that, the moat for Explorer was "it's free because it's subsidized by Microsoft's OS monopoly"; that's what killed Netscape, they couldn't subsidize Navigator.
OpenAI can't do anything comparable to Google at this point because they have no other product. If anything they're more like Netscape.
scarface_74 2 days ago [-]
Google was already dominant before Chrome. Why would anyone want to buy Chrome instead of just basing a browser on Chromium? How are they going to monetize it?
yodsanklai 2 days ago [-]
How do they define success after we gave them all the capital they're requesting?
fragmede 2 days ago [-]
According to 'The Information', AGI will be achieved only when OpenAI's AI systems generate profits exceeding $100 billion, so I'm guessing, for them, AGI = success.
benterix 2 days ago [-]
Frankly, OpenAI's efforts seem quite funny to me personally, as for most tasks I do Claude is far superior, and yet OpenAI behaves as if they were the only game in town.
The so-called open source models are getting better and better and even if OpenAI suddenly discovered some new tech that would allow for another breakthrough, it will be immediately picked up by others.
sumedh 2 days ago [-]
> yet OpenAI behaves as if they were the only game in town.
The data backs it up: Anthropic makes most of its revenue from the API, while ChatGPT makes most of its revenue from the Plus plan.
betimsl 4 hours ago [-]
Don't we all?
dheera 2 days ago [-]
Plot twist: OpenAI uses their GPU farms to mine cryptocurrency so that they get those unimaginable sums of money
jgalt212 2 days ago [-]
That would be funny. Microstrategy with one level of abstraction.
YetAnotherNick 2 days ago [-]
> There is no technical moat in this field
This is getting so repetitive now that it is stated as a truism.
Isn't it the same bet Yahoo was making in 2000, that it would win because its product branding was better? And now Yahoo's and Microsoft's search engines are worse than Google from two decades ago.
scarmig 2 days ago [-]
Google's search engine is worse than Google from 2 decades ago.
otterley 1 days ago [-]
Is it Google's search engine, or is it that there was less pollution on the Internet two decades ago?
bloomingkales 2 days ago [-]
Unimaginable sums of money can only come from oil rich countries. They are probably just mentally trying to accept that they are going to take the money from these nations.
The word "open" is still under threat in this scenario too.
andrewstuart 2 days ago [-]
>> OpenAI currently offers, by far, the best product experience of any AI chatbot assistant.
Claude is noticeably better
dmazin 1 days ago [-]
I use Claude because the model is indeed better, but having listened to John's podcast I know that he means that the experience of using OpenAI overall is better.
hebrox 2 days ago [-]
And using Claude with Cline, partially because of prompt caching, it's noticeably cheaper as well.
robertlagrant 2 days ago [-]
This doesn't seem to say much to back its comparisons. Wouldn't spending a giant amount of money create some sort of moat?
mvdtnz 2 days ago [-]
From what I'm reading of the industry, no not really. The laughably huge investments seem to buy OpenAI approximately a 6 month head start over open source models, which is not much.
robertlagrant 2 days ago [-]
Why is this, though? Are they wasting the money? Or are their discoveries instantly disseminated to their competitors? Or are the open source models just currently sponsored by rich people?
AnimalMuppet 2 days ago [-]
Depends. If they manage to actually find a way to create an AGI, then a six month head start might be massive at that point. Until then... not so much.
wmf 2 days ago [-]
Not if your competitors are spending the same amount (or spending vastly less to get similar results in the case of DeepSeek).
wmf 2 days ago [-]
I see some people talking about OpenAI "stealing" IP from the nonprofit side, but remember that they went "capped profit" back in 2019. It's possible that GPT-3 and later were developed entirely by the capped-profit side anyway. I doubt there's much if any pre-2019 IP left.
onlyrealcuzzo 2 days ago [-]
There is no profit yet...
Jensson 2 days ago [-]
But they are expected to make profits in the future, that is what investments into for-profits are for.
asadotzler 2 days ago [-]
Did they license and pay for all of the non-profit work they utilized in GPT-3? OR are you claiming it was a clean-room re-write? I think you're pretty naive if you think the latter happened.
wmf 2 days ago [-]
I think they could have done just enough of both to plausibly neutralize Elon's lawsuit.
resonious 2 days ago [-]
The paraphrasing is very comical, but a bit different from what they actually said, which is more like "more than we had imagined".
thefz 2 days ago [-]
Well, it's Gruber, not really known for being a decent blogger. He's about as biased a person as you can get, acting like a prophet on the internet.
ahel 2 days ago [-]
Why did Meta decide to open source their Llama architecture and complete models?
From a strategic business perspective, I'm trying to puzzle out why they would give that up as a competitive advantage (even if it might not be as good as OpenAI's product), rather than directly competing with OpenAI. What's the reasoning behind this decision?
dmazin 1 days ago [-]
Meta is doing classic (and brilliant) "commoditize your complements".
LLMs and AI are not where Meta adds value to the market. They add value via advertising. Therefore, AI is not Meta's product, and thus Meta should aim to make AI as cheap as possible.
They did the same thing for infra. Meta doesn't sell cloud computing, so they want to make infra as cheap as possible.
apbytes 2 days ago [-]
I'd imagine that Meta is trying to avoid an AI / LLM monopoly from happening as the primary goal.
They suffered when Apple was a gateway to their service for ios users and decided to shut their access to user data off.
And AI is clearly going to be used widely to aid in content generation, if not do it entirely. Also, they tried jumping on the chatbot hype with M or something; that didn't pan out so well.
By opening up AI, they would enable tools that allow users to pump out more content easily and spend more time in app.
Getting the goodwill of dev community is a bonus.
A brilliant move from Meta.
eden-u4 2 days ago [-]
Meta is large enough (and diversified enough) that when OpenAI's valuation goes to pennies (because LLMs won't solve any real problem, nor will they remain relevant in the long term, and open source alternatives are more than enough for most chatbot use cases), they can acquire all the relevant technologies from OpenAI, Anthropic, and such. The real question is why Google didn't release their models.
yodsanklai 2 days ago [-]
> because LLM won't solve any real problem,
LLMs in their current form are very useful tools for many people. Most programmers I know use them daily. I have even friends who aren't tech savvy who paid the subscription to ChatGPT and use it all the time.
asadotzler 2 days ago [-]
Gruber's shilling some here. Andreessen et al. started over from scratch when they created Netscape. They didn't use any of the NCSA code. It was a ground-up re-write.
SamA wants to take everything the non-profit built and use it directly in HIS for profit enterprise. Fuck him.
kragen 2 days ago [-]
Who and what do you think Gruber is shilling for? I don't see Gruber saying anything that might tempt anyone to part from some of their money. He seems to be doing the diametrical opposite of shilling.
RajT88 2 days ago [-]
Well, who does Gruber usually shill for?
The question is - why would that one company want this hot take out there?
bryant 2 days ago [-]
I'll bite because I find this compelling: if Apple is trying to build momentum for its Private Cloud Compute (https://security.apple.com/blog/private-cloud-compute/), it makes sense to try and get ahead of anything that could enable a competitor to build something similar. I'd think OpenAI getting billions shoveled their way in future investment just to throw darts at building any kind of moat would qualify, because for all Apple knows, OpenAI might eat their Private Cloud Compute lunch.
Granted I'm not sure there's much of anything to substantiate this, but I imagine Apple would know better from competitive intel.
(I feel a bit bad that my mind also immediately connects "Gruber" to "Apple" in less-than-savory ways, but alas, that's the reputation he built for himself.)
RajT88 2 days ago [-]
> I feel a bit bad that my mind also immediately connects "Gruber" to "Apple" in less-than-savory ways, but alas, that's the reputation he built for himself.
Yes this! I won't say how I've come to know, but if you have ever wondered "is this guy on the take from Apple as part of their fuckery?" - the answer is Yes.
asadotzler 2 days ago [-]
The "Silicon Valley can do no bad" crowd. He's making it sound like Altman's just following a well trod path here when they're clearly doing something novel and actually quite evil.
kragen 2 days ago [-]
He calls investing in OpenAI "wishful thinking" and a "Ponzi scheme", and as I read it, he's also criticizing the thing you're criticizing, though somewhat less harshly. He just seems to be under the mistaken impression that Mosaic Communications Corporation was a licensee of Mosaic like Spry was, which, well, you can see how that misunderstanding could have arisen.
mcphage 2 days ago [-]
> He's making it sound like Altman's just following a well trod path
Well trodden and failing path. You misread the article if you considered it any endorsement of OpenAI’s plan.
raincole 2 days ago [-]
He literally wrote this:
> OpenAI’s board now stating “We once again need to raise more capital than we’d imagined” less than three months after raising another $6.6 billion at a valuation of $157 billion sounds alarmingly like a Ponzi scheme
Ponzi scheme. If he's shilling something I guess it's "shorting".
mvdtnz 2 days ago [-]
What do you suppose he is shilling, and for whom?
I don't know who this Gruber guy is, but it's relieving to hear at least someone talk some sense about the ludicrous levels of investment being poured into something that most users go "heh that's neat" and would never consider paying for.
kridsdale1 2 days ago [-]
Gruber is OG. You should look him up.
scarface_74 2 days ago [-]
Outside of Apple circles, he's best known as the inventor of Markdown.
asadotzler 2 days ago [-]
He's in the "my guys can do no bad" gang. Always has been. Read his blog; it's 90% defending Big Tech and their abuses. He's especially bad about Apple, but all his angles are suspect.
semiquaver 2 days ago [-]
“Do no bad?” This post explicitly calls OpenAI a Ponzi scheme! Did we read the same thing?
JojoFatsani 2 days ago [-]
Same
amelius 2 days ago [-]
So tired of people asking for giant piles of money that are high enough to corner a market.
minimaxir 2 days ago [-]
And due to the lack of a moat, no amount of money can do that easily.
rchaud 1 days ago [-]
...and claiming that it's for the good of society, and not personal self-enrichment.
jmclnx 2 days ago [-]
AI going down the Cold Fusion path ? Not much of a surprise to me
joseneca 1 days ago [-]
Don't we all...
TaurenHunter 2 days ago [-]
'unimaginable sums of money’ considered harmful.
jsheard 2 days ago [-]
Correction: all they need to succeed is unimaginable sums of money and the ironclad right to assimilate every work ever created by humanity without compensation. Who could refuse such a modest request?
So why do they pay to license so much content? And you can’t really be suggesting that public domain should exclude AI training?
jsheard 2 days ago [-]
That's a good question, why do they choose to pay for certain content while also continuing to insist that all publicly available content should be fair game? It's an incoherent stance, and the only plausible explanation I can come up with is that those "licensing deals" are out-of-court legal settlements in a trenchcoat.
pas 2 days ago [-]
It's understandable that they don't have the luxury of consistency. They make deals to reduce risks. (It's usually not a great business decision to sue someone who is paying you a lot of money, even if in theory the court could order them to pay more. But how long? Also there are already lawsuits ongoing. Maybe they want to get those dropped?)
xboxnolifes 2 days ago [-]
To sell the illusion of not using copyrighted works, they make it publicly known that they license some works.
brookst 2 days ago [-]
You think the public in general is highly concerned with protecting copyrighted works? Like, they’re all saying “I would love to pay OpenAI but they are infringing on my beloved Reuters copyrights”?
xboxnolifes 1 day ago [-]
"The public"? No. But it makes for nice investigation defense to be able to say you've made an attempt to avoid training on copyrighted works.
This is a link to an editorial by John Gruber, so not necessarily the same discussion.
ren_engineer 2 days ago [-]
And yet DeepSeek just created an amazing model with a fraction of their compute resources.
snats 2 days ago [-]
It's more of a distilled model, not a fair 1:1 comparison
stri8ed 2 days ago [-]
Trained at least in part on ChatGPT data.
Handprint4469 2 days ago [-]
Doesn't matter. In fact, it makes it even funnier: all these investors spending billions of dollars on OpenAI just end up subsidizing the competing models.
rubslopes 2 days ago [-]
That was trained on data scraped from the web. I'd say it's fair.
eptcyka 2 days ago [-]
Worst person one knows has a legitimately good opinion.
patrickhogan1 2 days ago [-]
I agree that at its current rate of $20 per month and continuous cash burn, OpenAI’s model doesn’t seem sustainable. But the same could have been said about Google in its early days. Back then, Google spent heavily to index websites without a clear path to profitability—until it figured out how to monetize. Similarly, OpenAI will likely release more products to generate revenue.
OpenAI has the potential to create the next groundbreaking innovation, like the iPhone. Who needs apps when the AI itself can handle everything?
VHRanger 2 days ago [-]
> Google spent heavily to index websites without a clear path to profitability
int_19h 1 day ago [-]
I would say that this paper rather supports the point: the people who made Google knew from the get go that ads were one way to make money off it (even if they didn't like the notion).
OpenAI, in contrast, still doesn't have any notion as to how to make it all profitable.
patrickhogan1 2 days ago [-]
If you’re going to downvote, please consider leaving a comment—I’d really value the feedback.
I’d like to see more thoughtful discussion here. The debate often feels stuck between oversimplified takes like “It’s just a commodity” and “No, it’s a game changer!” Let’s assume for a moment that OpenAI operates as a commodity business but still figures out how to become profitable, even with small margins. After all, US Steel was a commodity and became hugely successful.
Given these assumptions, why do you think OpenAI might not succeed? Let’s move the conversation forward.
lesuorac 2 days ago [-]
> Back then, Google spent heavily to index websites without a clear path to profitability
[citation needed]
Google's goal was always advertising just like every other search engine.
patrickhogan1 2 days ago [-]
Please read the academic paper by Larry Page and Sergey Brin, available at this link:
You might find their perspective on advertising surprising.
Returning to my earlier point, please expand your argument about why OpenAI may fail. Let’s aim to elevate the discussion.
lesuorac 1 day ago [-]
What perspective? The only comment about ads is in the first paragraph, which is just "To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines." [1]
Nothing in the paper states or suggests they are anti-ads. Hell, ads can even work well with their description of PageRank: display ads alongside the search results, and if a user clicks one, rate the ad higher; if they don't, down-weight it, akin to what you're doing with non-ad links.
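A minimal sketch of that click-feedback idea (the function name, update rule, and constants below are all made up for illustration; this is not anything from the actual paper):

```python
# Hypothetical click-feedback weighting for ads, loosely analogous to how
# PageRank rewards links: a click up-weights the ad, an ignored impression
# gently decays it. The update rule and constants are invented.

def update_weight(weight: float, clicked: bool,
                  up: float = 1.1, down: float = 0.98) -> float:
    """Multiplicatively reward a click, gently decay a skipped impression."""
    return weight * (up if clicked else down)

# An ad clicked 3 times out of 10 impressions roughly holds its weight;
# one that is never clicked steadily sinks.
w = 1.0
for clicked in [True, False, False, True, False, False, True, False, False, False]:
    w = update_weight(w, clicked)
print(round(w, 3))  # 1.155: three clicks roughly offset seven decays
```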
> Returning to my earlier point, please expand your argument about why OpenAI may fail. Let’s aim to elevate the discussion.
Because the counterpoint of Google isn't applicable. You can't be like "X is ok because it's similar to Y and Y was ok." X (OpenAI) isn't similar to Y (Google); Google always planned to do ads, and it's just their stance of keeping them distinct from results that changed over time. A better argument would be to pick a Y of Reddit or something that took years to actually generate revenue.
OpenAI should be fine for quite some time, though, because what else is there for people to burn billions of investment dollars on?
Google didn't have ads until 2000. They strongly resisted banner ads (which were the only format at the time) until they invented contextual text ads.
lesuorac 2 days ago [-]
So, it took Google a whole year to display ads. That's really not the description of a company that is " without a clear path to profitability".
DannyBee 2 days ago [-]
It would be more accurate to say it had no desired path to profitability.
I have friends who interviewed in 1999, and all of them came away with the same view - they did not know how the company would make money. They all poked at this topic, and the founders were explicit they really did not want to sell ads.
Two of them refused to join because they thought it would fail as a result[1].
So while they did end up doing that, it seems very clear they did not want to go that path if they could avoid it.
If you have actual evidence that suggests this is not the case, rather than the random unsupported conjecture that it was "always their goal", i'd love to hear it!
I've not met a single person who says anything but what i just said - it was not always their goal, they had no desire to do it.
[1] Amusingly both ended up founding and selling companies for over 100m, so they did okay in the end despite theoretically missing a huge opportunity ;)
patrickhogan1 2 days ago [-]
It's fascinating to think about how long it took Google to fully embrace advertising. I completely agree with the previous poster's point: Google strongly resisted ads, which were everywhere back then.
Then they went full corporate. Their real turning point came in 2007 with the acquisition of DoubleClick, about a decade after Larry Page and Sergey Brin famously claimed that advertising and search engines don't mix.
Now compare that to OpenAI. It launched in 2015, nine years ago, but ChatGPT didn't arrive until 2022, about two years ago. Google took over a decade to transform its core business with ads and figure out how to become a profit machine.
Let's practice some intellectual humility and acknowledge that OpenAI still has time to work this out. History shows that even if it takes some time, that's perfectly fine.
It’s slow and painful, but the expense is driving some customers away.
It is widely understood that you really can't compete with "as good as". People won't leave Google, Facebook, etc. if you can only provide a service as good as, because the effort required to move would not be worth it.
> if you show me a product that works just as well as ChatGPT but costs less--or which costs the same but works a bit better--I would use it
This is why I believe LLMs will become a commodity product. The friction to leave one service is nowhere near as great as leaving a social network. LLMs in their current state are very much interchangeable. OpenAI will need a technological breakthrough in reliability and/or cost to get people to realize that leaving OpenAI makes no sense.
Sure you can, that's why there are hundreds if not thousands of brands of gas station. The companies you list are unusual exceptions, not the way things usually work.
For starters they have delivery down. In major cities you can get stuff delivered in hours. That is crazy and hard to replicate.
They have a huge inventory/marketplace. Basically any product is available. That is very difficult to replicate.
Amazon's vertical integration is their moat.
I said: >> There is no technical moat, but that doesn't mean there isn't a moat.
Meaning that just because the moat is not technical doesn't mean it doesn't exist.
Clearly Amazon, Google, Facebook etc have moats, but they are not "better software". They found other things to act as the moat (distribution, branding, network effects).
OpenAI will need to find a different moat than just software. And I agree with all the people in this part of the thread driving that point home.
OpenAI right now is some novel combination of a worker bee and a queryable encyclopedia. If they are trying to make a marketplace argument for this, it would be based on their data sources, which may have a first-mover advantage similar to a marketplace's as those sources get closed off and become more expensive (see e.g. the Reddit and Twitter API changes), except that much of that data ages out, in a way that sellers and buyers in a marketplace do not.
The other big difference with a marketplace is constrained attention on the sell/fulfillment side. Data brokers do not have this constraint — data is easier to self-distribute and infinitely replicable.
You've basically described Temu
Not to mention they’ve created a pretty formidable enterprise sales function. That’s the real money maker long term and outside of Google, it’s hard to imagine any of the current players outcompeting OpenAI.
The recent release of DeepSeek V3 is a good example: an o1-level model trained for under 6 million USD, and it pretty much beat OpenAI by a large margin.
edit: Fair enough. I'm fishing for opinions too.
OpenAI is going to be beaten on price, wait and see.
EDIT: Oh it did, wow, and it's better than Claude! Fantastic, this is great news, thank you!
Defaults are powerful over time.
(Sure you might say I'll subscribe to both, $20, $40, it's no big deal - but the masses won't, people already agonise over and share (I do too!) video streaming services which are typically cheaper.)
More interesting to your thread is how Craigslist supplanted print classifieds, and was then challenged if not supplanted by Facebook Marketplace. Both incumbents had significantly better marketplace dynamics prior to being overtaken.
Does the average Temu user care about the company's ethical problems? Does the average Amazon user?
Apparently old enough to forget the details. I highly recommend refreshing your memory on the topic so you don’t sound so foolish.
1. Amazon had a very minimal amount of VC funding (less than $100M, pretty sure less than $10M)
2. They IPO’d in 1997 and that still brought in less than $100M to them.
3. They intentionally kept the company at near profitability, instead deciding to invest it into growth for the first 4-5yrs as a public company. It’s not that they were literally burning money like Uber.
4. As further proof they could’ve been profitable sooner if they wanted, they intentionally showed a profit in ~2001 following the dotcom crash.
Edit: seems the only VC investment pre-IPO was KP’s $8M. Combine that with the seed round they raised from individual investors and that comes in under $10M like I remembered.
Right now, OpenAI's brand is actually probably its strongest "moat" and that is probably only there because Google fumbled Bard so badly.
Facebook has an enormous network effect. Google is the lynchpin of brokering digital ads to the point of being a monopoly. Someone else mentioned Amazons massive distribution network.
No, the Amazon moat is scale and efficiency, which lead to network effects. The Chinese competitors are reaching similar scales, so the moat isn't insurmountable - just not for the average mom and dad business.
Not to take away from the rest of your points, but I thought Amazon only raised $8m in 1995 before their IPO in 1997. Very little venture capital by today’s standard.
Amazon is famous for making losses after being publicly listed. Also, it was remiss of grandparent to not note that Amazon's losses were intentional for the sake of growth. OpenAI has no such excuse: their losses are just so that they stay in the game; if they attempted to turn profitable today, they'd be insolvent within months.
The moat is actually huge (billions of $$). What is happening is that there are people/corps/governments that are willing to burn this kind of money on compute and then give you the open weight model free of charge (or maybe with very permissive and lax licensing terms).
If it weren't for that, there would be roughly three players in the market (OpenAI, Anthropic, and recently Google).
Retail margins are razor thin, so the pennies of efficiency add up to a moat
Walmart and Target were far better than Amazon at logistics at the beginning, but they couldn’t execute (or didn’t focus on) software development as well as Amazon.
Meanwhile, Amazon figured out how to execute logistics just as good or better than the incumbents, giving them the edge to overtake them.
They were built for people to come to their store, and the website was a second class citizen for a long time.
But that was Amazon's bread and butter. They built that fast-shipping moat as a pretty established company, and the big retailers were caught off guard
Walmart is still the largest company in the world by revenue, with Amazon at its heels - though Amazon's profit beats out Walmart's.
A lot of this thread, I think, is just fantasy land that Amazon is somehow:
1. Destroying Walmart and Target in a way they can't compete.
2. Is more tech savvy than Walmart and Target.
C'mon, read the history of Walmart; it's the company that put technology into retail.
Walmart is not an Internet company. It is definitely a tech company. It's just that its tech is no longer super cool.
But for whatever reason, they took their foot off the pedal, and allowed Amazon to use the next step (networking technology and internet) to gain an edge over the now incumbent Walmart.
> 2. Is more tech savvy than Walmart and Target.
The market (via market cap) clearly thinks Amazon has lots more potential than its competitors, and I assume it is because investors think Amazon will be more successful using technology to advance.
1) Almost the best price, almost all the time
2) Reliably fast delivery
3) Reliably easy returns
4) Prime memberships
1) B2B: Reliable, enterprise level AI-AAS
2) B2C: New social network where people and AIs can have fun together
OpenAI has a good brand name RN. Maybe pursuing further breakthroughs isn't in the cards, but they could still be huge with the start that they have.
Same for OpenAI. Anytime I talk to young people who are not programmers, they know about ChatGPT and not much else. Never heard of Llama nor what an LLM is.
I don't think that's true. I think it's actually the opposite: global physical logistics is way harder to scale than software. That's Amazon's moat.
Anyone can sell online. But not just anyone has those advantages, like same-day and next-day shipping.
Gruber writes:
> My take on OpenAI is that both of the following are true: OpenAI currently offers, by far, the best product experience of any AI chatbot assistant. There is no technical moat in this field, and so OpenAI is the epicenter of an investment bubble.
It's amusing to me that he seems to think that OpenAI (or xAI or DeepSeek or DeepMind) is in the business of building "chatbots".
The prize is the ability to manufacture intelligence.
How much risk investors are willing to undertake for this prize is evident from their investments; after all, these investors all lived through prior business cycles and bubbles, and have the institutional knowledge to know what they're getting into financially.
How much would you invest for a given probability that the company you invest in will be able to manufacture intelligence at scale in 10 years?
What's the expected return on investment for "intelligence"? This is extremely hard to quantify, and if you listen to the AI-doomer folks, potentially an extremely negative return.
Indeed. And that asymmetry is what makes a market: people who can more accurately quantify the value or risk of stuff are the ones who win.
If it were easy then we'd all invest in the nearest AI startup or short the entire market and 100x our net worth essentially overnight.
But only metaphorically the equivalent, as the maximum downside is much worse than that.
https://en.m.wikipedia.org/wiki/Svante_Arrhenius
Maybe I'm a glass-half-full sort of guy, but everyone dying because we failed to reverse man-made climate change doesn't seem strictly better than everyone dying due to rogue AI
Stupid squared: we die because we gave the AI the order to revert climate change xD.
But also: I think it extremely unlikely for climate change to do that, even if extreme enough to lead to socioeconomic collapse and a maximal nuclear war.
Also also, I think there are plenty of "not literally everyone" risks from AI that will prevent us from getting to the "really literally everyone" scenarios.
So I kinda agree with you anyway — the doomers thinking I'm unreasonably optimistic, e/acc types think I'm unreasonably pessimistic.
Fwiw, I don't believe that there are any AI doomers. I've hung out in their forums for several years and watched all their lectures and debates and bookmarked all their arguments with strangers on X and read all their articles and …
They talk of bombing datacentres, of how their children are in extreme danger within a decade, of how in 2 decades the entire earth and everything on it will have been consumed for material, or, best case, of how in 2000 years the entire observable universe will have been consumed for energy.
The doomers have also been funded to the tune of half a billion dollars and counting.
If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
But the true doomer would have to be the ultimate nihilist, and he would simply take himself off the map because there's no point in living.
You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
> The doomers have also been funded to the tune of half a billion dollars and counting.
I've never heard such a claim. LessWrong.com has funding more like a few million: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc
> If these Gen-X'ers and millennials really believed their kids right now were in extreme peril due to the stupidity of everyone else, they'd be using their massive warchest full of blank cheques to stockpile weapons and hire hitmen to de-map every significant person involved in building AI. After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
The political capital to ban it worldwide and enforce the ban globally with airstrikes — what Yudkowsky talked about was "bombing" in the sense of a B2, not Ted Kaczynski — is incompatible with direct action of that kind.
And that's even if such direct action worked. They're familiar with the luddites breaking looms, and look how well that worked at stopping the industrialisation of that field. Or the communist revolutions, promising a great future, actually taking over a few governments, but it didn't actually deliver the promised utopia. Even more recently, I've not heard even one person suggest that the American healthcare system might actually change as a result of that CEO getting shot recently.
But also, you have a bad sense of scale to think that "half a billion dollars" would be enough for direct attacks. Police forces get to arrest people for relatively little because "you and whose army" has an obvious answer. The 9/11 attacks may have killed a lot of people on the cheap, but most were physically in the same location, not distributed between several in different countries: USA (obviously), Switzerland (including OpenAI, Google), UK (Google, Apple, I think Stability AI), Canada (Stability AI, from their jobs page), China (including Alibaba and at least 43 others), and who knows where all the remote workers are.
Doing what you hypothesise about would require a huge, global, conspiracy — not only exceeding what Al Qaida was capable of, but significantly in excess of what's available to either the Russian or Ukrainian governments in their current war.
Also:
> After all, what would you do if you knew precisely who are in the groups of people coming to murder your child in a few years?
You presume they know. They don't, and they can't, because some of the people who will soon begin working on AI have not yet even finished their degrees.
If you take Altman's timeline of "thousands of days", plural, then some will not yet have even gotten as far as deciding which degree to study.
Anyway,
> You're definitely mixing things up, and the set of things may include fiction. 2000 years doesn't get you out of the thick disk region of our own galaxy at the speed of light.
Here's what Arthur Breitman wrote[^0] so you can take it up with him, not me:
"
1) [Energy] on planet is more valuable because more immediately accessible.
2) Humans can build AI that can use energy off-planet so, by extension, we are potential consumers of those resources.
3) The total power of all the stars of the observable universe is about 2 × 10^49 W. We consume about 2 × 10^13 W (excluding all biomass solar consumption!). If consumption increases by just 4% a year, there's room for only about 2000 years of growth.
"
About funding:
>> The doomers have also been funded to the tune of half a billion dollars and counting.
> I've never heard such a claim. LessWrong.com has funding more like a few million
" A young nonprofit [The Future of Life Institute] pushing for strict safety rules on artificial intelligence recently landed more than a half-billion dollars from a single cryptocurrency tycoon — a gift that starkly illuminates the rising financial power of AI-focused organizations. "
---
[^0]: https://x.com/ArthurB/status/1872314309251825849
[^1]: https://www.politico.com/news/2024/03/25/a-665m-crypto-war-c...
Will it bring untold wealth to its masters, or will it slip its leash and seek its own agenda?
Once you have an AI that can actually write code, what will it be able to do with its own source? How much better would OpenAI be with a superintelligence looking for efficiencies and improvements?
What will the superintelligence (and/or its masters) do to build that moat and secure its position?
There are not that many unicorns these days, so anyone who missed out on the unicorns of past decades is now in immense FOMO and willing to bet big. Besides, AGI is considered (my own opinion) a personal Skynet (the wet dream of every nation's military) that will do your bidding, hence everyone wants a piece of that pie. Also, when the BigCos (M$/Google/Meta) are willing to bet on it, the topic gets much more interesting and carries an invisible seal of approval from technically savvy corps; the previous scammy cryptocurrency gold rush had no BigCo participation (to the best of my knowledge), but GenAI is a full game with all of them in.
The fact that you can state this risk means that market participants already know this risk and account for it in their investment model. Usually employees are given stock options (or something similar to a vesting instrument) to align them with the company, that is, they lose[^1] significant wealth if they leave the company. In the case of OpenAI: "PPUs vest evenly over 4 years (25% per year). Unlike stock options, employees do not need to purchase PPUs […] PPUs also are restricted by a 2-year lock, meaning that if there’s a liquidation event, a new hire can’t sell their units within their first 2 years."[^0]
--
[0]: https://www.levels.fyi/blog/openai-compensation.html
[1]: Ex-OpenAI employee reported losing 85% of his family's net worth. https://news.ycombinator.com/item?id=40406307
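As a toy illustration of how those two clocks interact (the helper functions below are hypothetical; only the 25%-per-year vesting and the 2-year lock come from the levels.fyi description), a first-year hire can be 25% vested yet still unable to sell:

```python
# Toy model of the PPU terms described above: even vesting over 4 years,
# plus a 2-year lock before units can be sold in a liquidation event.
# Only the figures come from the source; the functions are illustrative.

def vested_fraction(years_since_grant: float, vest_years: float = 4.0) -> float:
    """25% per year, capped at 100%."""
    return min(max(years_since_grant, 0.0) / vest_years, 1.0)

def sellable(years_since_grant: float, lock_years: float = 2.0) -> bool:
    """No sales within the lock period, even for vested units."""
    return years_since_grant >= lock_years

print(vested_fraction(1.0))  # 0.25 after one year
print(sellable(1.0))         # False: vested, but still inside the lock
```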
When was this true? Amazon was founded Jul 1994, and a publicly listed company by May 1997. I highly doubt Amazon absorbed billions of dollars of VC money in less than 3 years of the mid 1990s.
https://dazeinfo.com/2019/11/06/amazon-net-income-by-year-gr...
As far as I can tell, they were basically break-even until Amazon Web Services started raking it in.
Edit: Jay Leno, 1999: https://www.unilad.com/film-and-tv/news/jeff-bezos-audience-...
I was only a teenager, but I assume there had been lots of businesses throughout the course of history that took more than 5 years to be profitable.
The evidence is that investors were buying shares in it, valuing it in the billions. Obviously, this was 1999, approaching peak bubble, but investing in a business for multiple years and waiting to earn a profit was not an alien idea.
I especially doubt it was alien to a mega successful celebrity and therefore I would bet Jay Leno is 100% lying about “not understanding” in this quote, and it is purely a setup for Bezos to respond so he can promote his business.
> “Here’s the thing I don’t understand, the company is worth billions and every time I pick up the paper each year it loses more money than it lost the year before,” says the seasoned talk show presenter, with the audience erupting into laughter off screen.
2) If you had some secret algorithm that substantially outperformed everyone, you could win if you prevented leakage. This runs into the issue that two people can keep a secret, but three cannot. Eventually it'll leak.
3) Keep costs exceptionally low, sell at cost (or for free), and flood the market with that, which you use to enhance other revenue streams and make it unprofitable for other companies to compete and unappealing as a target for investors. To do this, you have to be a large company with existing revenue streams, highly efficient infrastructure, piles of money to burn while competitors burn through their smaller piles of money, and the ability to get something of value from giving something out for free.
Not if regulation prohibits LLMs from China, which isn't that far fetched to be honest.
I think LLMs will turn into a commodity product, and if you want to dominate with a commodity product, you need to provide twice the value at half the cost. OpenAI will need a breakthrough in reliability and/or inference cost to really create a moat.
If you mean trying to stop GPUs getting to China, US already has tried that with specific GPU models, but China still gets them.
Seems hard/impossible to do. Even if US and CCP were trying to stop Chinese citizens and companies doing LLM stuff
China's biggest challenge ATM is that they do not yet economically produce the GPUs and RAM needed to train and use big models. They are still behind in semiconductors, maybe 10 or 20 years (they can make fast chips, they can make cheap chips, they can't yet make fast cheap chips).
The real possibility exists that it would be better to be an independent 'second place' technology center (or third place, etc) than a pure consumer of someone else's integrated tech stack.
China decided that a long time ago for the consumer web. Europe is considering similar things more than ever before. The US is considering it with TikTok.
It's not hard to see that expanding. It's hard to claim that forcing local development of tech was a failure for China.
Short of a breakthrough that means the everyday person no longer has to work, why would I rather have a "better" but wholly-foreign-country-owned, not-contributing-anything-to-the-economy-I-participate-in-daily LLM or image generator or what-have-you vs a locally-sourced "good enough" one?
This is the exact same story told to the public when Google was kicked out of China. You are just 15 years late to the party.
I don't believe they're as far behind as many analyses deem. In fact, making it illegal to export Western chips to China only serves to make necessity, the mother of all invention, press harder and make it work.
You don't need the most efficient chips to train LLMs. Those much slower chips (e.g. those made by Huawei) will probably take longer for training, and they waste more electricity and space. But so what?
"China is set to account for the largest share of clean energy investment in 2024 with an estimated $675 billion, while Europe is set to account for $370 billion and the United States $315 billion."
https://www.reuters.com/sustainability/climate-energy/iea-ex...
Look at how competitive Chinese EVs are, and no amount of tariffs is going to stop them from dominating the market - even if the Americans prevent their own market from being dominated, their allies will not be able to protect theirs.
LLMs will become the ultimate propaganda tool (if they aren't already), and I don't see why governments wouldn't want to have full control over them.
At the beginning of ride sharing, people believed there was absolutely no geographical moat and all riders were just one cheaper ride from switching so better capitalized incumbents could just win a new area by showering the city with discounts. It took Uber billions of dollars to figure out the moats were actually nigh insurmountable as a challenger brand in many countries.
Honestly, with AI, I just instinctively reach for ChatGPT and haven't even bothered trying any of the others because the results I get from OAI are "good enough". If enough other people are like me, OAI gets orders of magnitude more query volume than the other general-purpose LLMs, and they can use that data to tweak their algorithms better than anyone else.
Also, with current LLMs the long-term user experience is pretty similar to the first-time user experience, but that seems set to change in the next few generations. I want my LLM to understand, over time, the style I prefer to be communicated in, learn what media I'm consuming so it knows which references I understand vs those I don't, etc. Getting a brand-new LLM familiar enough with me to feel like a long-established one might be an arduous enough task that people rarely switch.
The problem with ChatGPT is that they don't own any platform. Which means that of the 3 billion Android + ChromeOS users and ~1.5B iOS + Mac users, they have zero. Their only partner is Microsoft, with 1.5B Windows PCs. Considering a lot of people only do work on a Windows PC, I would argue that personalisation comes from the smartphone more so than the PC. Which means Apple and Google hold the key.
Also companies will be (and are) bundling these subscriptions for you, like Raycast AI, where you pay one monthly sum and get access to «all major models».
That is one of the reasons why ChatGPT has a desktop app: so that users can interact with it directly and give it access to their files/apps as well.
First, get government regulation on your side. OpenAI has already looked for this, including Sam Altman testifying to Congress about the dangers of AI, but didn't get the regulations that they wanted.
Second, put the cost of competing out of reach. Build a large enough and good enough model that nobody else can afford to build a competitor. Unfortunately a few big competitors keep on spending similar sums. And much cheaper sums are good enough for many purposes.
Third, get a new idea that isn't public. For instance one on how to better handle complex goal directed behavior. OpenAI has been trying, but have failed to come up with the right bright idea.
And would AI that is tied to some interface that provides lock-in even qualify to be called general? I have trouble putting my finger on it, but AGI and lock-in cause a strong dissonance in my brain. Would AGI perhaps strictly imply commodity? (assuming that more than one supplier exists)
The exponential gains would come from increasing penetration into existing labor forces and industrial applications. The first arriver would have an advantage in being the first to be profitably and practically applicable to whatever domain it's used in.
I was very confused at this point because I haven't really seen X as a competitor to Google's ad business, at least not in investment and value prop... Then I saw you were using X as a variable...
Only if they are much cheaper than the equivalent work done by humans, but likely the first AGI will be way more expensive than humans.
Alas I am still without my mythical city of gold.
That's half the point of OpenAI's game of pretending each new thing they make is too dangerous to release. It's half directed at investors to build hype, half at government officials to build fear.
They don’t need to make a moat for AI, they need to make a moat for the OpenAI business, which they have a lot of flexibility to refactor and shape.
Patents. OpenAI already has a head start in the game of filing patents with obvious (to everybody except USPTO examiners), hard-to-avoid claims. E.g.: https://patents.google.com/patent/US12008341B2
By knowing a lot about me, like the details of my relationships, my interests, my work, the LLM would then be able to function better than the other LLMs. OpenAI has already made steps in that direction by learning facts about you.
By offering services only possible by integrating with other industries, like restaurants, banks, etc. This takes years to do, and other companies will take years to catch up, especially if you set up exclusivity clauses. There are lots of ways to slow down your competitors when you are the first to do something.
Alternatively, a model that takes a year and the output of a nuclear power plant to train (and then you can tell them about your tricks, since they aren't very reproducible).
Also, I suspect that the next breakthrough will be kept under wraps and no papers will be published explaining it.
The competitions are mostly too narrow (programming/workflow/translation, etc.) and not interesting.
Or maybe nvidia has the moat. Or silicon fabs have it.
Meta and X have proven surprisingly resilient in the face of pretty overwhelming negative sentiment. Google maintains monopoly status in Web Search and in Browsers despite not being remarkably better than the competition. Microsoft remains overwhelmingly dominant in the OS market despite having a deeply flawed product. Amazon sells well despite a proliferation of fake reviews and products. Netflix thrives even while cutting back sharply on product quality. Valve has a near-stranglehold on PC games distribution despite its tech stack being trivially replicable. The list goes on.
(And a lot of stores, like Ubisoft's or EA's, were very feature-light, tbh.)
They have had years and still not even close, from both the consumer side and the developer side.
People citing their current high prices would be right. But human brains are smarter than chatgpt, and vastly more energy efficient. So we know it's possible.
Does this oversimplify?
Amazon also has a significant advantage in its logistics that underpin their entire business across the globe and that nobody else can match.
You're also wrong about how Google maintains its monopoly, or Microsoft.
All I see is bias and an unwillingness to understand, well, any of the relevant topics.
A vertically integrated system that people depend on with non-portable integrations is a moat.
Regulatory Capture is a moat.
I suppose a competitor would have to be really good at those times when most users need something better.
So it is the brand and familiarity. It would need to get really bad even at the most basic things to be replaced.
Maybe they were just too early; later on it turned out that the browser is indeed a very valuable and financially sound investment. For Google at least.
So having a dominant market share can indeed be a moat even if the underlying tech is not exactly unobtainable by others.
Once this bubble bursts, local inference will become even more affordable than it already is. There is no way that there will be a moat around running models as a service.
---
Similarly, there probably won't be a "data moat". The whole point of large foundation models is that they are great priors. You need relatively few examples to fine tune an LLM or diffusion model to get it to do what you want. So long as someone releases up to date foundation models there is no moat here either.
and they are probably going to go with regulatory moat over technical moat.
See the scramble to make themselves arbiters of "good AI" with regulators around the world. It's their only real hope but I think the cat's already out of the bag.
But greed is good!
OpenAI's C-suite: Well, they earned the C, but it was a letter grade.
What a profoundly unimaginative strategy. No matter the industry, a large-scale diversion of resources towards a moonshot goal will likely get you to that goal. They haven't made an argument as to why we should do that just for them, especially with all of the other alternatives.
And no, advertising your previously secret testing models (e.g. o3) as if they were market competitors is not how to prove they should have our money.
Then why do they look about average next to Anthropic, Mistral, DeepSeek, and many others of that cohort, despite having 100x the resources?
What “we”? They’ve already raised billions, and I suspect they’re about to succeed at raising tens of billions more, despite the skepticism of random HN users.
The Unreasonable Effectiveness of Huge Sums of Money in the Natural Sciences
Unimaginable Sums of Money Considered Harmful
[0] https://archive.seattletimes.com/archive/19970102/2516784/sp...
The long term future of LLMs sure looks like the LLMs themselves will be commodities and the real value will lie in the use of those LLMs to deliver value
AI progress is driven by strong, valuable data. Detailed, important conversations with a chatbot are much more valuable than quick search queries. As LLMs extend past web UIs, there is even more interaction data to capture and learn from.
The company that captures the most human-AI interaction data will have a TREMENDOUS moat.
If I have the LLM translate a text from French to English... what is there to learn from that? Maybe the translation is great maybe it's awful, but there's no "correct" translation to evaluate the LLM against to improve it.
If I ask the chatbot for working code and it can't provide it, again, there's no "correct" code to train it against found in the conversation.
If I ask an LLM to interpret a bible passage, whether it does a good job or a terrible job there's no "correct" answer that the provider has to use as the gold standard, just the noise of people chatting with arbitrary answers.
When the big companies say they're running out of data, I think they mean it literally. They have hoovered up everything external and internal and are now facing the overwhelming mediocrity that synthetic data provides.
>> When the big companies say they're running out of data, I think they mean it literally. They have hoovered up everything external and internal and are now facing the overwhelming mediocrity that synthetic data provides.
> Digital data are only a tiny part of the influx of information that people interact with.
So what people bother writing down is just a pale reflection of what has been; the reader has to rely on his experience and imagination to recreate it. If we take drawing, for example: you may read all the books on the subject, but you still have to practice to properly internalize that knowledge. Same with music, or even pure science (the axioms you start with are grounded in reality).
I believe LLMs are great at extracting patterns from written text and other forms of notation. They may even be good at translating between them. But as anyone who is a polyglot may attest, literal translation is often inadequate because a lot of terms are not equivalent. Without experiencing the full semantic meaning of both, you'll always be at risk of being confusing.
With traditional software, we were the ones providing meaning so that different tools could interact with each other (when I click this icon, a page will be printed out). LLMs are mostly translation machines, with just a thin veneer of syntax rules and term relationships but no actual meaning, because of all the information they lack.
As another example, LLMs are kind of magical when it comes to what I'd call "bad memory spelunking". Is there a video game, book, or movie from your childhood, which you only have some vague fragments of, which you'd like to rediscover? Format those fragments into a request for a list of candidates, and if your description contains just enough detail, you will activate that semantic understanding to uncover what you were looking for.
I'd encourage you to check out 3blue1brown's LLM series for more on this!
I think it's true they lack a lot of information and understanding, and that they probably won't get better without more data, which we are running out of. That's sort of the point I was originally trying to make.
Is that actually true?
I disagree; that phenomenon is tied to the social-network effects we saw with WhatsApp and Facebook and to the aggregator business model of Amazon and Google.
Mathematically we can describe the process by which these monopolies form as nodes joining graphs:
Each node selects graphs to connect to in such a way that the probability is proportional to the number of nodes in the graph.
Sure, Amazon and Google feature two types of nodes, but the observation still fits: selling on Amazon makes sense if there are many customers, and buying on Amazon makes sense if there are many offers.
OpenAI's business does not have this feature: it does not intrinsically get more attractive by having more users.
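The node-joining process described above is preferential attachment, and it can be sketched in a few lines. This is a minimal illustration with made-up parameters (five identical platforms, 100k users), not a model of any real market:

```python
import random

def simulate(num_platforms=5, num_users=100_000, seed=42):
    """Each new user picks a platform with probability proportional
    to its current size (preferential attachment). Seeding every
    platform with one user keeps all probabilities nonzero."""
    rng = random.Random(seed)
    sizes = [1] * num_platforms
    for _ in range(num_users):
        # random.choices weights the draw by current platform size
        i = rng.choices(range(num_platforms), weights=sizes)[0]
        sizes[i] += 1
    return sorted(sizes, reverse=True)

sizes = simulate()
print(sizes, f"-> largest platform holds {sizes[0] / sum(sizes):.0%} of users")
```

Run this with different seeds and the largest platform typically ends up with an outsized share of all users even though every platform started identical: size attracts further size, which is the winner-take-most dynamic the comment describes, and which OpenAI's product lacks.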
Models are commodities. Even if OpenAI achieves another major breakthrough, nothing can stop some of its employees from moving to other companies, or founding their own, and replicating or improving on OpenAI's results.
In fairness I realize that I don't use any of OpenAI's models. There are better alternatives for coding, translating or alternatives that are simply faster or cheaper (Gemini) or more open.
The best moats are scale (Walmart), brand (“Google” means search), network effects (Facebook, TikTok). None of those are perfect but all are better than just having better tech.
OTOH, WD40 has a technical moat since 1953, without a patent. There are a number of companies who rely on technical moat mixed with excellent technical image: Makita or Milwaukee Tool come to mind. You can have also a company based on technology moat that theoretically shouldn't exist (patents expired, commodity products), but it does (Loctite, 3M).
Eight racks of GB200s? You're talking about 1 megawatt of power. Well over that if you include PSU losses, cooling, and all the networking, storage, memory...
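A back-of-envelope check supports the megawatt figure. The ~120 kW per GB200 NVL72 rack is the commonly quoted number (the exact figure varies by configuration), and the overhead multiplier is an assumption for cooling and PSU losses:

```python
RACK_KW = 120    # assumed IT load per GB200 NVL72 rack (commonly quoted figure)
RACKS = 8
OVERHEAD = 1.3   # assumed multiplier for cooling, PSU losses, networking, etc.

it_load_kw = RACK_KW * RACKS        # IT load alone
total_kw = it_load_kw * OVERHEAD    # facility-level draw

print(f"IT load: {it_load_kw} kW (~{it_load_kw / 1000:.2f} MW)")
print(f"With overhead: ~{total_kw:.0f} kW")
```

So eight racks land just under 1 MW of IT load by themselves, and comfortably over it once facility overhead is counted.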
Yet Chrome for Google did help create a moat.
A moat that is so strong the DoJ is investigating whether Chrome should be a forced divestiture from Google/Alphabet.
Note: I do generally agree with the article, but this also shows why you shouldn’t use analogies to reason.
The result is Chrome. Google invests in it because they don't want any intermediation between them and eyeballs; it's a strategic play, not necessarily a direct money maker. So that's why they do it. But there is no moat: they have the number 1 browser because they spend enormous sums of money hiring top engineers to build the number 1 browser.
And this has been true of the browser for a long time, Internet Explorer won because they forced people to use it and used a tonne of anti-competitive practices to screw the competition. Firefox still managed to disrupt that simply by building a better product. This is the clear evidence there is no moat - the castle has been stormed repeatedly.
OpenAI can't do anything comparable to Google at this point because they have no other product. If anything they're more like Netscape.
The so-called open source models are getting better and better and even if OpenAI suddenly discovered some new tech that would allow for another breakthrough, it will be immediately picked up by others.
The data backs it up: Anthropic makes most of its revenue from the API, while ChatGPT makes most of its revenue from the Plus plan.
This is getting so repetitive now that it is stated as a truism.
Isn't this the same bet Yahoo was making in 2000, that it would win because its product branding was better? And now Yahoo's and Microsoft's search engines are worse than Google's from two decades ago.
The word "open" is still under threat in this scenario too.
Claude is noticeably better
LLMs and AI is not where Meta adds value to the market. They add value via advertising. Therefore, AI is not Meta's product, and thus Meta should aim to make AI as cheap as possible.
They did the same thing for infra. Meta doesn't sell cloud computing, so they want to make infra as cheap as possible.
LLMs in their current form are very useful tools for many people. Most programmers I know use them daily. I even have friends who aren't tech-savvy who pay the subscription to ChatGPT and use it all the time.
SamA wants to take everything the non-profit built and use it directly in HIS for profit enterprise. Fuck him.
The question is - why would that one company want this hot take out there?
Granted I'm not sure there's much of anything to substantiate this, but I imagine Apple would know better from competitive intel.
(I feel a bit bad that my mind also immediately connects "Gruber" to "Apple" in less-than-savory ways, but alas, that's the reputation he built for himself.)
Yes this! I won't say how I've come to know, but if you have ever wondered "is this guy on the take from Apple as part of their fuckery?" - the answer is Yes.
Well trodden and failing path. You misread the article if you considered it any endorsement of OpenAI’s plan.
> OpenAI’s board now stating “We once again need to raise more capital than we’d imagined” less than three months after raising another $6.6 billion at a valuation of $157 billion sounds alarmingly like a Ponzi scheme
Ponzi scheme. If he's shilling something I guess it's "shorting".
I don't know who this Gruber guy is, but it's relieving to hear at least someone talk some sense about the ludicrous levels of investment being poured into something that most users go "heh that's neat" and would never consider paying for.
https://www.theguardian.com/technology/2024/jan/08/ai-tools-...
OpenAI has the potential to create the next groundbreaking innovation, like the iPhone. Who needs apps when the AI itself can handle everything?
Uh, they were advertising in 1999 already?
You might find their perspective on advertising surprising.
Returning to my earlier point, please expand your argument about why OpenAI may fail. Let’s aim to elevate the discussion.
OpenAI, in contrast, still doesn't have any notion as to how to make it all profitable.
I’d like to see more thoughtful discussion here. The debate often feels stuck between oversimplified takes like “It’s just a commodity” and “No, it’s a game changer!” Let’s assume for a moment that OpenAI operates as a commodity business but still figures out how to become profitable, even with small margins. After all, US Steel was a commodity and became hugely successful.
Given these assumptions, why do you think OpenAI might not succeed? Let’s move the conversation forward.
[citation needed]
Google's goal was always advertising just like every other search engine.
https://blogs.cornell.edu/info2040/2019/10/28/the-academic-p...
You might find their perspective on advertising surprising.
Returning to my earlier point, please expand your argument about why OpenAI may fail. Let’s aim to elevate the discussion.
Nothing in the paper states or suggests they are anti-ads. Hell, ads can even work well with their description of PageRank: display ads alongside the search results, and if a user clicks one, rate the ad higher; if they don't, down-weight it, akin to what you're doing with non-ad links.
> Returning to my earlier point, please expand your argument about why OpenAI may fail. Let’s aim to elevate the discussion.
Because the counterpoint of Google isn't applicable. You can't be like "X is ok because it's similar to Y and Y was ok." X (OpenAI) isn't similar to Y (Google); Google always planned to do ads, and it's just their stance on keeping them distinct from results that changed over time. A better argument would be to pick a Y of Reddit or something else that took years to actually generate revenue.
OpenAI should be fine for quite some time, though, because where else is there for people to burn billions of dollars investing?
[1]: https://snap.stanford.edu/class/cs224w-readings/Brin98Anatom...
I have friends who interviewed in 1999, and all of them came away with the same view - they did not know how the company would make money. They all poked at this topic, and the founders were explicit they really did not want to sell ads. Two of them refused to join because they thought it would fail as a result[1]. So while they did end up doing that, it seems very clear they did not want to go that path if they could avoid it.
If you have actual evidence that suggests this is not the case, rather than the random unsupported conjecture that it was "always their goal", i'd love to hear it!
I've not met a single person who says anything but what i just said - it was not always their goal, they had no desire to do it.
[1] Amusingly both ended up founding and selling companies for over 100m, so they did okay in the end despite theoretically missing a huge opportunity ;)
Then they went full corporate, and their real turning point came in 2007 with their acquisition of DoubleClick, about 12 years after Larry Page and Sergey Brin famously claimed that advertising and search engines don't mix.
Now compare that to OpenAI. It launched in 2015, nine years ago, but ChatGPT didn’t arrive until 2022—less than two years ago. Google took over a decade to transform its core business with ads and figure out how to become a profit machine.
Let's practice some intellectual humility and acknowledge that OpenAI still has time to work this out. History shows that even if it takes some time, that's perfectly fine.