Simon, thank you for continuing to share your findings in this space. I really appreciate the concise, no-nonsense summaries. Thanks to your work I feel I can keep up with what’s going on. Also thanks for the ‘llm’ tool, I use it all the time.
One question I’d love to be able to answer relates to the low cost of some of these models (relevant as you note how cheap the new Amazon Nova Lite model is: 1/100th of a cent to provide a text summary of an image). Friends who don’t know much about computing have read on social media about the huge energy usage of ‘AI’. When I mention that I use LLMs almost daily they look at me with disdain and assume that I’m burning the planet. My feeling is that something that costs a fraction of a cent can’t be using much electricity. Do we know if the current pricing is reflective of energy use, or are these products loss leaders?
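As a rough back-of-the-envelope check (the electricity price here is my own assumption, and it pretends the entire fee goes to power, which it obviously doesn’t):

```python
# If the whole 1/100th-of-a-cent fee paid only for electricity,
# how much energy could a single request possibly use?
# Both numbers are illustrative assumptions, not Amazon's actuals:
price_per_request_usd = 0.0001   # 1/100th of a cent
electricity_usd_per_kwh = 0.10   # a rough bulk electricity rate

max_kwh = price_per_request_usd / electricity_usd_per_kwh
print(f"Upper bound: {max_kwh * 1000:.2f} Wh per request")  # -> 1.00 Wh
```

Even under those generous assumptions the ceiling is about 1 Wh per request, roughly what a 10 W LED bulb uses in six minutes, and the real inference energy must be some fraction of that, since the price also has to cover hardware, staff, and margin (assuming they aren’t selling at a loss).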
I heard from a good source who says they heard from a good source that Amazon are not running inference for the Nova models at a loss: https://bsky.app/profile/quinnypig.com/post/3lciltevbgk2l
That is indeed a good source!
For those who didn’t read the article, here is the final line: “Maybe we need a new FAANG acronym that covers OpenAI, Anthropic, Google, Meta and Amazon. I like GAMOA.”
Maybe we shouldn’t? More often than not, naming things is what creates them, like FAANG and BRICS.
I’d prefer OMAGA.
Why exclude Microsoft, who is essentially bankrolling OpenAI and is the company most visibly offering GenAI to consumers and businesses?
Because they haven’t produced their own GPT-4-class model yet. I like the Phi series, but it’s not at the same level as GPT-4o, Claude 3.5 Sonnet, or Gemini 1.5 Pro.
Amazon thanks the others for doing market research for them