1. Simon, thank you for continuing to share your findings in this space. I really appreciate the concise, no-nonsense summaries. Thanks to your work I feel I can keep up with what’s going on. Also thanks for the ‘llm’ tool, I use it all the time.

   One question I’d love to be able to answer relates to the low cost of some of these models (relevant as you note how cheap the new Amazon nova-lite model is: 1/100th of a cent to provide a text summary of an image). Friends who don’t know much about computing have read on social media about the huge energy usage of ‘AI’. When I mention that I use LLMs almost daily they look at me with disdain and assume that I’m burning the planet. My feeling is that something that costs a fraction of a cent can’t be using much electricity (see the back-of-the-envelope sketch after this thread). Do we know whether the current pricing reflects energy use, or are these products loss leaders?

   1. I heard from a good source who says they heard from a good source that Amazon are not running inference for the Nova models at a loss: https://bsky.app/profile/quinnypig.com/post/3lciltevbgk2l

      1. That is indeed a good source!

   2. [Comment removed by author]

2. For those who didn’t read the article, here is the final line: “Maybe we need a new FAANG acronym that covers OpenAI, Anthropic, Google, Meta and Amazon. I like GAMOA.”

   1. Maybe we shouldn’t? More often than not, naming things is what creates them, as with FAANG and BRICS.

   2. I’d prefer OMAGA.

   3. Why exclude Microsoft, who are essentially bankrolling OpenAI and are the company most visibly offering GenAI to consumers and businesses?

      1. Because they haven’t produced their own GPT-4-class model yet. I like the Phi series, but they’re not at the same level as GPT-4o/Claude 3.5 Sonnet/Gemini 1.5 Pro yet.

3. Amazon thanks the others for doing market research for them.
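
The pricing-versus-energy question in the first comment can be sanity-checked with back-of-the-envelope arithmetic. Here is a minimal Python sketch, assuming an illustrative electricity rate of $0.10/kWh and that the per-request price at least covers the provider’s electricity bill; both figures are assumptions for illustration, not numbers from the thread:

```python
# Rough upper bound on the electricity a priced request can represent.
# All inputs are illustrative assumptions, not measured or quoted figures.

PRICE_PER_REQUEST_USD = 0.0001    # 1/100th of a cent, the nova-lite price cited above
ELECTRICITY_USD_PER_KWH = 0.10    # assumed bulk electricity rate

# If the price at least covers the electricity consumed, then
# energy per request <= price / electricity rate.
max_kwh = PRICE_PER_REQUEST_USD / ELECTRICITY_USD_PER_KWH
max_wh = max_kwh * 1000

print(f"Upper bound: {max_wh:.2f} Wh per request")
# Upper bound: 1.00 Wh per request -- on the order of running a laptop
# for a couple of minutes. This ignores the amortized energy cost of
# training the model, and it assumes the provider is not pricing below
# its own electricity cost (the bsky link above suggests Amazon is not,
# for the Nova models).
```

Even at this upper bound a single request represents very little energy; the open question in the thread is whether prices are in fact above cost, which the linked post speaks to.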