Show HN: Chorus, a Mac app that lets you chat with a bunch of AIs at once (melty.sh)
maroonblazer 2 days ago [-]
Just tried this with an interpersonal situation I'm going through. The default models seem to be Claude 3.5 Sonnet and GPT-4o. I got the results I've come to expect from those two, with the latter better at non-programming kinds of prompts.

The app presented the option of prompting additional models, including Gemini 2.0 Flash, one I'd never used before. It gave the best response of the bunch and was surprisingly good.

Curious to know how Chorus is paying for the compute, as I was expecting to have to use my own API keys.

benatkin 2 days ago [-]
Perhaps some throttling, plus the limited number of users that comes with being a desktop app.

I just checked to see if it was signed, without running it. It is. I don't care to take the risk of running it even if it's signed. If it were a web app I'd check it out.

I don't know if there's any sort of login. With a login, they could throttle based on that. Without a login, it looks like they could use this to check that it's running on an Apple computer: https://developer.apple.com/documentation/devicecheck/valida...

sunnybeetroot 2 days ago [-]
DeviceCheck is not available for macOS apps, see the following documentation: https://developer.apple.com/documentation/devicecheck/dcappa...
benatkin 1 day ago [-]
I see. I had checked attestKey, and it says "Mac Catalyst 14.0+ | macOS 11.0+" among others, but that just means the API is present: developer.apple.com/documentation/devicecheck/dcappattestservice/attestkey(_:clientdatahash:completionhandler:)
owenpalmer 2 days ago [-]
Do they have really strict rate limits? How much did you use it?
Charlieholtz 2 days ago [-]
Hi! One of the creators of Chorus here. Really cool to hear how everyone is using it. We made this as an experiment because it felt silly to constantly be switching between the ChatGPT, Claude, and LM Studio desktop apps. It's also nice to be able to run models with custom system prompts in one place (I have Claude with a summary of how CBT works that I find pretty helpful).

It's a Tauri 2.0 desktop app (not Electron!), so it uses the Mac's native browser view and a Rust backend. That also keeps the DMG relatively small (~25 MB, and we can get it much smaller once we get rid of some bloat).
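For the curious, the web view talks to the Rust side through Tauri's invoke bridge, roughly like this (simplified sketch; "send_prompt" is just an illustrative command name, not our actual API):

    import { invoke } from "@tauri-apps/api/core"; // Tauri 2.0 location of invoke

    // Calls a #[tauri::command] function defined in the Rust backend.
    // The command name here is hypothetical.
    async function sendPrompt(prompt: string): Promise<string> {
      return await invoke<string>("send_prompt", { prompt });
    }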

Right now Chorus proxies API calls through our server, so it's free to use. We didn't add bring-your-own-API-keys to this version because it was a bit quicker to ship without it. This was kind of an experimental winter break project, so we didn't think too hard about it. We'll likely have to fix that (add bring-your-own-keys? a paid version?) as more of you use it :)

Definitely planning on adding support for local models too. Happy to answer any other questions, and any feedback is super helpful (and motivating!) for us.

UPDATE: Just added the option to bring your own API keys! It should be rolling out over the next hour or so.
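If you're curious what the fan-out looks like once you supply your own keys, here's a rough sketch of the idea (simplified TypeScript, not our actual code; model names are just examples):

    // Illustration only: fan one prompt out to two providers concurrently.
    async function askAll(prompt: string, keys: { openai: string; anthropic: string }) {
      const openai = fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${keys.openai}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          messages: [{ role: "user", content: prompt }],
        }),
      }).then((r) => r.json());

      const anthropic = fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "x-api-key": keys.anthropic,
          "anthropic-version": "2023-06-01",
        },
        body: JSON.stringify({
          model: "claude-3-5-sonnet-latest",
          max_tokens: 1024,
          messages: [{ role: "user", content: prompt }],
        }),
      }).then((r) => r.json());

      // Fire both requests at once and wait for both answers.
      return Promise.all([openai, anthropic]);
    }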

d4rkp4ttern 2 days ago [-]
Curious to check it out, but a quick question: does it have autocomplete (GitHub Copilot-style) in the chat window? IMO one of the biggest missing features in most chat apps is autocomplete. Typing messages in these apps quickly becomes tedious, and autocompletion helps a lot with that. I’m regularly shocked that it’s almost year 3 of LLMs (depending on how you count) and none of the big vendors have thought of adding this feature.

Another mind-numbingly obvious feature: hitting Enter should just create a newline, and Cmd-Enter should submit. Or at least make this configurable.
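Something like this in the input's keydown handler (a rough sketch, obviously not tied to any particular app):

    // Sketch: Enter inserts a newline, Cmd-Enter (or Ctrl-Enter) submits.
    function submitMessage(text: string) {
      /* hypothetical: send the message to the model(s) */
    }

    const input = document.querySelector("textarea")!;
    input.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "Enter" && (e.metaKey || e.ctrlKey)) {
        e.preventDefault();
        submitMessage(input.value);
        input.value = "";
      }
      // Plain Enter falls through and inserts a newline in the textarea.
    });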

(EDITED for clarity)

Charlieholtz 2 days ago [-]
Enter does continue the chat! And shift-enter for new line.

My Mac now has built-in Copilot-style completions (maybe only since upgrading to Sequoia?). They're not amazing, but they're decent.

https://support.apple.com/guide/mac-help/typing-suggestions-...

d4rkp4ttern 2 days ago [-]
Sorry, I meant hitting Enter should NOT submit the chat. It should continue taking my input, and when I’m ready to submit I’d like to hit Cmd-Enter.
gazook89 2 days ago [-]
I agree, though that's just my personal preference. I would assume most people are on the “Enter to submit” train nowadays.

Most of my messaging happens on Discord or Element/matrix, and sometimes slack, where this is the norm. I don’t even think about Shift+Enter nowadays to do a carriage return.

hombre_fatal 2 days ago [-]
There are a lot of basic features missing from the flagship LLM services/apps.

Two or so years ago I built a localhost web app that lets me trivially fork convos, edit upstream messages (even bot messages), and generate an audio companion for each bot message so I can listen to it while on the move.

I figured these features would quickly appear in ChatGPT’s interface but nope. Why can’t you fork or star/pin convos?
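For what it's worth, forking falls out almost for free once each message stores a parent pointer. A toy sketch of the kind of data model that makes it trivial (my illustration, not the actual code):

    // Toy model of a forkable chat: every message points at its parent.
    interface Message {
      id: string;
      parentId: string | null; // null for the conversation root
      role: "user" | "assistant";
      content: string;
    }

    // A "conversation" is just the path from a leaf back to the root, so
    // editing an upstream message means attaching a new child to its parent.
    function threadFor(leafId: string, messages: Map<string, Message>): Message[] {
      const path: Message[] = [];
      for (
        let m = messages.get(leafId);
        m !== undefined;
        m = m.parentId ? messages.get(m.parentId) : undefined
      ) {
        path.unshift(m);
      }
      return path;
    }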

d4rkp4ttern 2 days ago [-]
The only editor I’ve seen that has both these features is Zed.
LorenDB 2 days ago [-]
If it's using Tauri, why is it Mac only?
Charlieholtz 2 days ago [-]
Only because I haven't tested it on Windows/Linux yet (started working on this last week!). But theoretically it should be easy to package for other OSes.
dcreater 2 days ago [-]
Airtrain.ai and msty.app have had this for a while.

What isn't there, and would be useful, is to not have them side by side but rather swipeable. When you're using it for code comparisons, even two side by side gets cramped.

kmlx 2 days ago [-]
IMO even more useful would be a single answer that represents a mix of all the other answers (with an option to see each individual answer, etc.).
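The cheap way to get that would be one extra model call that merges the individual answers. A rough sketch (the helper and the prompt wording are hypothetical):

    // Hypothetical helper: ask one model a single prompt, get text back.
    declare function askModel(model: string, prompt: string): Promise<string>;

    // Merge N answers into one by making one more model call as a "judge".
    async function synthesize(
      question: string,
      answers: { model: string; text: string }[]
    ): Promise<string> {
      const merged = answers.map((a) => `### ${a.model}\n${a.text}`).join("\n\n");
      return askModel(
        "gpt-4o", // arbitrary choice of judge model
        `Question: ${question}\n\nHere are several answers:\n\n${merged}\n\n` +
          `Write one combined answer, noting where the answers disagree.`
      );
    }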
sdesol 2 days ago [-]
I have that in the chat app that I am working on.

https://beta.gitsense.com/?chat=51219672-9a37-442d-80a3-14d8...

It provides a summary of all the responses and if you click on "Conversation" in the user message bubble, you can view all the LLM responses to the question of "How many r's in strawberry".

You can also fork the message and, say, create a single response based on all of the responses.

Edit: The chatting capability has been disabled as I don't want to incur an unwanted bill.

solomatov 2 days ago [-]
I would be much more likely to install this if it were published in the App Store.
Charlieholtz 2 days ago [-]
Thanks for the feedback! I haven't tried to do this yet, but it's built on Tauri 2.0 and it doesn't look too hard (https://tauri.app/distribute/app-store/). Will take a look at this.
desireco42 2 days ago [-]
There are good reasons not to publish on the App Store, e.g. if you want to actually make any money from the app.
solomatov 2 days ago [-]
My main concern is security and privacy. App Store apps are sandboxed, but manually installed apps usually are not.
solomatov 2 days ago [-]
If you are small, the App Store looks to me like the easiest solution for selling apps.
swyx 2 days ago [-]
Also if you have gone through the hell that is publishing and signing Mac apps.
ripped_britches 2 days ago [-]
Most popular Mac apps, like Spotify and VS Code, are not.
n2d4 2 days ago [-]
Because they're big enough that they can afford not to, and they want to do things that the sandbox/review process/monetisation rules wouldn't let them. I assume the sandbox is exactly why the parent wants the app to be there.
yuppiepuppie 2 days ago [-]
I would have thought the exact opposite of your statement: they are big enough that they could afford it. It seems like the ability to forgo the App Store on the Mac lets Apple get away with things like a high-friction review process and restrictive monetization rules. Without the big players pushing back, why would they change?
KetoManx64 2 days ago [-]
Doesn't Apple charge App Store apps 30% of all their transactions/subscriptions? What company in their right mind would opt into that if they don't have to?
solomatov 2 days ago [-]
A small to medium-sized company, for several reasons:

- Setting up payments with a third-party provider isn't that simple, and their fees are far from zero.

- Getting users. Popular queries in Google are full of existing results, and getting in there isn't easy or cheap. Also, search engines aren't the most popular way to get apps onto your devices; people usually search directly in app stores. Apple takes care of this, i.e. I'd guess that popular apps with good ratings get higher positions in search results.

- Trust. I install apps outside the App Store only if I trust the supplier of the software (or have to have it there). Apple solves this with sandboxing.

Yep, 30% is a lot, but for these kinds of businesses it might be well worth it (especially with the reduced 15% commission for smaller revenues).

mikae1 2 days ago [-]
I was hoping this would be an LM Studio alternative (for local LLMs) with a friendlier UI. I think there's a genuine need for that.

It could make available only the LLMs that your Mac is able to run.

Many Apple Silicon owners are sitting on very capable hardware without even knowing it.

Charlieholtz 1 day ago [-]
Just added support for Ollama and LM Studio servers! I'm getting 66 tokens/sec and a 0.17s time to first token for Llama 3.1; it's pretty mind-blowing.

https://x.com/charliebholtz/status/1873798821526069258
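If you want to poke at the same local server by hand, Ollama speaks plain HTTP on port 11434. A minimal sketch (non-streaming; the model tag is whatever you've pulled locally):

    // Minimal call to a local Ollama server (default port 11434).
    async function askLocal(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        body: JSON.stringify({ model: "llama3.1", prompt, stream: false }),
      });
      const data = await res.json();
      return data.response; // non-streaming responses put the text here
    }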

wkat4242 2 days ago [-]
I don't know LM Studio but I really like OpenWebUI. Maybe worth a try.

I use it mainly because my LLM runs on a server, not my usual desktop.

lgas 2 days ago [-]
On that note, I recently learned from Simon Willison's blog that if you have uv installed, you can try OpenWebUI via:

    uvx --python 3.11 open-webui serve
rubymamis 2 days ago [-]
This is exactly what I’m building at https://www.get-vox.com - it automatically detects your local models installed via Ollama.

It is fast, native and cross-platform (built with Qt using C++ and QML).

sunnybeetroot 2 days ago [-]
Try msty.app
HelloUsername 1 day ago [-]
rrr_oh_man 1 day ago [-]
There should not be more than one pizza restaurant
nomilk 2 days ago [-]
Love the idea. I frequently use ChatGPT (out of habit), and while it's generating, I copy/paste the same prompt into Claude and Grok. This seems like a good way to save time.
sleno 2 days ago [-]
Very well designed! How does this work? In the sense that I didn't have to copy/paste any keys, and yet this is offering paid models for free.
Charlieholtz 2 days ago [-]
Thanks! Right now Chorus proxies API calls through our server, so it's free. This was kind of an experimental winter break project that we were using internally, and it was quicker to ship this way.

Likely going to add bring your own API keys (or a paid version) soon.

Update: just added the option to bring your own keys! Should be available within an hour.

swyx 2 days ago [-]
if you are not paying... you are the product
KingMob 2 days ago [-]
As Cory Doctorow has documented, you're frequently still the product even when paying: https://pluralistic.net/2024/03/12/market-failure/#car-wars
Charlieholtz 2 days ago [-]
Haha, true, but in this case we (Melty) are just paying right now. I wanted to make it really easy to try and hadn't implemented bring-your-own-keys yet. I probably should ask o1 to fix that.
cmiller1 2 days ago [-]
Is the name a Star Trek TNG reference? https://memory-alpha.fandom.com/wiki/Riva%27s_chorus
kanodiaashu 2 days ago [-]
This reminds me of the search engine aggregators in the old days that used to somehow install themselves in Internet Explorer and then collected search results from multiple providers, sometimes comparing them. I wonder if this time these tools will persist.
rubymamis 2 days ago [-]
If you're looking for a fast, native alternative for Windows, Linux (and macOS), you can join my new app waitlist: https://www.get-vox.com
KetoManx64 2 days ago [-]
Is it going to be open source?
rubymamis 2 days ago [-]
I'm not sure. I thought about setting up a funding goal, after which I'll open source it.
prmoustache 2 days ago [-]
Or you can do that in your tmux terminal multiplexer using the synchronize-panes option.

A number of terminals can also do that natively (kitty comes to mind).

Lionga 2 days ago [-]
Dropbox is just curlftpfs with SVN, in other words useless.
prmoustache 2 days ago [-]
I see what you did there.

But the actual amount of effort to get to the level of Dropbox in a multi-device context is orders of magnitude higher than the triviality of autoloading a handful of CLI tools in different panes and synchronizing them in tmux.

prmoustache 2 days ago [-]
Example here: https://forge.chapril.org/prmoustache/examples/src/branch/ma...

Only 35 lines of code including empty lines and comments.

That approach is also dead simple to maintain, multiplatform and more flexible:

- separation of form and function: tmux handles the layout and sync, the individual tools handle the AI models.

- I can use remote machines with SSH

paul7986 2 days ago [-]
Cool, and GPT/Claude think there are only 2 "r"s in strawberry?

Wow, that's a bit scary (I use GPT a lot); what a bad fail that is!

joshstrange 2 days ago [-]
I maintain that “2 ‘r’s” is a semi-valid answer. If a human is writing, pauses, and looks up to ask that question, they almost certainly want to hear “2”.
furyofantares 2 days ago [-]
A few days ago I was playing a trivia-ish game in which I was asked to spell "unlabeled", which I did. The questioner said I was wrong, that it "has two l's" (the U.K. spelling being "unlabelled"). I jokingly countered that I had spelled it with two l's, which she took to mean that I was claiming to have spelled it "unlabelled".
sdesol 2 days ago [-]
Here are more LLMs:

https://beta.gitsense.com/?chats=ba5f73ac-ad76-45c0-8237-57a...

The left window contains all the models that were asked, and the right window contains a summary of the LLM responses. GPT-4o mini got it right, but the supermajority got it wrong, which is scary.

It wasn't until the LLM was asked to count out the R's that it acknowledged that GPT-4o mini was the only one that got it right.

Edit: I've disabled chatting in the app, since I don't want to rack up a bill. Should have mentioned that.

pizza 2 days ago [-]
You’re asking them about letters, meanwhile they’ve never seen any: https://imgur.com/a/NPKJ5F2
lxgr 2 days ago [-]
Gell-Mann amnesia is powerful. Hope you extrapolate from that experience!

At a technical level, they don't know because LLMs "think" (I'd really call it something more like "quickly associate" for any pre-o1 model and maybe beyond) in tokens, not letters, so unless their training data contains a representation of each token split into its constituent letters, they are literally incapable of "looking at a word". (I wouldn't be surprised if they'd fare better looking at a screenshot of the word!)
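To make that concrete, here's the mismatch in sketch form (the token IDs and splits below are made up for illustration):

    // What code sees vs. what the model sees.
    const word = "strawberry";

    // Code can iterate over characters:
    const rCount = [...word].filter((c) => c === "r").length; // 3

    // The model instead receives opaque token IDs, e.g. something like
    // [496, 675, 15717] for "str" + "aw" + "berry" (IDs invented here).
    // No single "r" is its own token, so the input never exposes
    // individual letters to count.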

robwwilliams 2 days ago [-]
Today: Claude

Let me count carefully: s-t-[r]-a-w-b-e-[r]-[r]-y

There are 3 Rs in "strawberry".

e1g 2 days ago [-]
This app uses Claude over the API, and that returns "In the word "strawberry" there are 2 r's." Claude web chat gets it right, though.
sdesol 2 days ago [-]
I would not be surprised if OpenAI, Anthropic, Meta, and others use the feedback system to drive corrections. Basically, if we use the API, we may never get the best answer, but it could also be true that all feedback will be applied to future models.
ranguna 2 days ago [-]
lmarena.ai is also pretty good. It's not mac exclusive, works from the browser and has a bunch of different AIs to choose from. It doesn't keep a history when you close the tab though
cryptozeus 2 days ago [-]
Thanks for simple landing page and most simple example anyone can understand.
whatever1 2 days ago [-]
Isn’t this cheating? What will the AI overlords think about this behavior once they take over things ?
sagarpatil 2 days ago [-]
msty.app does this and much more. It’s open source too.
pizza 2 days ago [-]
I looked at the site and it doesn't appear to be open source. https://msty.app/pricing
dcreater 2 days ago [-]
It's definitely not open source
theoreticalmal 2 days ago [-]
Free for personal use with lots of local data, but not open source
486sx33 2 days ago [-]
Sweet!