Well, it seems quite important whether the DROS registration could possibly have been staged.
That would be difficult. To purchase a gun in California you have to provide photo ID[1], proof of address[2] and a thumbprint[3]. It also looks like the payment must be trackable[4], and gun stores have to maintain video surveillance footage for up to a year.[5]
My guess is that the police haven't actually investigated this as a potential homicide, but if they had, there should be very strong evidence that Balaji bought a gun. A very sophisticated actor could potentially fake this evidence, but it seems challenging (I can't find any historical examples of this happening). It would probably be easier to corrupt the investigation. Or the perpetrators might just hope that there would be no investigation.
There is a 10-day waiting period to purchase guns in California[5], so Balaji would probably have started planning his suicide before his hiking trip (I doubt someone like him would have owned a gun for recreational purposes).
Is the interview with the NYT going to be published?
I think it's this piece that was published before his death.
Is any of the police behavior actually out of the ordinary?
Epistemic status: highly uncertain: my impressions from searching with LLMs for a few minutes.
It's fairly common for victims' families to contest official suicide rulings. In cases with lots of public attention, police generally try to justify their conclusions. So we might expect the police to publicly state whether there is footage of Balaji purchasing the gun shortly before his death. It could be that this will still happen with more time or public pressure.
Land in space will be less valuable than land on earth until humans settle outside of earth (which I don't believe will happen in the next few decades).
Why would it take so long? Is this assuming no ASI?
As in, this is also what the police say?
Yes, edited to clarify. The police say there was no evidence of foul play. All parties agree he died in his bathroom of a gunshot wound.
Did the police find a gun in the apartment? Was it a gun Suchir had previously purchased himself according to records? Seems like relevant info.
The only source I can find on this is Webb, so take with a grain of salt. But yes, they found a gun in the apartment. According to Webb, the DROS registration information was on top of the gun case[1] in the apartment, so presumably there was a record of him purchasing the gun (Webb conjectures that this was staged). We don't know what type of gun it was[2] and Webb claims it's unusual for police not to release this info in a suicide case.
This is an attempt to compile all publicly available primary evidence relating to the recent death of Suchir Balaji, an OpenAI whistleblower.
This is a tragic loss and I feel very sorry for the parents. The rest of this piece will be unemotive, as it is important to establish the nature of this death as objectively as possible.
I was prompted to look at this by a surprising conversation I had IRL suggesting credible evidence that it was not suicide. The undisputed facts of the case are that he died of a gunshot wound in his bathroom sometime around November 26 2024. The police say it was a suicide with no evidence of foul play.
Most of the evidence we have comes from the parents and George Webb. Webb describes himself as an investigative journalist, but I would classify him as more of a conspiracy theorist, based on a quick scan of some of his older videos. I think many of the specific factual claims he has made about this case are true, though I generally doubt his interpretations.
Webb seems to have made contact with the parents early on and went with them when they first visited Balaji's apartment. He has since published videos from the scene of the death, against the wishes of the parents,[1] and as a result the parents have now unendorsed Webb.[2]
List of evidence:
Evidence against:
My interpretations:
Overall my conclusion is that this was a suicide with roughly 96% confidence. This is a slight update downwards from 98% when I first heard about it and overall quite concerning.
I encourage people to trade on this related prediction market and report further evidence.
Useful sources:
I'm not linking to this evidence here, in the spirit of respecting the wishes of the parents, but this is an important source that informed my understanding of the situation.
Source: Poornima Ramarao (11:22)
Source: Poornima Ramarao (12:38)
Source: Poornima Ramarao (13:02)
Source: Poornima Ramarao (15:47)
Source: Poornima Ramarao (16:36)
Source: George Webb + Poornima Ramarao (1:45)
Source: George Webb (9:56)
Source: (23:27)
Source: (8:02)
Source: (26:00)
Source: George Webb + Poornima Ramarao (0:35)
Source: George Webb (6:53)
Source: George Webb (3:38)
Source: George Webb (5:44)
Source: George Webb (5:46)
Source: George Webb (0:05)
Source: George Webb (6:23)
Source: George Webb (9:12)
Source: Poornima Ramarao (1:18)
Source: George Webb (9:45)
Source: Poornima Ramarao (4:30)
Source: George Webb (9:30)
Source: Poornima Ramarao (2:40)
Source: Poornima Ramarao (4:14)
Source: Poornima Ramarao (12:42)
Source: Ramamurthy (17:37)
Source: George Webb (13:29)
Source: George Webb (5:43)
Has someone made an ebook that I can easily download onto my kindle?
I'm unclear if a good ebook should include all the pictures from the original version.
LLMs can pick up a much broader class of typos than spelling mistakes.
For example, in this comment I wrote "Don't push the frontier of regulations" when from context I clearly meant to say "Don't push the frontier of capabilities". I think an LLM could have caught that.
LessWrong LLM feature idea: Typo checker
It's becoming a habit for me to run anything I write through an LLM to check for mistakes before I send it off.
I think the hardest part of implementing this feature well would be getting it to comment only on things that are definitely mistakes or typos. I don't want a general LLM writing-feedback tool built into LessWrong.
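A minimal sketch of what the strict version of this might look like: the key design choice is in the prompt, which asks the model to flag only definite mistakes (including context-contradicting word choices like "regulations" for "capabilities") and to stay silent otherwise. The function name and wording here are purely illustrative assumptions, not an actual LessWrong feature or API.

```python
# Hypothetical sketch of a strict typo-checking prompt for an LLM-based
# checker. Everything here is illustrative; the prompt would be sent to
# whatever LLM backend the site uses.

def build_typo_prompt(draft: str) -> str:
    """Wrap a draft comment in instructions that ask the model to flag
    only definite typos, not general writing feedback."""
    return (
        "You are a copy editor. List ONLY definite mistakes in the text "
        "below: misspellings, missing words, or words that clearly "
        "contradict the surrounding context (e.g. 'regulations' where "
        "'capabilities' was meant). Do not comment on style, structure, "
        "or substance. If there are no definite mistakes, reply exactly: "
        "No typos found.\n\n"
        f"TEXT:\n{draft}"
    )

prompt = build_typo_prompt("Don't push the frontier of regulations.")
```

Constraining the model to a fixed "no typos found" reply when it has nothing definite to report is one simple way to suppress the open-ended feedback that a general writing assistant would produce.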
The ideal version of Anthropic would
In practice I think Anthropic has
What I would do differently.
My understanding is that a significant aim of your recent research is to test models' alignment so that people will take AI risk more seriously when things start to heat up. This seems good but I expect the net effect of Anthropic is still to make people take alignment less seriously due to the public communications of the company.
Some people are asking for a source on this. I'm pretty sure I've heard it from multiple people who were there in person but I can't find a written source. Can anyone confirm or deny?