I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Okay, you just doubled down, so clearly this discussion isn't going anywhere. It's also off topic anyway since it's not directly addressing the claims of this post.
Please kindly refrain from continuing this line of discussion on this post. I'd welcome additional comments if you want to address the claims of the post directly, though.
Your claims overgeneralize, which makes them false. To make sure I'm not just biased because I am religious myself, here's Claude's take on your comment when I asked it to fact-check it.
Prompt: can you fact check this comment that was posted to an online forum (not mine but i suspect it might not be correct but don't trust myself not to be biased so looking for your take) [your comment copied in]
Response:
I'll analyze this forum comment by examining its claims about religion, compassion, and philosophical history.
The comment contains several sweeping generalizations and historical inaccuracies:
1. **Religious texts and violence**: While Abrahamic texts do contain violent passages, characterizing the "overwhelming majority" as "justifications for genocide and ethnic supremacy" is factually incorrect. These texts contain diverse content including ethical teachings, poetry, historical narratives, and legal codes. The violent passages represent a minority of the content.
2. **"2,000 years of the worst violence in history"**: This statement ignores that violence has existed in all human societies regardless of religion. It also overlooks that many historical atrocities were driven by non-religious ideologies (e.g., 20th century totalitarian regimes).
3. **Religious monopoly on compassion**: While some religious groups do claim exclusive moral authority, many traditions explicitly teach universal compassion that extends beyond group boundaries. The comment oversimplifies complex theological positions across diverse traditions.
4. **Platonic origins claim**: The assertion that Abrahamic religions derived their concepts of compassion and empathy primarily from Plato is historically questionable. While Hellenistic philosophy influenced later Jewish and Christian thought, these traditions also drew from their own cultural and textual sources that pre-dated significant Greek influence.
5. **"Universal religion"**: This term is never clearly defined, making many of the claims difficult to evaluate precisely.
The comment does raise legitimate concerns about religious exclusivism and historical misuse of religion to justify violence, but its broad generalizations undermine its credibility as an objective analysis of religion's relationship to compassion and empathy.
Point 5 is obviously an artifact of my failing to give Claude context on what universal religion means. I didn't define it in the article either, but I think it's clear what I mean: religions that see it as their purpose to apply to all people, not just to a single ethnic group or location.
Ranked in order of how interesting they were to me when I got interested in them, which means approximately chronological order, because the more ideas I knew, the less surprising new ideas were (since they were in part predicted by earlier ideas that had been very interesting).
While history suggests we should be skeptical, current AI models produce real results of economic value, not just interesting demos. This suggests we should take more seriously the possibility that they will produce TAI, since they are more clearly on that path and are already having significant transformative effects on the world.
I think it's a mistake in many cases to let philosophy override what you care about. That's letting S2 do S1's job.
I'm not saying no one should ever be able to be convinced to care about something, only that the convincing, even if a logical argument is part of it, should not be all of it.
I don't think a philosophy of mind is necessary for this, no, although I can see why it might seem like it is if you've already assumed that philosophy is necessary to understand the world.
It's enough to just be able to model other minds in the world to know how to show them compassion, and even without modeling, compassion can be enacted, even if it's not known to be compassionate behavior. This modeling need not rise to the level of philosophy to get the job done.
I'm a SWE, I use AI every day to do my job, and I think the idea that AI is the cause of reduced engineer hiring is basically false.
There is probably some marginal effect, but I instead think what we're seeing today is because:
If interest rates were still 0%, companies could afford to hire lower-productivity engineers and things would look more like they did in the past. Also, on this argument, if AI makes engineers more productive, we'd expect AI to be putting more people over the productivity bar, thus mitigating the effects of the higher risk-free rate. So, if anything, AI is having less of a real impact than it seems.
I don't know if SWE automation is coming. Programming automation is already here. Whether that puts engineers out of work remains to be seen (so far, no).
For example, I suspect philosophical intelligence was a major driver behind Eliezer's success (and not just for his writing about philosophy). Conversely, I think many people with crazy high IQ who don't have super impressive life achievements (or only achieve great things in their specific domain, which may not be all that useful for humanity) probably don't have super high philosophical intelligence.
Rather than "philosophical intelligence" I might call this "ability to actually win", which is something like being able to keep your thoughts in contact with reality, which is surprisingly hard to do for most complex thoughts that get tied up into one's self beliefs. Most people get lost in their own ontology and make mistakes because they let the ontology drift free from reality to protect whatever story they're telling about themselves or how they want the world to be.
AI will not kill everyone without sequential reasoning.
This statement might be literally true, but only because of a loophole like "AI needs humans to help it kill everyone". Like we're probably not far away from, or may already have, the ability to create novel biological weapons, like engineered viruses, that could kill all humans before a response could be mustered. Yes, humans have to ask the LLM to help them create the thing, and then humans have to actually do the lab work and deployment, but from an outside view (which is especially important from a policy perspective), this looks a lot like "AI could kill everyone without sequential reasoning".
So while your point is mostly true, I want to highlight that there are some situations where simply asking people to respect your food norms is a problem, and they mostly arise in a specific sort of culture that is especially communal with regard to food and sees you as part of the ingroup.
For example, it's a traditional upper-class Anglo norm that it's rude to put your hosts out by asking them to make you something special to accommodate your diet. You're expected to get along and eat what everyone else eats. You will be accommodated if you ask, but you will also be substantially downgraded in how willing to get along you seem, and you'll be a less desired dinner guest, and thus get fewer invites and be less "in".
I've heard of similar issues in some East Asian cultures where going vegan is seen as an affront to the family. "What do you mean you won't eat my cooking?!? Do you think you're better than your mother???!"
The problem is that food is tied with group membership, and you're expected to eat the same food as the rest of the ingroup. If you're not a rare outsider guest, you'll be seen as defecting on group cohesion.
But most Westerners are not part of cultures like these. Western culture is highly atomized, and everyone is seen as a unique individual, so it's not unusual for individuals to have unique food needs, and it becomes polite, and a sign of a good host, to accommodate everybody. But this is historically an unusual norm to have within the ingroup.