As someone with a PhD in Philosophy, I read this when it came out and loved it, but as far as I know, it had no impact on the field.
There’s a quote by Noam Chomsky which is something to the effect of … “When I presented my ideas to mathematicians, they evaluated me on those ideas. When I presented my ideas to social scientists, they wanted to know what credentials I had before listening to me”.
Seems like this could be a similar issue. It does seem that many philosophers are more like the latter social scientists – they want to know WHO said something, not simply if it’s true or not.
From my limited exposure, a lot of philosophy is really about the meaning of words, and who invented a term, and not what’s actually true. There are a lot of convoluted arguments that could be made a lot more precise with simpler language and empirical tests.
For example, much of what I’ve read about “AI alignment” – which DOES seem to be done by “professional philosophers” with prestigious degrees – suffers greatly from this. They start with questionable premises, and then invent an elaborate framework of knowledge on top of it, which is almost certainly wrong.
That is, the game is to build more and more rhetoric/provocation on top of this new set of terms, without considering whether the premises are correct. It’s better if what you’re saying isn’t falsifiable for a long time.
e.g. the book “Superintelligence” is like this, and the funny thing is that its author (Nick Bostrom) almost admits as much in the preface. Similarly with the “longtermism” thing. It reminds me of this funny quote:
“My No. 1 worry is: what if we’re focussed on entirely the wrong things?” he said. “What if we’re just wrong? What if A.I. is just a distraction?”
from a long New Yorker article which I commented on here: https://news.ycombinator.com/item?id=32393700
BTW I also remember reading this Aaronson article on a train over 10 years ago! I’m pretty sure I found it on Hacker News. I got about halfway through and thought it was brilliant. The rest was over my head.
And I should probably add that my comments probably come off as a bit harsh, and apply mainly to certain things that have gotten a lot of attention, and that I’ve had a longstanding interest in. I did qualify it with “from my limited exposure” :-)
The full quote from Chomsky can be read here - http://www.autodidactproject.org/quote/chomsky1.html
Of note is that he is referring to political scientists, not philosophers, and Aaronson’s paper specifically targets philosophers closer to the natural sciences.
Yes, but from what I understand the OP is saying something similar could be the case here (the same scenario, but with a different group - the invariant is the basis for rejecting the idea, not the group rejecting it).
I can read that from the comment too… but it doesn’t really say “the majority of philosophers rejected Aaronson”, more that his paper didn’t garner much attention.
Aaronson is semi-controversial but not as controversial as Chomsky ;)
Thanks for digging it up, I should have done that :)
It does seem like he is referring to political science, which puts it in a different light … Political arguments are notorious for “attacking the speaker, not the idea”
But I do think it’s relevant regardless because it’s hard to bring knowledge in from different fields … and arguably this is more of a problem as time goes on, simply because there’s more knowledge and more people writing about it
That’s the effect of the academy, AFAICT. It’s not like the olden days where Aquinas could believe in God and so did all his colleagues, and they debated which proofs of God’s existence worked or didn’t. Now you have one guy studying Aquinas, one guy studying Plato, one guy studying Derrida, etc. and they don’t agree on anything fundamental, so to come to a consensus, you just talk about the history of the ideas and not their actual truth. You do get some actual disputes, but it’s all on technical minutiae, like “is justification internal or external?” That sort of thing.
Hume said the same thing. He was wrong. :-)
I think that this may be a bad example—or maybe I misunderstand your point. (I realize that you have a PhD in philosophy, but I’m also writing for others. I don’t mean to be teaching you things you don’t know about, e.g., the justified true belief theory of knowledge.)
A lot of people have thought that we can define knowledge as justified true belief. If we do that, we will need to understand what we mean by justified. Once we dig into that question, it is far from trivial whether justification is internal or external. I’m assuming we all agree that “What is knowledge?” is a real and significant question. Since we want to understand knowledge, and understanding justification seems required in order to understand one long-standing and influential account of knowledge, I think that whether justification is internal or external is a very real dispute and not merely technical minutiae. (As an additional point, I think that the arguments in favor of internalism and externalism themselves are interesting and well worth considering. There are strong reasons in favor of both, but both have problems. It’s a fascinating debate. YMMV, of course.)
Here’s a different way of putting this. People inside and outside of academic philosophy often long for the good old days when philosophy wasn’t so technical or hyper-specialized. (Lots of longing for the good old days when Bertrand Russell wrote on logic, ethics, and epistemology.) But I wonder whether that nostalgia isn’t parallel to (a hypothetical) longing for the good old days of Newtonian physics—you know, when physics was more straightforward and not so hyper-specialized. I don’t think we can (or should want to) go backwards. Philosophy is far more complicated and specialized, and this can be frustrating, but I don’t think that’s primarily or only because of (say) pressures to publish or sloppy thinking. The questions are genuinely, deeply difficult.
Bottom line: I am not convinced that contemporary philosophers have given up the pursuit of truth and replaced it with the history of ideas.
The key thing is context. Trying to avoid it brings problems. Just talk to the people you are talking to and stop stressing the global truth of your statements. See, I did it just now, I made a statement without a disclaimer and I trust that you understand that it is mostly in alignment with what I believe, not a global truth, but now that I’ve added this disclaimer I’m back to trying to climb the hill of global truth-y-ness.
Language is decentralized and ever changing. The western philosophy tradition is too obsessed with who was first or whatever instead of realizing that what we actually want is to pass the torch and teach the next generation our deeply held insights (without trapping them in minutiae and technical arguments, which can be important, but are also often a case of missing the forest for the trees).
I agree broadly.
I think the internalism/externalism debate qualifies as minutiae in the sense that it only makes sense as a question if you already accept the idea that knowledge is a justified true belief. We all agree that we want knowledge and it’s better to have knowledge than ignorance (or do we?), but only a smaller number of people agree that knowledge is something like justified true belief. Among people who are debating JTB, there are Gettier problems and whatnot, and so they end up digging into well, what is justification anyway and how does that work? And then you can go further down and have various flavors of internalism/externalism and hybrid theories and controversies within them.
It’s fair though for someone else to look at the justification debate from outside and say, well, that’s all irrelevant because I have a pragmatic theory of knowledge, and JTB doesn’t apply, or I’m Nietzschean and “knowledge” is just the mask of will to power, etc. It would be really good if we could actually nail down all the big stuff before we really dig into the small stuff, but alas, that’s not how the hermeneutic circle works.
Hm interesting, I’d be interested to read what Hume said about it, and why he was wrong :)
Enquiry Concerning Human Understanding 7.1:

The chief obstacle, therefore, to our improvement in the moral or metaphysical sciences is the obscurity of the ideas, and ambiguity of the terms. […] And, perhaps, our progress in natural philosophy is chiefly retarded by the want of proper experiments and phaenomena, which are often discovered by chance, and cannot always be found, when requisite, even by the most diligent and prudent enquiry. As moral philosophy seems hitherto to have received less improvement than either geometry or physics, we may conclude, that, if there be any difference in this respect among these sciences, the difficulties, which obstruct the progress of the former, require superior care and capacity to be surmounted.
As for why it’s wrong, so far no attempt at doing philosophy in clear language has succeeded, but maybe we just need to give it another three hundred years.
This is the ideal we should all strive for and why anonymity is so important. Also why I believed datalisp would not require much rhetoric… although I should have realized that Ron Rivest was not enough to change the way we do the web so who am I? Still. The reason mathematicians behave this way is because they know that theorems hold up in reality.
I always wondered why it’s called complexity. Big O and NP are about resource requirements. To my lay understanding, complexity relates more to chaos and nonlinearity. Sensitive dependence on initial conditions doesn’t mean much when you can initialize all your variables.
Total guess: because each complexity class corresponds to a set of problems which can be solved that cheaply, and the bigger classes contain more difficult / complicated problems.
From On the Computational Complexity of Algorithms (1965), the word “complexity” was chosen to refer to the difficulty of computing sequences of numbers (emphasis added):

One finds […] that some computable sequences are very easy to compute whereas other computable sequences seem to have an inherent complexity that makes them difficult to compute.
There are many ways to measure complexity depending on the domain.
Kolmogorov complexity is about finding the smallest program that produces a given string (the actual definition is obviously more technical).
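Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper bound on it: a highly structured string has a short description and compresses well, while random bytes barely compress at all. A minimal Python sketch of that idea (zlib is just one stand-in compressor, not part of any formal definition):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed form: a rough upper bound on the
    # (uncomputable) Kolmogorov complexity of `data`, plus format overhead.
    return len(zlib.compress(data, 9))

repetitive = b"ab" * 5000        # structured: "print 'ab' 5000 times" is a tiny program
random_ish = os.urandom(10000)   # incompressible with overwhelming probability

print(compressed_size(repetitive))  # tiny compared to len(repetitive) == 10000
print(compressed_size(random_ish))  # close to 10000
```

The gap between the two numbers is the whole point: both inputs are 10,000 bytes long, but one has far more structure, i.e. a much shorter description.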
The VC dimension is a generalization of degrees of freedom, and a model with high VC dimension can classify more complex data sets.
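One way to see the VC dimension concretely is to brute-force “shattering”. The sketch below (my illustration, not anything from the thread) uses interval classifiers on the real line: they can realize every labeling of any two points, but never the on/off/on labeling of three points, so their VC dimension is 2:

```python
from itertools import product

def candidate_intervals(points):
    # Finitely many intervals suffice to test shattering on a finite set:
    # every interval with endpoints at the points, plus one that misses them all.
    cands = [(a, b) for a in points for b in points if a <= b]
    off = min(points) - 1.0
    cands.append((off, off))  # contains no point: realizes the all-zero labeling
    return cands

def shatters(points, intervals):
    # True if for every 0/1 labeling of `points` some interval labels
    # exactly the points inside it as 1 and the rest as 0.
    for labels in product([0, 1], repeat=len(points)):
        realizable = any(
            all((a <= x <= b) == bool(lab) for x, lab in zip(points, labels))
            for a, b in intervals
        )
        if not realizable:
            return False
    return True

print(shatters([1.0, 2.0], candidate_intervals([1.0, 2.0])))            # True
print(shatters([1.0, 2.0, 3.0], candidate_intervals([1.0, 2.0, 3.0])))  # False
```

The failing case for three points is the labeling (1, 0, 1): any interval containing the two outer points must also contain the middle one.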
Algorithmic complexity means that there is a hierarchy of difficulty, or complication, of solving problems, and problems that take fewer steps to solve are said to be simpler.
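The “fewer steps means simpler” idea can be made concrete by counting comparisons for the same problem solved two ways. A toy sketch (step counts here stand in for the formal cost measure): searching a sorted list of 1,000 items takes up to 1,000 comparisons linearly but only about log2(1000) ≈ 10 by binary search.

```python
def linear_steps(xs, target):
    # Comparisons made by a linear scan.
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_steps(xs, target):
    # Comparisons made by binary search on sorted input (written out
    # by hand so we can count the steps).
    lo, hi, steps = 0, len(xs), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

xs = list(range(1000))
print(linear_steps(xs, 999))  # 1000 comparisons: ~n
print(binary_steps(xs, 999))  # 10 comparisons: ~log2(n)
```

Same problem, same answer, wildly different step counts; complexity theory studies how those counts grow with input size.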
I like that VC dimension idea. To me a system is simple if an abstraction of it is much smaller than the thing itself. Think a crystal. Something is complex if it can’t be abstracted so easily. Think a mitochondrion. The same level of abstraction of a mitochondrion (patterns of atoms) is a sizable fraction of the thing itself. I don’t think that maps directly onto algorithmic complexity. I guess we just use the same word for somewhat different things.
Sounds (a lot!) like you’re talking about entropy
Absolutely, yes. Just didn’t have room in one paragraph to bring in such a huge concept.
From what you say, it sounds like Kolmogorov complexity could be very interesting to you too – and a related concept, minimum description length.