The Owl and The Electric Encyclopedia : Brian Cantwell Smith
Abstract
Smith, B.C., The owl and the electric encyclopedia, Artificial Intelligence 47 (1991)
251-288.
A review of "On the thresholds of knowledge", by D.B. Lenat and E.A. Feigenbaum.
1. Introduction
At the 1978 meeting of the Society for Philosophy and Psychology,1 some-
what to the audience's alarm, Zenon Pylyshyn introduced Terry Winograd by
claiming that his pioneering work on natural language processing had repre-
sented a "breakthrough in enthusiasm". Since those heady days, AI's hubris
has largely passed. Winograd himself has radically scaled back his estimate of
the field's potential (see, in particular [70, 72]), and most other practitioners
are at least more sober in their expectations. But not to worry. Unbridled
enthusiasm is alive and well, living in points South and West. 2
* Thanks to David Kirsh, Ron Chrisley, and an anonymous reviewer for helpful comments on an
earlier draft, and to Randy Davis for slowing down its original presentation.
1 Tufts University, Medford, MA.
2 Or at least it is alive. The original version of Lenat and Feigenbaum's paper (the one presented
at the Foundations of AI conference, in response to which this review was initially written) was
considerably more optimistic than the revision published here some four years later. For one thing,
their estimate of the project's scale has grown: whereas in 1987 they suggested the number of
things we know to be "many hundreds of thousands--perhaps a few million", that estimate has
now increased to "many millions (perhaps a few hundred million)". In addition, whereas their
original paper suggested that inference was essentially a non-problem (a sentiment still discernible
in their "Knowledge Is All There Is Hypothesis", p. 192), the project is now claimed to
incorporate at least "two dozen separate inference engines", with more on the way. Again, not only has the sophistication of their representation scheme increased, but (as predicted here in Section 3) their representational conventions have developed from those of a simple frame system towards something much more like full predicate calculus, complete with propositions, constraints, set-theoretic models, etc. (Their words: "the need for more formality, for a more principled representation language" was one of the "surprises that actually trying to build this immense KB has engendered".) All these signs of increased sobriety are reassuring, of course, although, given their ambition and eclecticism, one wonders whether the resulting complexity will be manageable.

More seriously, a conceptual shift has overtaken the project--more ramifying than these relatively simpler issues of scale. At the 1988 CYC review meeting (in Palo Alto), Lenat claimed that whereas he and Feigenbaum had initially taken their project as one of coding up everything in the encyclopedia (hence the name "CYC"), they were now convinced that the real task was to write down the complement of the encyclopedia: everything we know, but have never needed to say. This is an astounding reversal. Dreyfus should feel vindicated [22], since this shift in focus certainly strengthens any doubts about the ultimate adequacy of an allegiance to explicit representation.

For all that, their optimism remains intact. They still believe that by 1994 they will approach the crossover point where a system will pass the point of needing any further design or hands-on implementation, and will from then on improve simply by reading and asking questions (implying, I suppose, that AI's theoretical preliminaries will be concluded). Furthermore, they suggest that this second "language-based learning" stage will in turn end by about the end of the decade, at which point we will have a system "with human-level breadth and depth of knowledge". They claim these things, furthermore, in spite of such telling admissions as the following, written in 1989: "much of the 1984-89 work on CYC has been to get an adequate global ontology; i.e., has been worrying about ways to represent knowledge; most of the 1990-94 work will be actually representing knowledge, entering it into CYC."

Enthusiasm takes many forms, even in AI. Most common is the belief that a
simple mechanism can accomplish extraordinary feats, if only given enough of
some resource (time, information, experience, computing power). Connection-
ist networks are a current favourite, but the tradition is time-honoured.
Feedback circuits, theorem provers, production systems, procedural repre-
sentations, meta-level architectures--all have had their day. In their present
paper, Lenat and Feigenbaum take up the enthusiast's cause, defending a new
flavour of "great expectation". They suggest that just a million frames,
massaged by already-understood control structures, could intelligently manifest
the sum total of human knowledge.
The paper exhibits another kind of zeal as well--more general than precipi-
tate faith in mechanism, and ultimately more damaging. This time the fervour
is methodological: an assumption that you can move directly from broad
intuition to detailed proposal, with essentially no need for intermediate
conceptual results. Let's look at this one first.
General insights, even profound ones, often have the superficial air of the
obvious. Suppose Newton, in an attempt to strike up a conversation at a
seventeenth century Cambridge pub, opened with the line that he had made an
astonishing discovery: that it takes energy to do work. It is hard to believe the
remark would have won him an extra pint. Newton is famous not for
enunciating glib doctrines, but for elaborating a comprehensive system of
details reaching from those encompassing insights all the way through to ...
2. Conceptual tunneling
L&F start with the Knowledge Principle, cited above: that you have to know
specific things about a domain to be competent at it. This insight is then used
to discriminate a set of levels of expertise: rudimentary, middle-level prac-
titioner, and expert. These levels are introduced with tautological generaliza-
tion: to get started, you need to know something; the more you know, the less
you need to search; once you know enough, additional knowledge will only
infrequently (though still occasionally) be useful. Little more is said, unfortu-
nately. And if the text is read closely, it shifts from the banal to the false.
Take the middle "practitioner" level. Without comment, L&F claim that
"today's expert s y s t e m s . . , include enough knowledge to reach the level of a
typical practitioner performing the task." This claim may be true in a few
limited, carefully chosen domains. In the sweeping context of the paper, on the
other hand, the remark implies something different: that moderate expertise is
achievable in arbitrary (if still specific) arenas. The latter claim simply isn't
true; we don't yet have expert system personnel managers, nurses, or private
detectives, and there are many, including some of the technology's protagonists
(see, e.g., [16]), who suspect we never will. So the reader ends up caught
between the plausibility of the narrow reading and the presumption of the
broad one.
Similarly, consider L&F's comments about getting started. They claim that
to solve a problem you need a minimum amount of knowledge in order to
"state [it] in a well-formed fashion". This is a major assumption, again
debatable. As students of AI are increasingly realizing (see [1, 2, 13, 21, 24, 39,
44, 48, 57-59, 67, 72] for a variety of such views), there's no reason to believe
that people formulate anything like all the problems they solve, even internally.3 Children happily charge around the world long before they acquire any
conceptual apparatus (such as the notions of "route" and "destination") with
which to formulate navigational problems. So too with language: fluent dis-
course is regularly conducted in complete absence of a single linguistic
concept--including "word" or "sentence", let alone Molière's "prose" or the
logician's "substitution salve veritate". Similarly, when you reach around and
retrieve your coffee cup from the side table, there is no reason--especially no a
priori reason--to believe that you formulate much of anything at all. Problems
stated in words have to be formulated, yes; but only because to "formulate"
means to state in words.
Here we see the beginning of the tunnel. If (i), in order to sidestep issues of
explicit formulation, and to avoid foundering in simplistic cases, the minimalist
threshold were generalized to "the solution of any complex task requires some
3 Suchman [67], for example, argues that conceptualizing action is often a retrospective
practice--useful for a variety of purposes (such as explanation), but not implicated in engendering
the action in the first place, especially in routine or everyday cases.
and John Adams both died (within an hour of each other) on July 4, 1826--50 years to the day after the signing of the Declaration of Independence they co-authored. It's rumoured that the price of bananas and the suicide rate in France tracked each other almost perfectly for years. The words "abstemious" and "facetious" exhibit all five vowels in alphabetic order. Do we have an explanation for these facts? No. So, should we look for additional similarities? Probably not. A proper treatment of analogy requires a notion of relevant similarity. Nor can their suggestion of entering "specialized versions" of analogical reasoning in an n-dimensional matrix (according to "task domains, ... user-modes, ..., analogues with various epistemological statuses", etc.) be ...
This time we're given neither supporting details nor motivating intuition. On
the unwarranted assumption that parsing is solved, and if by "semantic
7 Actually, it might be false. Encoding control directions at the meta-level is another instance of
L&F's unswerving allegiance to explicit formulation. Unfortunately, however, as has been clear at
least since the days of Lewis Carroll, not everything can be represented explicitly; at some point a
system must ground out on a non-represented control regimen. Now L&F are presumably relying
on the computational conceit that any control structure whatsoever can be implemented explicitly,
by representing it in a program to be run by another, non-represented, underlying control regimen.
Proofs of such possibility, however, ignore resource bounds, real-time response, and the like. It is
not clear that we should blithely assume that our conceit will still hold under these more restrictive
constraints, especially in as pragmatic a setting as L&F imagine.
For astronomers, telescopes are tools, not subject matters; the theoretical notions in terms of
which we understand telescopes aren't the constitutive notions in terms of which we understand
what is seen through telescopes. AI, in contrast, is different: we exactly do claim that computational
notions, such as formal symbol manipulation, are applicable to the emergent intelligence we
computationally model.
Note in passing that although this is reminiscent of Searle's [60] notions of strong and weak AI,
there is a crucial difference. In making such distinctions, Searle is distinguishing the relation
between a computational system and the mind: whether only their surface behaviours are claimed
similar (weak), or whether the way in which the computational process works is claimed to be the
way in which the mind works (strong). L&F, on the other hand, at least in this proposal, are
making no psychological claims; hence Searle's terms, strictly speaking, don't apply (although
L&F, if pressed, would presumably opt for the weak option). In contrast--and in complete
independence of psychology--they propose to build a computer system, and computer systems necessarily work in computational ways. I.e., they have to be "strong" about their own project: otherwise they would be in the odd position of having no idea how to go about developing it. And it is clear, in this sense, that they are "strong"; why else would they be discussing slots, frames, and meta-rules?

So what of empiricism? As L&F suggest (this is their primary brief), the computational models they recommend building should of course be tested. But as I suggest in the text, to claim that isn't to claim that computers are the paradigmatic object of study. On the contrary, I would have thought an appropriate "empirical" stance for computational AI would go something as follows: one would (a) study intelligent behaviour, independent of form (biological, artifactual, whatever), but known in advance (i.e., pre-theoretically) to be intelligent behaviour; (b) construct (strong) computational models that manifest the essential principles that are presumed or hypothesized to underlie that intelligence; and then (c) conduct experiments to determine those models' adequacy. The point is that it is the first stage, not the third, that would normally be called "empirical".
to tell what they think; at best they seem to have in mind what would normally
be called hypothesis testing, not empirical inquiry. There's no admission that
there are external data and practices to be studied--that ours isn't an entirely
internalist, constructed game (they do say that "intelligence is still so poorly
understood that Nature still holds most of the important surprises", but shortly
thereafter dismiss all of deduction, induction, and so on as essentially solved).
In a similar vein, it's striking that genuine semantics isn't even mentioned--not
the question of "semantic representation" (i.e., how concepts and meanings
and the like are stored in the head), but the tougher question of how symbols
and representations relate to the world.
Alas, it looks as if what discouraged Winograd hasn't even been imagined by
the present authors.
Perhaps someone will object. L&F march to the pragmatist's drum, after all.
So is it unfair to hold them to clear theoretical standards? I think not. For one
thing, in a volume on the foundations of AI, explicating premises should be the
order of the day. Second, there is the matter of scale. This is a large project
they propose--all of consensus reality, 50 million dollars for the first stage, etc.
Untutored pragmatism loses force in the face of a task of this magnitude (you
can bridge a creek without a theory, but you won't put a satellite into orbit
around Neptune). Furthermore, citing the modesty of human accomplishment
("people aren't perfect at these things") won't let L&F off the hook, especially
when what is particularly modest is people's understanding of their own
intellectual prowess. Fortunately, we humans don't have to know much about reasoning to be good at it--cf. the discussion of formulation, above. But L&F
can't piggy-back off our native competence, in creating a computational
version. Given that they're both starting from scratch, and committed to an
explicit-representation stance, they must understand what they're doing.
Table 1
A dozen foundational questions (the bar sets off the dissenting answer).

                                                  Logic   L&F     EC
1. Primary focus on explicit representation?      yes     yes   | no
2. Contextual (situated) content?                 no      no    | yes
3. Meaning dependent on use?                      no      no    | yes
4. Consistency mandated?                          yes   | no      no
So we're brought right back to where we started: with that hidden middle
realm. Let's dig deeper, therefore, and uncover some of its inner structure. I'll
do this by locating L&F's position with respect to twelve foundational questions--questions that could be asked of any proposed reasoning or inference
system. Given that we lack a general theory of representation (not only those
of us in AI, but the wider intellectual community as well--a sobering fact,
since our systems rest on it so fundamentally), posing such questions is as good
an analytic strategy as any. Furthermore, these twelve will help reveal L&F's
representational assumptions.
The answers are summarized in Table 1. To convey a better sense of the
structure of the territory, I've flanked L&F's position with two other replies.
On the left is the position of traditional formal logic (the system studied by
philosophers and logicians, not "logic-based" theorem provers or logic pro-
gramming languages--both too ill-defined to be of much help here). On the
right is my own assessment of the minimum an AI system will require in order
to achieve anything like genuine intelligence. For discussion, I'll call it a notion
of "embedded computation" (EC).
One point needs emphasizing, before turning to specifics. Embedded compu-
tation is still an emerging perspective, not yet a technical proposal. That
doesn't make it sheer speculation, however, nor is it purely idiosyncratic. A
growing number of researchers are rallying around similar views--so many, in
fact, that one wonders whether something like it won't be the next AI stage,
beyond the "explicit knowledge" phase that L&F represent. 9 Nonetheless, I
9 In part, but not solely, because of its potential compatibility with connectionism. For specific
discussion and results see, e.g., [1, 2, 12-15, 39, 48, 51, 55, 57-59, 66, 67, 72].
would be the first to admit that details remain to be worked out. But that's exactly my point. I'm contrasting it with L&F's position exactly in order to highlight how far I believe we are from achieving their stated goals. For purposes of the present argument, in other words, any claim that we don't yet understand some aspect of the speculative EC view--what nondiscrete computation would be like, say--counts for my position, and against L&F.10 All that matters is that there is some reason to believe that the issue or phenomenon in question is at least partially constitutive of intelligence. L&F are the ones with the short-term timetable, after all, not I.
~"In fact, as it happens, it doesn't even matter whether you think the EC view is computational
at all. What's at stake here are the requisite underpinnings for intelligence; it is a secondary issue as
to whether those underpinnings can be computationally realized. As is happens, I believe that the
(real) notion of computation is so much wider than L&F's construal that I don't take the
discrepancy between genuine intelligence and their proposal as arguing against the very possibility
of a computational reconstruction. But that's a secondary point.
~ "Explicit" fragments of a representational scheme are usually the sort of thing one can
imagine removing--surgically, as it were--without disturbing the structural integrity or representa-
tional content of the remainder.
13 Some of the reasons will emerge in discussions of later questions, and are argued in [65]. For
analogous views, again see the exploratory systems of Rosenschein and Kaelbling [58], Brooks
[12], and Chapman and Agre [13], and the writings of Suchman [67], Cussins [15], Dreyfus [21],
and Smolensky [66].
L&F may of course reply that they do embrace implicit representation, in the form of compiled
code, neural nets, unparsed images. But this isn't strictly fair. By "the L&F position" I don't mean
the CYC system per se, in inevitably idiosyncratic detail, but rather the general organizing
principles they propose, the foundational position they occupy, the theoretical contributions they
make. I.e., it isn't sufficient to claim that the actual CYC software does involve this or that
embedded aspect, as, in many cases, I believe it must, in order to work at all--see, e.g., footnotes
16 and 29. Rather, my plaint is with their overarching intellectual stance.
"1989" isn't absolute; when it appears in the New York Times, it usually refers
to the Gregorian calendar, not the Julian or Islamic one.
But language has no patent on contextual dependence. Computational
examples are equally common. When you button "QUIT" on the Macintosh file
menu, for example, the process that quits is the one that is running. The simple
e-mail address "JOHN", without an appended "@HOST" suffix, identifies the
account of whoever has that username on the machine from which the original
message is sent. If I set the alarm to ring at 5:00 p.m., it will ring at 5:00 p.m.
today. The machine language instruction "RETURN" returns control from the
current stack frame. If you button "EJECT", it ejects the floppy that is currently
in the drive.
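To make the last of these points concrete, here is a toy rendering in Common Lisp -- entirely my construction, with an invented host name and helper, not any real mailer's API:

    ;;; Toy sketch of the e-mail example (hypothetical; no real mailer
    ;;; works this way in detail).  For the sketch's sake the sending
    ;;; machine's identity is reified as a variable; in a real machine the
    ;;; relevant "context" may be no represented datum at all, just where
    ;;; the machine happens to sit in the network.
    (defparameter *this-host* "csli.stanford.edu")

    (defun resolve-address (address)
      "Complete an unqualified address against the sending machine's host."
      (if (find #\@ address)
          address                                        ; already absolute
          (concatenate 'string address "@" *this-host*)))

    ;; (resolve-address "JOHN") => "JOHN@csli.stanford.edu"
    ;; The same token "JOHN" picks out different accounts when sent from
    ;; different machines.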
Some quick comments on what contextual dependence isn't. First, none of
the cited examples should be read as implying that terms like "now", proper
names (or their internal analogues), machine instructions, and the like are
ambiguous. There's no reason (other than a stubborn retention of prior theory)
to treat the contextual dependence of reference as a matter of ambiguity.
Second, though related, the present issue of contextuality cross-cuts the
explicit/implicit distinction of question 1 ("here" and "now" are explicit
representations of contextually determined states, for example, whereas QUIT
and RETURN represent their contextually determined arguments implicitly, if at
all). Third, as with many semantical phenomena, representations typically have
(contextually dependent) contents; it's a category error to assume that those
contents have to be computed. Fourth--and even more important--contents
not only don't have to be, but typically can't be, determined solely by
inspecting the surrounding representational context. In the "QUIT" case, for
example, the process to be killed is instantiated on the machine, but that
doesn't imply that it is represented. Similarly, in the e-mail case, the host
machine plays a role in determining the relevant addressee, but the egocentrici-
ty obtains in virtue of the machine's existence, not in virtue of any self-
reference. And in the use of Gregorian dates, or in the fact that "1:27 p.m."
(on my word processor, today) refers to 1:27 p.m. Pacific Standard Time, not
only is the relevant context not represented by the machine, it is not a fact
within the machine at all, having instead to do with where and when the
machine is located in the world. 14
Here's a way to say it: the sum total of facts relevant to the semantical
valuation of a system's representational structures (i.e., the relevant context)
will always outstrip the sum total of facts that that system represents (i.e., its
content).
14 I am intentionally ignoring scads of important distinctions--for example, between the indexicality of representational content (of which "here" and "now" are paradigmatic exemplars), and
the even more complex relation between what's in fact the case and how it's represented as being
(the latter is more Suchman's [67] concern). Sorting any of these things out would take us far
afield, but I hope just this much will show how rich a territory isn't explored by L&F's proposal.
What, then, of the three proposals under review? Traditional logic, again paradigmatically, ignores context.15 The logical viewpoint, to use a phrase of Nagel's [50], embodies the historical imagination's closest approximation yet to a "view from nowhere". Contextual influence isn't completely gone, of course--it still plays a role in assigning properties and relations to predicates, for example, in selecting the "intended interpretation". But as far as possible logical theories ignore that ineliminable residue.

L&F are like the logicians; they ignore context too. And they have to. Context isn't a simple thing--something they don't happen to talk about much, but could add in, using their touted mechanism for coping with representational inadequacy: namely, adding another slot. On the contrary, their insistence that their "knowledge base" project can proceed without concern as to time, place, or even kind of use, is essentially an endorsement of a-contextual representation.

For my part (i.e., from the embedded perspective), I think the situated school is on to something. Something important. Even at its most objective, intelligence should be viewed as a "view from somewhere" [65]. Take an almost limiting case: suppose you were to ask L&F's system how many years it would be before the world's population reached 7 billion people? Without a contextual grounding for the present tense, it would have no way to answer, because it wouldn't know what time it was.16
                                            Logic   L&F     EC
Question 3. Does meaning depend on use?     no      no    | yes
15 Except the limiting case of intrasentential linguistic context necessary to determine by which quantifier a variable is bound.
16 L&F might reply by claiming they could easily add the "current date" to their system, and tie in arithmetic procedures to accommodate "within 10 years". My responses are three: (i) that to treat the particular case in this ad hoc way won't generalize; (ii) that this repair practice falls outside the very foundational assumptions on which the integrity of the rest of their representational project is founded; and (iii) that the problem it attempts to solve absolutely permeates the entire scope of human knowledge and intelligence.
17 Careful distinctions between meaning and content aren't particularly common in AI, and I don't mean to use the terms technically here, but the situation-theoretic use is instructive: the content of a term or sentence is taken to be what a use of it refers to or is about (and may differ from use to use), whereas the meaning is taken, at least approximately, to be a function from context to content, and (therefore) to remain relatively constant. So the content of "I", if you use it, would be you; whereas its meaning would (roughly) be λSPEAKER.SPEAKER. (This is approximate in part because no assumption is made in situation theory that the relationship is functional. See [5].)
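The footnote's λSPEAKER.SPEAKER has a direct computational rendering -- a minimal sketch, assuming a plist representation of contexts that I have invented for illustration:

    ;;; Meaning as a constant function from context of use to content.
    ;;; (Illustration only; not situation theory's official machinery.)
    (defparameter *meaning-of-i*
      (lambda (context) (getf context :speaker)))

    ;; The meaning stays fixed; the content varies with the context of use:
    ;; (funcall *meaning-of-i* '(:speaker smith :year 1991)) => SMITH
    ;; (funcall *meaning-of-i* '(:speaker lenat :year 1990)) => LENAT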
It's one thing to say that the word "now", for example, or the state of an
internal clock, refers to the time of its use; that doesn't bring purpose or
function into the picture. But if you go on to say that the question of whether
such a use refers to a particular recent event can't be determined except in light
of the whole social pattern of activity in which it plays a role (which, as I'll
admit in a moment, I believe), then, from the point of view of developing a
(middle-realm) theory, you are taking on a much larger task.
To see this, consider a series of examples. First, assume that the term
"bank" is ambiguous, as between financial institutions and edges of rivers.
Although neither L&F nor I have talked about ambiguity, that shouldn't be
read as implying that it is trivial. Still, let's assume it can somehow be handled.
Second, the word "today", as noted above, is also referentially plural--in the
sense of being usable to refer to many different things, depending (typically)
on the time of utterance. But "today" is indexical, not ambiguous (here's a
discriminating rule of thumb: ambiguity, but not indexicality, leads to different
dictionary entries18). As a consequence, its referential plurality (unlike that of
a truly ambiguous term) can't be resolved at the parsing or internalization
stage--so the indexicality will be inherited by the corresponding internal data
structure. Third, and different from both, is Winograd's example of "water"
[72, pp. 55-56], as used for example in the question "Is there any water in the
refrigerator?". It is this last kind of example I mean to describe as having
use-dependent meaning. In particular, depending on a whole variety of things,
the word in context could mean any of a million things: Is there literally any
H2O present in the metal-contained volume (such as in the cells of the
eggplant)? Is there any potable liquid? Has any condensation formed on the
walls? ... The point is that there is no reason to suppose these variations in
meaning could (or should) be systematically catalogued as properties of the
word (as was suggested for the referent of "today"). Instead, Winograd
suggests (and I agree) something more like this: the meaning of "water" is as
much determined by the meaning of the discourse as the meaning of the
discourse is determined by the meaning of "water".
Nothing in this view is incoherent, or even (at least necessarily) repellent to systematic analysis: imagine that semantical interpretation (including the non-effective semantical relations to the world) works in the cycle of a relaxation algorithm, influenced by a variety of forces, including the actual participatory involvement of the agent in the subject matter.
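The drift can be caricatured computationally. Here is a toy relaxation loop, entirely my own construction (invented readings, invented scoring; not a serious semantic theory), in which candidate readings of "water" are repeatedly rescored against a crude record of what the discourse supports:

    ;;; Caricature of relaxation-style interpretation (my construction only).
    ;;; Candidate readings of "water" carry scores that the context reweights.
    (defparameter *readings*
      '((h2o-anywhere . 1.0) (potable-liquid . 1.0) (condensation . 1.0)))

    (defun rescore (readings context)
      "One relaxation step: strengthen readings the context supports."
      (loop for (reading . score) in readings
            collect (cons reading
                          (if (member reading (getf context :supports))
                              (* score 1.5)
                              (* score 0.5)))))

    (defun relax (readings context &optional (steps 5))
      (if (zerop steps)
          readings
          (relax (rescore readings context) context (1- steps))))

    ;; With a thirsty speaker the potable reading pulls ahead:
    ;; (relax *readings* '(:supports (potable-liquid)))
    ;; => ((H2O-ANYWHERE . 0.03125) (POTABLE-LIQUID . 7.59375)
    ;;     (CONDENSATION . 0.03125))

A serious version would let the readings push back on the discourse record in turn; the toy runs the influence in one direction only.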
Still, use-dependent meaning does pose problems for a theorist. Take just two examples. First, it undermines
the very coherence of the notion of sound (or complete) inference; those
concepts make sense only if the semantic values of representational formulae
are conceptually independent of their role in reasoning. The problem isn't just
19 See the discussion of coordination conditions in [65] for one suggestion as to how to retain the integrity of intentional analysis (better: integrity to the notion of intentionality) in the face of this radical a theoretical revision.
20 To make this precise, you have to rule out cheats of encoding or implementation, of the following sort: Suppose there is some holistic regularity H, a function of all kinds of contextual aspects Ci, whereby complete intentional situations take on a meaning or significance M, and suppose that H is in some way parameterized on the constituent words w1, w2, etc. (which of course it will be--on even the most situated account it still matters what words you use). By a kind of inverted currying process, this can be turned into a "bottom-up" analysis, based on a meaning of the form λC1,C2, ..., Ck . fi(H) for each word wi, so that when it is all put together M results, rather in the way in which control irregularities in programming languages (like QUIT, THROW, and ERROR) are handled in denotational semantics of programming languages by treating the continuation as a component of the context. The problem with such deviousness is that it essentially reduces compositionality to mean no more than that there exists some systematic overall story.
21 Or, again, the meaning of the internal data structure or mental representation to which the
word "relentless" corresponds. Nothing I am saying here (or anywhere else in this review) hinges
on external properties of language. It's just simpler, pedagogically, to use familiar examples from
natural language than to construct what must inevitably be hypothetical internal cases. As pointed
out a few paragraphs back, of all the sorts of referential indefiniteness under review, only genuine
ambiguity can be resolved during the parsing phase.
What it does bring into question are the assumptions on which such a system
should be built, including for example the inferential viability of a system
without any access to the interpretation of its representational structures--
without, that is to say, participating in the subject matters about which it
reasons (one way in which to resolve the obvious difficulty raised by the
statement just made: that an agent know what is being said other than through
the vehicle of the saying). But I'll leave some of these speculations until a later
question.
For the time being, note merely that logic avoids this "meaning-depends-on-
use" possibility like the plague. In fact the "use = representation + inference"
aphorism reflects exactly the opposite theoretical bias: that representation
(hence meaning) is an independent module in the intentional whole.
Once again, L&F's position is similar: nothing in their paper suggests they
are prepared to make this radical a move. At one point they do acknowledge a
tremendous richness in lexical significance, but after claiming this is all
metaphor (which typically implies there is a firm "base case"), they go on to
assert, without argument, that "these layers of analogy and metaphor eventual-
ly 'bottom out' at physical--somatic--primitives: up, down, forward, back,
pain, cold, inside, seeing, sleeping, tasting, growing, containing, moving,
making noise, hearing, birth, death, strain, exhaustion ...". It's not a list I
would want to have responsibility for completing.
More seriously, the integrity of L&F's project depends on avoiding use-
dependent meaning, for the simple reason that they don't intend to consider
use (their words: "you can never be sure in advance how the knowledge
already in the system is going to be used, or added to, in the future", which
they take as leading directly to the claim that it must be represented explicitly).
If we were to take the meaning-depends-on-use stance seriously, we would be
forced to conclude that nothing in their knowledge base means anything, since
no one has yet developed a theory of its use.
I.e., L&F can't say yes to this one; it would pull the rug out from under their
entire project.
In contrast (and as expected), the embedded view embraces the possibility.
Perhaps the best way to describe the tension is in terms of method. A liberal
logicist might admit that, in natural language, meaning is sometimes use-
dependent in the ways described, but he or she would go on to claim that
proper scientific method requires idealizing away from such recalcitrant messi-
ness. My response? That such idealization throws the baby out with the
bathwater. Scientific idealization is worth nothing if in the process it obliterates
the essential texture of what one hopes to understand. And it is simply my
experience that much of the structure of argument and discourse--even, the
raison d'être of rationality--involves negotiating in an intentional space where
meanings are left fluid by our linguistic and conceptual schemes, ready to be
grounded in experience.
                                            Logic   L&F     EC
Question 4. Is consistency mandated?        yes   | no      no
22 There's one problem we can set aside. As it happens, the very notion of consistency is vulnerable to the comments made in discussing question 3 (about use-dependent meaning). Like soundness and completeness, consistency, at least as normally formulated, is founded on some notion of semantic value independent of use, which an embedded view may not support (at least not in all cases). This should at least render suspicious any claims of similarity between the two positions. Still, since they stay well within the requisite conceptual limits, it's kosher to use consistency to assess L&F on their own (not that that will absolve them of all their troubles).
theory [3, 5]. But these are at best a start. Logic famously ducks the question.
And informal attempts aren't promising: if my experience with the KRL project
can be taken as illustrative [10], the dominant result of any such attempt is to
be impressed with how seamlessly everything seems to relate to everything
else.
When all is said and done, in other words, it is unclear how L&F plan to
group, relate, and index their frames. They don't say, of course, and (in this
case) no implicit principles can be inferred. But the answer is going to matter a
lot--and not just in order to avoid inconsistency, but for a host of other
reasons as well, including search, control strategy, and driving their "analogy"
mechanism. Conclusion? That viable indexing (a daunting problem for any
project remotely like L&F's), though different from consistency, is every bit as
much in need as anything else of "middle-realm" analysis.
And as for consistency itself, we can summarize things as follows. Logic
depends on it. L&F retain it locally, but reject it globally, without proposing a
workable basis for their "partitioning" proposal. As for the embedded view (as
mentioned in footnote 22) the standard notion of consistency doesn't survive
its answer to question 3 (about use-dependent meaning). That doesn't mean,
however, that I won't have to replace it with something analogous. In
particular, I have no doubt that some notion of semantic viability, integrity,
respect for the fact that the world (not the representation) holds the weight--
something like that will be required for any palatable intentional system.
Important as contextual setting may be, no amount of "use", reasoning
processes, or consensual agreement can rescue a speaker from the potential of
being wrong. More seriously, I believe that what is required are global
coordination conditions--conditions that relate thinking, action, perception,
the passing of the world, etc., in something of an indissoluble whole. To say
more now, however--especially to assume that logic's notion can be incremen-
tally extended, for example by being locally proscribed--would be to engage in
tunneling of my own (but see [65]).
convince me) that mass nouns, plurals, or images should succumb to this
scheme in any straightforward way--or, to turn it upside down, to suppose
that, if an adequate solution were worked out within a frame-and-slot frame-
work, that the framework would contribute much to the essence of the
solution. Frames aren't rendered adequate, after all, by encoding other
representational schemes within them.23
Furthermore, one wonders whether any single representational framework--
roughly, a representation system with a single structural grammar and interpre-
tation scheme--will prove sufficient for all the different kinds of representation
an intelligent agent will need. Issues range from the tie-in to motor and
perceptual processing (early vision doesn't seem to be frame-like, for example;
is late vision?) to the seeming conflict between verbal, imagistic, and other
flavours of memory and imagination. You might view the difficulties of
describing familiar faces in words, or of drawing pictures of plots or reductio
arguments, as problems of externalizing a single, coherent, mentalese, but I
suspect they really indicate that genuine intelligence depends on multiple
representations, in spite of the obvious difficulties of cross-representational
translation.
Certainly our experience with external representations supports this conclu-
sion. Consider architecture: it is simply impossible not to be impressed with the
maze of blueprints, written specifications, diagrams, topological maps, pic-
tures, icons, annotations, etc., vital to any large construction project. And the
prospect of reducing them all to any single representational scheme (take your
choice) is daunting to the point of impossibility. Furthermore, there are
reasons for the range of type: information easily captured in one (the shape of
topological contours, relevant to the determination of building site, e.g.) would
be horrendously inefficient if rendered in another (say, English). 24
The same holds true of computation. It is virtually constitutive of competent
programming practice to be able to select (from a wide range of possibilities) a
particular representational scheme that best supports an efficient and consistent
implementation of desired behaviour. Imagine how restrictive it would be if,
instead of simply enumerating them in a list, a system had to record N user
names in an unordered conjunction of N² first-order claims.
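One way the contrast might run -- a minimal sketch, with a crude first-order rendering of my own devising (not L&F's actual notation): the list costs N conses, while the logical rendering needs an assertion per user plus a distinctness claim per ordered pair, on the order of N² formulae.

    ;;; The list encodes N user names directly:
    (defparameter *users* '(jones garcia nakamura))

    ;;; A crude first-order rendering: one USER assertion apiece, plus a
    ;;; distinctness claim for each ordered pair -- O(N^2) formulae in all.
    (defun users->claims (users)
      (append
       (loop for u in users collect `(user ,u))
       (loop for u in users append
             (loop for v in users
                   unless (eq u v)
                   collect `(not (= ,u ,v))))))

    ;; (users->claims *users*) =>
    ;; ((USER JONES) (USER GARCIA) (USER NAKAMURA)
    ;;  (NOT (= JONES GARCIA)) (NOT (= JONES NAKAMURA)) ...)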
23 As indicated in their current comments, L&F have apparently expanded their representational repertoire in recent years. Instead of relying solely on frames and slots, they now embrace, among other things: blocks of compiled code, "unparsed" digitized images, and statistical neural networks. But the remarks made in this section still largely hold, primarily because no mention is made of how these different varieties are integrated into a coherent whole. The challenge--still unmet, in my opinion--is to show how the "contents" contained in a diverse set of representational schemes are semantically commensurable, in such a way as to support a generalized, multi-modal notion of inference, perception, judgment, action. For some initial work in this direction see [6] for a general introduction, and [7] for technical details.
24 Different representational types also differ in their informational prerequisites. Pictures and graphs, for example, can't depict as little information as can English text--imagine trying to draw a picture of "either two adults or half a dozen children".
-~"The phrase is from various of John Perry's lectures given at CSLI during 1986-88.
erties. Some modern roboticists, for example, argue that action results primari-
ly from the dynamical properties of the body; the representational burden to
be shouldered by the " m i n d " , as it were, may consist only of adjustments or
tunings to those non-representational capacities (see, e.g., [55, 56]). Rhythm
may similarly as much be exhibited as encoded in the intelligent response to
music. Or even take a distilled example from LISP: when a system responds with the numeral "3" to the query "(LENGTH '(A B C))", it does so by interacting with non-representational facts, since (if implemented in the ordinary way) the list '(A B C) will have a cardinality, but not one that is represented.
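The point can be rerun in a couple of lines -- a minimal sketch of the ordinary implementation (a production LISP's LENGTH is more efficient, but rests on the same principle):

    ;;; LENGTH computes cardinality by walking the list's (non-represented)
    ;;; structure, roughly like this:
    (defun my-length (list)
      (if (null list)
          0
          (+ 1 (my-length (cdr list)))))

    ;; (my-length '(a b c)) => 3
    ;; Nowhere in '(A B C) is the fact "this list has three elements"
    ;; written down; the answer comes from interacting with the structure.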
Distinguishing representational from non-representational in any careful way
will require a better theory of representation than any we yet have.26 Given such a story, it will become possible to inquire about the extent to which
intelligence requires access to these non-formulated (non-formulable?) aspects
of the subject matter. Although it's premature to take a definite stand, my
initial sense is that there is every reason to suppose (at least in the human case)
that it does. Introspection, common sense, and even considerations of efficient
evolutionary design would all suggest that inferential mechanisms should avail
themselves of any relevant available resources, whether those have arisen
through representational channels, or otherwise. If this is true, then it follows
that a system lacking any of those other channels--a system without the right
kind of embodiment, for e x a m p l e - - w o n ' t be able to reason in the same way we
do. And so much the worse, I'd be willing to bet, for it.
How do our three players stand on this issue? I take it as obvious that L&F
require what logic assumes: that representation has to capture all that matters,
for the simple reason that there isn't anything else around. For L&F, in other
words, facts that can't be described might as well not be true, whether about
fire, sleep, internal thrashing, or the trials of committee work. They are forced
to operate under a maxim of "inexpressible → irrelevant".
In contrast, as I've already indicated, I take seriously the fact that we are
beaten up by the w o r l d - - a n d not only in intentional ways. I see no reason to
assume that the net result of our structural coupling to our environment--even
that part of that coupling salient to intelligent deliberation--is exhausted by its
representational record. And if that is so, then it seems overwhelmingly likely
that the full structure of intelligence will rely on that residue of maturation and
embodiment. So I'll claim no less for an embedded computer.
Here's a way to put it. L&F believe that intelligence can rest entirely on the
meaning of representations, without any need for correlated, non-representa-
tional experience. On the other hand, L&F also imagine their system starting to
read and distill things on its own. What will happen, however, if the writers
26 Though some requirements can be laid down: such as that any such theory have enough teeth so that not everything is representational. That would be vacuous.
                                                    Logic            L&F    EC
Question 8. Are reasoning and inference central?    yes (qualified)  yes    yes
29 All the remarks made in footnote 16 apply here: it won't do to reply that L&F could build a model of right and left inside the system, or even attach a camera, since that would fall outside their stated program for representing the world. I too (i.e., on the embedded view) would attach a camera, but I want a theory of what it is to attach a camera, and of some other things as well such as how to integrate the resulting images with conceptual representations, and how envisionment works, and how this all relates to the existence of "internal" sensors and effectors, and how it ties to action, and so on and so forth--until I get a theory that, as opposed to slots-and-frames, really does do justice to full-scale participation in the world. Cameras, in short, are just the tip of a very large iceberg.
30 To imagine the converse, furthermore, would be approximately equivalent to the proposal that programming languages do away with procedures and procedure calls, in favour of the advance storage of the sum total of all potentially relevant stack frames, so that any desired answer could merely be "read off", without having to do any work. This is no more plausible a route to intelligence than to satisfactory computation more generally. And it would raise daunting issues of indexing and retrieval--a subject for which, as discussed under question 4 (on consistency), there is no reason to suppose that L&F have any unique solution.
is only the beginning. "Inference" includes not only deduction, but induction, abduction, inference to the best explanation, concept formation, hypothesis testing--even sheer speculation and creative flights of fancy. It can hardly be denied that some such semantically coordinated processing31 is essential to intelligence.
It shouldn't be surprising, then, that inference is the one issue on which all
three positions coincide--logic, L&F, and EC. But superficial agreement
doesn't imply deep uniformity. There are questions, in each case, as to what
that commitment means.
To see this, note that any inference regimen must answer to at least two demands. The first is famous: though mechanically defined on the form or structure of the representational ingredients,32 inference must make semantic sense (that's what makes it inference, rather than ad hoc symbol mongering).
There simply must be some semantic justification, that is to s a y - - s o m e way to
see how the " f o r m a l " symbol manipulation coordinates with semantic value or
interpretation. Second, there is a question of finitude. One cannot forget, when
adverting to inference as the mechanism whereby a finite stock of representa-
tions can generate an indefinite array of behaviour, that the inference mecha-
nism itself must be compact (and hence productive). The deep insight, that is to
say, is not that reasoning allows a limited stock of information to generate an
unlimited supply of answers, but that a synchronously finite system can
manifest diachronically indefinite semantic behaviour.
Logic, of course, supplies a clear answer to the first demand (in its notion of soundness), but responds only partially to the second (hence the qualification of its positive answer in the table). A collection of inferential schemata are provided--
each demonstrably truth-preserving (the first requirement), and each appli-
cable to an indefinite set of sentences (the second). But, as AI knows so well,
something is still missing: the higher-level strategies and organizational princi-
ples necessary to knit these atomic steps together into an appropriate rational
pattern. 33 Being able to reason, that is to say, isn't just the ability to take the
right atomic steps; it means knowing how to think in the l a r g e - - h o w to argue,
how to figure things out, how to think creatively about the world. Traditional
logic, of course, doesn't address these questions. Nor--and this is the important point--is there any a priori reason to believe that that larger inferential demand can be fully met within the confines of logic's peculiar formal and
semantic conventions.
36 One thing it won't be able to shun, presumably, will be its users. See footnote 37.
39 Reducibility, as the term is normally used in the philosophy of science, is a relation between
theories; one theory is reducible to another if, very roughly, its predicates and claims can be
translated into those of another. In contrast, the term supervenience is used to relate phenomena
themselves; thus the strength of a beam would be said to supervene on the chemical bonds in the
constitutive wood. The two relations are distinguished because people have realized that, some-
what contrary to untutored intuition, supervenience doesn't necessarily imply reducibility (see [27,
33, 40, 41]).
40 As opposed to the "negative" reading: namely, that a formal computational process proceed independently of the semantics. That the two readings are conceptually distinct is obvious: that they get at different things is argued in [65].
41 I am not asking the reader to agree with this statement, without more explanation--just to admit that it is conceptually coherent.
physical realization casts its shadow. Consider one other example: the notion
of locality that separates doubly-linked lists from more common singly-linked
ones, or that distinguishes object-oriented from function-based programming
languages. Locality, fundamentally, is a physical notion, having to do with
genuine metric proximity. The question is whether the computational use is
just a metaphor, or whether the "local access" that a pointer can provide into
an array is metaphysically dependent on the locality of the underlying physics.
As won't surprise anyone, the embedded viewpoint endorses the latter pos-
sibility.
42 I am not suggesting that physical involvement with the subject matter is sufficient for original intentionality; that's obviously not true. And I don't mean, either, to imply the strict converse: that anything like simple physical connection is necessary, since we can obviously genuinely refer to things from which we are physically disconnected in a variety of ways--by distance, from other galaxies; by fact, from Santa Claus; by possibility, from a round square; by type, from the number 2. Still, I am hardly alone in thinking that some kind of causal connectivity is at least a constituent part of the proper referential story. See e.g. Kripke [43], Dretske [19], and Fodor [28].
The final question has to do with the relation between the representational
capacities of a system under investigation, and the typically much more
sophisticated capacities of its designer or theorist. I'll get at this somewhat
indirectly, through what I'll call the aspectual nature of representation.
It is generally true that if X represents Y, then there is a question of how it
represents it--or, to put it another way, of how it represents it as being. The
two phrases "The Big Apple" and "the hub of the universe" can both be used
to represent New York, but the latter represents it as something that the
former does not. Similarly, "the MX missile" and Reagan's "the Peacemaker".
The "represent as" idiom is telling. If we hear that someone knew her
brother was a scoundrel, but in public represented him as a model citizen, then
it is safe for us to assume that she possessed the representational capacity to
represent him in at least these two ways. More seriously--this is where things
can get tricky--we, qua theorists, who characterize her, qua subject, know
what it is to say "as a scoundrel", or "as a citizen". We know because we too
can represent things as scoundrels, as citizens, and as a myriad other things as
well. And we assume, in this example, that our conceptual scheme and her
conceptual scheme overlap, so that we can get at the world in the way that she
does. So long as they overlap, trouble won't arise. 43
Computers, however, generally don't possess anything remotely like our discriminatory capacities,44 and as a result, it is a very substantial question for
us to know how (from their point of view) they are representing the world as
being. For example (and this partly explains McDermott's [49] worries about
the wishful use of names), the fact that we use English words to name a
computer system's representational structures doesn't imply that the resulting
structure represents the world for the computer in the same way as that name
represents it for us. Even if you could argue that a KRYPTON node labeled
$DETENTE genuinely represented detente, it doesn't follow that it represents it
as what we would call detente. It is hard to know how it does represent it as
being (for the computer), of course, especially without knowing more about
the rest of its representational structures. 45 But one thing seems likely:
$DETENTE will mean less for the computer than "detente" means for us.
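The worry can even be made mechanical -- a toy knowledge base of my own construction, not KRYPTON's actual machinery: so far as the system's behaviour goes, the English-looking label is inert, and a gensym would serve exactly as well.

    ;;; Toy sketch (mine): the label $DETENTE does no work; only links do.
    (defparameter *kb*
      '(($detente isa easing-of-tensions)
        ($detente between superpowers)))

    (defun facts-about (node kb)
      (remove-if-not (lambda (fact) (eq (first fact) node)) kb))

    ;; (facts-about '$detente *kb*) returns the two links above -- and would
    ;; return structurally identical results if every occurrence of $DETENTE
    ;; were renamed G0042.  Whatever the node represents for the system is
    ;; borne by such links, not by the English gloss we read into the name.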
I suspect that the lure of L&F's project depends in part on their ignoring
"as" questions, and failing to distinguish theorists' and agents' conceptual
schemes. Or at least this can be said: that they are explicitly committed to not
making a distinction between the two. In fact quite the opposite is presumably
their aim: what they want, of the system they propose to build, is something
43 In logic, this required overlap of registration scheme turns up in the famous mandate that a
metalanguage used to express a truth theory must contain the predicate of the (object) language
under investigation (Tarski's convention T). Overlap of registration scheme, however, is at least
potentially a much more complex issue than one of simple language subsumption.
44 Obviously they are simpler, but the differences are probably more interesting than that. The
individuation criteria for computational processes are wildly different from those for people, and,
even if AI were to succeed up to if not beyond its wildest dreams, notions like "death" will
probably mean something rather different to machines than to us. Murder, for example, might only
be a misdemeanor in a society with reliable daily backups.
45 It would also be hard (impossible, in fact) for us to say, exactly, what representing something
as detente would mean for us--but for a very different reason. At least on a view such as that of
Cussins [15], with which I am sympathetic, our understanding of the concept "detente" is not itself
a conceptual thing, and therefore can't necessarily be captured in words (i.e., concepts aren't
conceptually constituted). Cf. the discussion of formulation in Section 2.
that we can interact with, in our own language (English), in order to learn or
shore up or extend our own understanding of the world. In order for such
interaction to work--and it is entirely representational interaction, of course--
the two conceptual schemes will have to be commensurable, on pain of
foundering on miscommunication.
Here, though, is the problem. I assume (and would be prepared to argue)
that an agent (human or machine) can only carry on an intelligent conversation
using words that represent the world in ways that are part of that agent's
representational prowess. For an example, consider the plight of a spy. No
matter how carefully you try to train such a person to use a term of high-energy
physics, or the language of international diplomacy, subsequent conversations
with genuine experts are almost sure to be awkward and "unintelligent" (and
the spy therefore caught!) unless the spy can genuinely come to register the
world in the way that competent users of that word represent the world as
being.
It follows, then, that L&F's project depends for its success on the consonance of its and our conceptual schemes. Given that, the natural question to
ask is whether the sketch they present of its construction will give it that
capacity. Personally, I doubt it, because, like Evans [25], I am convinced that
most common words take their aspectual nature not only from their "hook-up" to other words, but also from their direct experiential grounding in what they are
about. And, as many of the earlier questions have indicated, L&F quite clearly
don't intend to give their system that kind of anchoring.
So once again we end up with the standard pattern. Neither traditional logic nor L&F takes up such issues, presuming instead on what may be an unwarranted belief in similarity. It is characteristic of the embedded view to take
the opposite tack; I don't think we'll ever escape from surprises and charges of
brittleness until we take seriously the fact that our systems represent the world
differently from us.
5. Conclusion
To take representing the world seriously (it's world representation, after all,
not knowledge representation, that matters for AI) is to embrace a vast space
of possibilities. You quickly realize that the intellectual tools developed over
the last 100 years (primarily in aid of setting logic and meta-mathematics on a
firm foundation) will be about as much preparation as a good wheel-barrow
would be for a 24-hour dash across Europe. The barrow shouldn't be knocked;
there are good ideas there--such as using a wheel. It's just that a little more is
required.
So there you have it. L&F claim that constructed intelligence is "within our
grasp". I think it's far away. They view representation as explicit--as a matter
of just writing things down. I take it as an inexorably tacit, contextual,
embodied faculty that enables a participatory system to stand in relation to
what is distal, in a way that it must constantly coordinate with its underlying
physical actions. L&F think you can tunnel directly from generic insight to
system specification. I feel we're like medieval astrologers, groping towards
our (collective?) Newton, in a stumbling attempt to flesh out the theoretical
middle realm. There is, though, one thing on which we do agree: we're both
enthusiastic. It's just that I'm enthusiastic about the work that lies ahead; L&F
seem enthusiastic that it won't be needed.
Why?--why this difference? Of many reasons, one goes deep. From my
46 Taken from a letter Yeats wrote to a friend shortly before his death. Dreyfus cites the passage at the conclusion of the introduction to the revised edition of his What Computers Can't Do [21, p. 66]; it has also been popularized on a poster available from Cody's Books in Berkeley.

References
[19] F. Dretske, Knowledge and the Flow of Information (MIT Press, Cambridge, MA, 1981).
[20] F. Dretske, Explaining Behavior: Reasons in a World of Causes (MIT Press/Bradford Books,
Cambridge, MA, 1988).
[21] H.L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence (Harper & Row,
New York, rev. ed., 1979).
[22] H.L. Dreyfus, From micro-worlds to knowledge representation: AI at an impasse, in: J.
Haugeland, ed., Mind Design: Philosophy, Psychology, Artificial Intelligence (MIT Press,
Cambridge, MA, 1981) 161-205.
[23] H.L. Dreyfus, ed., Husserl, Intentionality, and Cognitive Science (MIT Press, Cambridge,
MA, 1982).
[24] H.L. Dreyfus and S.E. Dreyfus, Mind over Machine: The Power of Human Intuition and
Expertise in the Era of the Computer (Macmillan/Free Press, New York, 1985).
[25] G. Evans, The Varieties of Reference (Oxford University Press, Oxford, 1982).
[26] R. Fagin and J.Y. Halpern, Belief, awareness, and limited reasoning, in: Proceedings
IJCAI-85, Los Angeles, CA (1985) 491-501.
[27] J.A. Fodor, Special sciences (or: the disunity of science as a working hypothesis), Synthese 28
(1974) 97-115; reprinted in: N. Block, ed., Readings in the Philosophy of Psychology
(Harvard University Press, Cambridge, MA, 1980) 120-133.
[28] J.A. Fodor, Psychosemantics (MIT Press/Bradford Books, Cambridge, MA, 1987).
[29] D. Gentner and D. Gentner, Flowing waters or teeming crowds: Mental models of electricity,
in: D. Gentner and A. Stevens, eds., Mental Models (Erlbaum, Hillsdale, NJ, 1983).
[30] B.J. Grosz and C.L. Sidner, Attention, intentions, and the structure of discourse, Comput.
Linguistics 12 (3) (1986) 175-204.
[31] J. Haugeland, Semantic engines: introduction to mind design, in: J. Haugeland, ed., Mind
Design: Philosophy, Psychology, Artificial Intelligence (MIT Press, Cambridge, MA, 1981)
1-34.
[32] J. Haugeland, ed., Mind Design: Philosophy, Psychology, Artificial Intelligence (MIT Press,
Cambridge, MA, 1981).
[33] J. Haugeland, Weak supervenience, Am. Philos. Q. 19 (1) (1982) 93-103.
[34] P.J. Hayes, The second naive physics manifesto, in: J.R. Hobbs and R.C. Moore, eds.,
Formal Theories of the Commonsense World (Ablex, Norwood, NJ, 1985) 1-36.
[35] P.J. Hayes, Naive physics I: ontology for liquids, in: J.R. Hobbs and R.C. Moore, eds.,
Formal Theories of the Commonsense World (Ablex, Norwood, NJ, 1985) 71-107.
[36] J.R. Hobbs and R.C. Moore, eds., Formal Theories of the Commonsense World (Ablex,
Norwood, NJ, 1985).
[37] J.R. Hobbs et al., Commonsense summer: final report, Tech. Rept. CSLI-85-35, Stanford
University, Stanford, CA (1985).
[38] D.J. Israel, What's wrong with non-monotonic logic?, in: Proceedings AAAI-80, Stanford,
CA (1980).
[39] L. Kaelbling, An architecture for intelligent reactive systems, in: M.P. Georgeff and A.L.
Lansky, eds., Reasoning about Action and Plans: Proceedings of the 1986 Workshop (Morgan
Kaufmann, San Mateo, CA, 1987) 395-410.
[40] J. Kim, Supervenience and nomological incommensurables, Am. Philos. Q. 15 (1978)
149-156.
[41] J. Kim, Causality, identity, and supervenience in the mind-body problem, Midwest Stud.
Philos. 4 (1979) 31-49.
[42] D. Kirsh, When is information explicitly represented?, in: P. Hanson, ed., Information,
Language, and Cognition, Vancouver Studies in Cognitive Science 1 (University of British
Columbia Press, Vancouver, BC, 1990) 340-365.
[43] S.A. Kripke, Naming and Necessity (Harvard University Press, Cambridge, MA, 1980).
[44] J. Lave, Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life (Cambridge
University Press, Cambridge, 1988).
[45] H.J. Levesque, A logic of implicit and explicit belief, in: Proceedings AAAI-84, Austin, TX
(1984) 198-202.
[46] D.M. Levy, D.C. Brotsky and K.R. Olson, Formalizing the figural, in: Proceedings ACM
Conference on Document Processing Systems, Santa Fe, NM (1988) 145-151.
[47] J. McCarthy and P.J. Hayes, Some philosophical problems from the standpoint of artificial
intelligence, in: B. Meltzer and D. Michie, eds., Machine Intelligence 4 (American Elsevier,
New York, 1969) 463-502.
[48] J.L. McClelland, D.E. Rumelhart and the PDP Research Group, eds., Parallel Distributed
Processing: Explorations in the Microstructure of Cognition 2: Psychological and Biological
Models (MIT Press/Bradford Books, Cambridge, MA, 1986).
[49] D.V. McDermott, Artificial intelligence meets natural stupidity, in: J. Haugeland, ed., Mind
Design: Philosophy, Psychology, Artificial Intelligence (MIT Press, Cambridge, MA, 1981)
143-160.
[50] T. Nagel, The View from Nowhere (Oxford University Press, Oxford, 1986).
[51] D.A. Norman, The Psychology of Everyday Things (Basic Books, New York, 1988).
[52] A. Ortony, ed., Metaphor and Thought (Cambridge University Press, Cambridge, 1979).
[53] J. Perry, The problem of the essential indexical, Noûs 13 (1979) 3-21.
[54] J. Perry and D. Israel, What is information?, in: P. Hanson, ed., Information, Language, and
Cognition, Vancouver Studies in Cognitive Science 1 (University of British Columbia Press,
Vancouver, BC, 1990) 1-19.
[55] M.H. Raibert, Legged robots, Commun. ACM 29 (6) (1986) 499-514.
[56] M.H. Raibert and I.E. Sutherland, Machines that walk, Sci. Am. 248 (1) (1983) 44-53.
[57] S. Rosenschein, Formal theories of knowledge in AI and robotics, New Generation Comput. 3
(4) (1985).
[58] S. Rosenschein and L. Kaelbling, The synthesis of digital machines with provable epistemic
properties, in: Proceedings Workshop on Theoretical Aspects of Reasoning about Knowledge
(Morgan Kaufmann, Los Altos, CA, 1986); also: Tech. Rept. CSLI-87-83, Stanford University, Stanford, CA (1987).
[59] D.E. Rumelhart, J.L. McClelland and the PDP Research Group, eds., Parallel Distributed
Processing: Explorations in the Microstructure of Cognition 1: Foundations (MIT Press/
Bradford Books, Cambridge, MA, 1986).
[60] J.R. Searle, Minds, brains, and programs, Behav. Brain Sci. 3 (1980) 417-424; reprinted in:
J. Haugeland, ed., Mind Design: Philosophy, Psychology, Artificial Intelligence (MIT Press,
Cambridge, MA, 1981) 282-306.
[61] J.R. Searle, Minds, Brains, and Science (Harvard University Press, Cambridge, MA, 1984).
[62] B.C. Smith, Prologue to "Reflection and semantics in a procedural language", in: R.J.
Brachman and H.J. Levesque, eds., Readings in Knowledge Representation (Morgan Kauf-
mann, Los Altos, CA, 1985) 31-39.
[63] B.C. Smith, Varieties of self-reference, in: J.Y. Halpern, ed., Theoretical Aspects of Reasoning
about Knowledge: Proceedings of the 1986 Conference (Morgan Kaufmann, Los Altos, CA,
1986).
[64] B.C. Smith, The semantics of clocks, in: J. Fetzer, ed., Aspects of Artificial Intelligence
(Kluwer Academic Publishers, Boston, MA, 1988) 3-31.
[65] B.C. Smith, A View from Somewhere: An Essay on the Foundations of Computation and
Intentionality (MIT Press/Bradford Books, Cambridge, MA, to appear).
[66] P. Smolensky, On the proper treatment of connectionism, Behav. Brain Sci. 11 (1988) 1-74.
[67] L.A. Suchman, Plans and Situated Actions (Cambridge University Press, Cambridge, 1986).
[68] A. Tarski, The concept of truth in formalized languages, in: A. Tarski, ed., Logic, Semantics,
Metamathematics (Clarendon Press, Oxford, 1956) 152-197.
[69] T. Winograd, Moving the semantic fulcrum, Tech. Rept. CSLI-84-77, Stanford University,
Stanford, CA (1984).
[70] T. Winograd, Thinking machines: Can there be? Are we?, Tech. Rept. CSLI-87-100, Stanford
University, Stanford, CA (1987).
[71] T. Winograd, Three responses to situation theory, Tech. Rept. CSLI-87-106, Stanford
University, Stanford, CA (1987).
[72] T. Winograd and F. Flores, Understanding Computers and Cognition: A New Foundation for
Design (Ablex, Norwood, NJ, 1986).