AGAINST SEMANTIC FORMALISM


Arnold vander Nat, Fall 1999

Semantic Formalism. This paper is a foundational inquiry into the nature of word meaning. Our goal is a modest one: to prove that a certain thesis regarding word meaning is false. Admittedly, the thesis is judged implausible in many quarters. But it is nevertheless instructive to consider it. The thesis addressed is a fundamental one, and it is difficult not to beg the question in arguing against it. Our result has an application to the areas of language theory, the foundations of mathematics, artificial intelligence theory, and theoretical systems generally, and it warrants caution whenever one encounters positions that even seem to echo this false position.

The thesis at issue finds a home in the philosophical movement of structuralism, and in particular, in the structuralist theory of language, as formulated by Saussure, Lévi-Strauss, and more recent authors. On the structuralist view, a language is a self-contained, autonomous, formal system such that all of its elements (phonemes, words, phrases, sentences) are completely characterized by their relations to all the other elements of the system. Such relations are described as constituting a system of 'oppositions' among all the terms of the language. We will explore the nature of these relations in greater detail below. According to this view, the meaning of words can be analyzed only in terms of the internal relations that characterize a language. (1) Structuralism is explicit in rejecting the view that anything outside of language can be the basis of what words mean. (2)

But structuralism is not the only position that embraces the false thesis. Formalism in mathematics proposes that mathematical terms (such as "number," "successor," "plus," "zero," and so on) have a meaning that is determined in its entirety by the syntax and axioms of the mathematical system that incorporates them. Likewise, formalism in scientific theory proposes that theoretical terms (such as "electron," "orbital," "photon," "quark," "charm," and so on) have a meaning that is determined solely by the syntax and axioms of the scientific theory that incorporates them. More generally, formalisms about certain language fragments (including the ones just mentioned, as well as "language games" views) propose that the meaning of terms of a language fragment is completely determined by the syntax and special rules of the fragment that incorporates them. All such formalisms are false.

We take semantic formalism to be the following position:

(SF) The meaning that the words of a language have derives solely from the internal relations that all the words of the language have to all the other words of the language.

Semantic formalism is a strong thesis regarding what word-meaning is inherently. A language is defined in its entirety by its words and the relations (described below) that are specified in the language among the words: given a specification of relations in the language, all words just have a meaning, a meaning that exists in virtue of these relations. (3) One important implication of this thesis is that given two different instances of the same system of word relations, the meaning of the words must be the same. Were English to be transported in its entirety to a different time and place, transported with all its words and all the internal relations that all the words have to all other words, the words would mean then what they mean now. Where the relations exist, the meaning must exist. Seen in this light, semantic formalism takes the following form: (4)

(SF1) Two languages that have the same system of internal relations among all their words have the same meaning for all their words, that is,

(SF1b) for any possible situations w1 and w2 with languages L1 and L2 respectively, if L1 and L2 have the same system of internal relations among all their words, then the meaning that the words of L1 have in w1 is the same as the meaning that the words of L2 have in w2.

By implication, semantic formalism is also a thesis about what word-meaning is not. The empirical circumstances that surround a language do not determine what its words mean: the meaning of words is not determined by the thoughts, concepts, or intentions of the speakers, nor by their experience, nor by things that they find in the world. Only the language itself determines what words mean. Moreover, semantic formalism is a thesis that applies to languages generally. There is no reference to the size of the vocabulary set, the types of grammatical expressions, the specific rules of grammar, the particular means of speech production, or the particular linguistic circumstances of the speakers, as being the conditions under which a certain kind of meaning arises.

Words that have a definition have a meaning that consists of relations to other words, as specified in the definition. But, these are not the kind of internal relations that the semantic formalist has in mind. Most words do not have a verbal definition. They do, however, have an enormous number of relations to other words. Each assertable sentence determines some kind of relation among all the words in it (e.g., the relation of co-position in an assertable sentence), and two such sentences that share a word establish relations of some kind among all the words of both sentences. There is thus an enormous network of relations among all the words of the language, and one may plausibly consider that this network gives every word some kind of meaning. (5)

One can also consider the formalist thesis with respect to what may be called logical words, or non-descriptive words, such as "every," "some," "not," "if," "and," "or," and others. The meaning of these words seems to be of a kind that can only be described by reference to global features of the language. For example, a language may be such that any sentence of the form "if not everything is A, then something is not A" is assertable in virtue of that very form. One may plausibly consider, then, that the non-descriptive words of a language are meaningful in virtue of the formal structure of the language.

But, whatever the merits of semantic formalism, the thesis is nonetheless false, as the argument that we present below demonstrates. We do not ourselves argue for a positive account of what word-meaning is, except by implication that, since semantic formalism is false, an account of word-meaning must appeal to some source of meaning that is external to language. The argument against semantic formalism (presented below in detail) goes something like the following:

When I say something like "I see elephants," I am talking about elephants and not numbers. But, if semantic formalism is true, then, when I say "I see elephants," I am talking about numbers. So, semantic formalism is false.

Languages. For our purposes, it will be sufficient to present a very broad characterization of natural languages. We take a language to be a structured set of expressions (6) consisting of (1) a vocabulary set, (2) a set of well-formed expressions, (3) a set of asserted sentences, (4) a set of assertable sentences, and (5) a meaning relation for its words. This characterization may not be one that a linguist would give, but it is accurate and presents for our scrutiny those features that are important in this discussion. We consider these features in turn.

(1) A language has a number of specified words, which form its vocabulary set and which are taken to be well-formed expressions. Words are finite strings of physical tokens (such as letters). Most words may be classified as descriptive, with an ordinary meaning, while the others are non-descriptive, with a formal meaning. We discuss this matter below.

(2) The other well-formed expressions of the language are phrases and sentences. These are finite strings of words that conform to certain rules of well-formedness, in the sense that certain complex expressions are well-formed in virtue of the well-formedness of their component parts according to such a rule. (7) In virtue of the well-formedness of individual words, phrases are well-formed, and in virtue of the well-formedness of phrases, sentences are well-formed. The rules of well-formedness that characterize a language can be stated in some language, but a language does not itself need to have sentences that state the rules of well-formedness that characterize it, and the simple languages that we consider do not. The set, then, of expressions that constitutes a language is the set of all expressions that can be formed from the vocabulary set, in conformity with the rules of well-formedness that characterize the language. We note that this set contains as a proper subset all the well-formed expressions that have actually been formed by the users of the language. (8)

(3) A language has certain sentences that have a distinguished status of being publicly asserted by the speakers of the language and that form the assertion set of the language. In this sense languages are indicative. Speakers of a language take certain sentences to be true, such as, perhaps, "every elephant is an animal," "every American citizen has constitutional rights," "the Moon orbits the Earth," and so on. So as not to beg a question with regard to semantic formalism, in characterizing a language as having an assertion set we do not appeal to any special, empirical circumstance in virtue of which sentences are asserted. Everyone can agree that certain sentences are asserted in a language. (9)

(4) A language has certain sentences that have the status of being assertable and that form the assertability set of the language. Besides being indicative, languages are also inferential in the sense that certain sentences may be asserted in the presence of other asserted, or assertable, sentences according to rules of inference. (10) For example, a language may be such that if sentences S and "if S then T" are assertable, then the sentence T is assertable. Such rules of inference can be stated in some language, but a language does not itself need to have sentences that state any rules of inference that characterize it, and the simple languages that we consider do not. We note that the assertion set is always a proper subset of the assertability set. (11) Also, the addition of a new assertion to the assertion set (one that is not inferred) not only augments the assertability set by all its consequences but also changes the structure of the language by introducing new internal word relations.
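To make this inferential structure concrete, here is a minimal sketch, in Python, under our own simplifying assumptions: atomic sentences are plain strings, a conditional "if S then T" is represented by the tuple ('if', S, T), and modus ponens is the only rule of inference. The sketch merely illustrates how an assertion set generates a larger assertability set; it is not part of the characterization of a language given above.

    # Illustrative sketch only: close an assertion set under modus ponens.
    # Sentences are strings; a conditional "if S then T" is ('if', S, T).

    def close_under_modus_ponens(assertions):
        """Return the assertability set generated from the assertion set."""
        assertable = set(assertions)
        changed = True
        while changed:
            changed = False
            for sentence in list(assertable):
                if isinstance(sentence, tuple) and sentence[0] == 'if':
                    _, antecedent, consequent = sentence
                    if antecedent in assertable and consequent not in assertable:
                        assertable.add(consequent)
                        changed = True
        return assertable

    asserted = {
        "Jumbo is an elephant",
        ('if', "Jumbo is an elephant", "Jumbo is an animal"),
    }
    assertable = close_under_modus_ponens(asserted)

    # "Jumbo is an animal" is assertable though never asserted, so the
    # assertion set is a proper subset of the assertability set.
    assert "Jumbo is an animal" in assertable
    assert "Jumbo is an animal" not in asserted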

(5) All the words of a language have some meaning. Everyone can agree that languages have an essential function, essential in the sense that if a formal system did not satisfy this function the system would be devoid of linguistic concerns. One way to characterize this function is to say that language is a means whereby people can effectively communicate with one another. In this paper we are neutral on how effective communication is best understood, but at a minimum one must say that this function requires (i) that speakers use meaningful words, and (ii) that speech elicits an effective public response, that is, behavior on the part of the speakers that is appropriate in the public context. To all of this we must add that the semantic formalist insists that the meaning of words derives solely from the internal relations of the language that words have to one another. We turn to a closer examination of this matter.

Internal relations and formal meaning. With the above understanding of languages, we can concretely address the formalist's notion of internal relations that words have to words. First of all, languages are made up of well-formed expressions. One can say, therefore, that in virtue of their grammatical correctness, sentences and phrases bestow on each of the words occurring in them a certain grammatical significance, and in as much as sentences and phrases share words and patterns of words, such grammatical significance is thereby compounded. Quantifiers, proper names, verbs, connectives, and so on, all have a grammatical meaning that derives from well-formedness relations among all words.

Secondly, while every sentence introduces some grammatical relation among all its words, every asserted sentence introduces a special relation among its words. Consider the words "elephant" and "tree." Every well-formed expression containing the one word is also well-formed when the other word replaces it. So, the two sentences "every elephant is an animal" and "every tree is an animal" are not distinguished by any well-formedness relations, and neither are the sentences "every elephant is a plant" and "every tree is a plant." And yet, these sentences are distinguished in the language, as one is asserted and the other is not. So, each asserted sentence introduces a special assertion relation, that is, a relation among words that is not available apart from such assertion. And, since different asserted sentences share words, a language is characterized by a network of such assertion relations that itself constitutes relations of a more comprehensive kind. (12)

Thirdly, assertable sentences also provide assertability relations. These include not only the assertion relations just mentioned but their inferential consequences as well. With this threefold understanding of internal relations, we may restate the thesis of semantic formalism:

(SF2) The meaning that the words of a language have derives solely from the well-formedness relations, the assertion relations, and the assertability relations that all the words of the language have within it.

We may thus agree with the semantic formalist that the internal relations of a language generate a certain meaning for each of its words. Grammatical meaningfulness, assertion, and assertability constitute a global context in which expressions have the meaning they have. But, against the semantic formalist, we would propose that such meaning is, as it were, but an underlying layer of meaning, distinct from the meaning proper that most words have. We propose to call the meaning generated by the internal relations of a language the formal meaning of the words of the language.

The formal meaning of the words of a language is the meaning that words, phrases, and sentences have in virtue of the well-formedness relations, the assertion relations, and the assertability relations that all words of a language have within it.

We make some observations regarding formal meaning. First, it is clear from our discussion that the formal meaning of a language is not something that comes in individual parcels assigned to individual expressions. What we have here is a full-blown holism: the language in its entirety provides each expression with its formal meaning. One way to articulate the scope of this kind of meaning is to say that the formal meaning of a word W is L(W/x), where L(W/x) is the entire language L with the word W everywhere replaced by the free variable x. Secondly, and as a consequence of what was just noted, we observe that when any word is added to the vocabulary set of a language (together with its grammatical function), the language is changed, so that the formal meaning of each well-formed expression changes, if only ever so slightly. Thirdly, we observe that the bulk of formal meaning derives almost completely from the assertion and assertability relations of a language and that the well-formedness relations contribute to the formal meaning only to a small extent. Two common nouns, or proper names, or verbs, are generally not distinguishable in meaning by well-formedness relations, as we noted above.
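The notation L(W/x) can be given a toy rendering. In the following sketch (a Python illustration under our own idealization that a language is just a list of sentences and that words are whitespace-separated tokens), the formal meaning of a word is the whole language with that word replaced everywhere by a free variable:

    # Illustrative sketch of L(W/x): the entire language L with the word W
    # everywhere replaced by the free variable x.

    def formal_meaning(language, word, variable="x"):
        """Return L(W/x) for the given word W."""
        return [" ".join(variable if token == word else token for token in sentence.split())
                for sentence in language]

    L = ["every elephant is an animal",
         "some elephant fears some mouse",
         "Jumbo is an elephant"]

    print(formal_meaning(L, "elephant"))
    # ['every x is an animal', 'some x fears some mouse', 'Jumbo is an x']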

Ordinary meaning. Every theory of language must accommodate a distinction between two kinds of words: descriptive words, like "tree," "elephant," "write," "theory," "yellow," etc., and non-descriptive words, like "not," "and," "or," "if," "every," "some," "necessarily," and others. The distinction exists at the syntactic level, in as much as the rules of well-formedness distinguish these two groups of words. But more importantly, the distinction exists at the semantic level, in as much as all descriptive words have an ordinary meaning, while all non-descriptive words do not. Given the limited purpose of this paper, we content ourselves with the following broad characterization of ordinary meaning. (13)

The descriptive words of a language have an ordinary meaning in the sense that the speakers of the language have a publicly shared understanding of these words. Speakers take them as having a certain content and being about something.

The function of a language is effective communication, and the meaning that words have must enable speakers to accomplish such communication, as we discussed earlier. We are proposing here that, in addition to formal meaning, ordinary meaning is also required to achieve this end. (In this regard, structuralists would say that the essential function of language is signification, and that words signify a certain understanding on the part of the speakers, albeit an understanding that derives from the word relations that characterize the language.) We simply note that the speakers of a language mean something quite ordinary when they speak their words. Speakers of English take the sentence "all elephants are gray" to be about elephants, and they take it to say that all elephants are gray. Moreover, they do not take the sentence to be about numbers, and they do not take the number 29 to be an example of a gray elephant.

This characterization of meaning is intended to be neutral about how such content is analytically to be understood, or how it arises in language. In particular, the characterization is intended to be compatible with the formalist's view that all meaning, including ordinary meaning, derives solely from the internal relations that words have to each other. It is clear that semantic formalism can now be restated as follows:

(SF3) The ordinary meaning that the words of a language have derives solely from the formal meaning that all the words have, that is,

(SF3b) For any situations w1 and w2 with languages L1 and L2 respectively, if the words of L1 and L2 have the same formal meaning, then the words of L1 and L2 have the same ordinary meaning.

Simple languages. We draw attention to a certain group of languages that we, for convenience, call simple languages. Such a language is one (i) whose syntax is suitably similar to that of first-order predicate logic, (14) so that the sentences of the one are translatable into sentences of the other; (ii) whose assertability set includes at least all sentences that are logical tautologies and is also closed under the rule of modus ponens; and (iii) whose assertability set is consistent in not containing two contradictory sentences S and "not S". A simple language differs from first-order predicate logic in that it contains assertions that are not logical tautologies, e.g., "some elephants live in Chicago's Lincoln Park Zoo." In addition, the descriptive words of a simple language have an ordinary meaning, while the corresponding expressions of first-order predicate logic do not.

For the sake of concreteness, we select a simple language, which we call simple English, and which has as its vocabulary the suitably restricted, first-order fragment of the vocabulary of English, with the usual grammatical categories, namely: (i) proper names, (ii) personal and relative pronouns, (iii) common nouns, (iv) quantifier words, (v) adjectives, (vi) verbs combined, or not, with various prepositions or adverbs, (vii) connectives, and (viii) functional operators. (15) The well-formed expressions of this language are those that can be formed from its vocabulary by the usual rules of well-formedness of English. Although the language is restricted to the first-order fragment of English, it is nevertheless expressively powerful, adequate for mathematics, the natural and social sciences, and the bulk of ordinary speech. (16) What is important about simple languages such as simple English is that their vocabulary contains descriptive words with an ordinary meaning and that an important theoretical result has been proven regarding them, as we shall see.

To illustrate the preceding discussion and the argument of this paper, we consider a small fragment of simple English. The vocabulary and the assertion set of this fragment are as presented, and the rules of well-formedness and inference are the usual ones. We then apply a special method to this fragment, a method that in no way depends on the fragment's size. The details of the method are tailored to the fragment at hand, but these details are ones that can be adjusted as one considers more complicated cases.

The simple elephant-fragment

  1. Every elephant is an animal.
  2. Every elephant has one trunk.
  3. Every elephant has one tail.
  4. Every elephant has two ears.
  5. Every elephant has four feet.
  6. Some elephant has two tusks.
  7. Some elephant fears some mouse.
  8. Jumbo is an elephant.
  9. Every mouse is an animal.
10. Every elephant is not a mouse.
11. Every animal is not a trunk, tail, ear, foot, or tusk.
12. Every trunk is not a tail, ear, foot, or tusk.
13. Every tail is not an ear, foot, or tusk.
14. Every ear is not a foot or tusk.
15. Every foot is not a tusk.
16. Having is not fearing.

Interpretations and models. We are familiar with interpretations of strange or unknown words, phrases, and stories. Our interest here is the interpretation of an entire language. The following is a general characterization of this notion. An interpretation of a language is an assignment function that associates every well-formed expression of the language with some interpreted value. Words and phrases receive a value from some specified domain of objects, and sentences receive a truth-value, true or false. The items of the domain form some system of properties and relations, which in practice is either familiar and unspecified, or unfamiliar and specified. (For example, the domain may consist of the animals in the Lincoln Park Zoo.) An interpretation in the natural numbers has as its domain the set of natural numbers {0, 1, 2, 3, . . .} and assigns to each descriptive word either a natural number or an item consisting of natural numbers. All interpretations are required to assign values to expressions in accordance with the conditions that:

(i) the interpreted value of each descriptive word is some specified item from the domain; non-descriptive words need not have an interpreted value, as they function only to select a rule for determining the interpreted value of complex expressions involving them; and

(ii) the interpreted value of each phrase and sentence is an item determined by a specified rule, corresponding to the arrangement of the expression, applied to the interpreted values of the component parts of the expression.

For example, in an interpretation in the natural numbers with "elephant" and "mouse" having the values {11,12,13,14} and {21,22,23,24} respectively, the sentence "every elephant is not a mouse" has the value true, since for every number n in the domain, if n is a member of the value of "elephant" then n is not a member of the value of "mouse." A model for a language is an interpretation for the language that satisfies the assertability preservation condition: each sentence in the assertability set of the language has the interpreted value true. When the rules of inference of a language are truth-preserving, the assertability preservation condition is the same as the condition that each sentence of the assertion set has the interpreted value true.
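The example just given can be written out mechanically. The following sketch (a Python illustration; the domain, the helper name, and the encoding are our own choices, not part of the characterization above) evaluates "every elephant is not a mouse" under an interpretation that assigns "elephant" and "mouse" the values {11,12,13,14} and {21,22,23,24}:

    # Illustrative sketch of an interpretation in the natural numbers.
    # Descriptive words receive sets of numbers; the non-descriptive words
    # "every ... is not ..." select the rule by which truth is computed.

    DOMAIN = set(range(100))                      # a small numerical domain
    I = {"elephant": {11, 12, 13, 14},
         "mouse":    {21, 22, 23, 24}}

    def every_A_is_not_B(A, B):
        """True iff no member of I(A) is a member of I(B)."""
        return all(n not in I[B] for n in DOMAIN if n in I[A])

    print(every_A_is_not_B("elephant", "mouse"))  # True under this interpretation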

We note that an interpretation, if it exists, does not depend for its existence on any kind of selection by the speakers of the language. (17) It is rather a necessary consequence of the similarity of the structure of the language and the structure of the domain. On the other hand, it may turn out that the speakers of a language do contingently select an interpretation of the language. We also note that an interpretation does not need to be construed as forming a component part of a language, so that a language having an interpretation is consistent with the formalists' claim that word meaning depends only on the internal relations of words.

Consider now the interpretation I. The domain is the set of natural numbers. The descriptive words of the simple elephant-fragment are assigned the values indicated, and the non-descriptive words are interpreted by rules in the standard way:

I("Jumbo") = the number 1900
I("animal") = {0,100,200,300, . . . . . , k×100, . . . }, i.e., the set of multiples of 100
I("mouse") = {800,1800,2800, . . . . . , k×1000+800, . . . }
I("elephant") = {900,1900,2900, . . . . . , k×1000+900, . . . }
I("trunk") = {901,1901,2901, . . . . . , k×1000+901, . . . }
I("tail") = {911,1911,2911, . . . . . , k×1000+911, . . . }
I("ear") = {921,922,1921,1922,. . , k×1000+921, k×1000+922, . . . }
I("foot") = {931,932,933,934,. . . . , k×1000+931, k×1000+932, k×1000+933, k×1000+934, . . . }
I("tusk") = {941,942,2941,2942,. . , k×2000+941, k×2000+942, . . . }
I("has") = the relation of having an absolute difference less than 50, i.e., | x - y | < 50
I("fears") = the relation of having a sum less than 5000, i.e., x + y < 5000
I(R one A) = I(R) with exactly one number in I(A)
I(R two A) = I(R) with exactly two numbers in I(A)
I(R four A) = I(R) with exactly four numbers in I(A)

One may easily show that this interpretation is a model for the simple elephant-fragment given above, by showing that all the sentences of the assertion set (1)-(16) are true under this interpretation, that is, correspond to the indicated arithmetical truths. To illustrate, we verify the following sentences:

"every elephant is an animal" is true under I, since

        every member of I("elephant") is a member of I("animal"), since

        every k×1000 + 900 is some l×100.

"some elephant fears some mouse" is true under I, since

        some member of I("elephant") I("fears") some member of I("mouse"), since

        some k×1000 + 900 has a sum less than 5000 with some l×1000 + 800.

"every elephant has one trunk" is true under I, since

        every member of I("elephant") I("has") with exactly one number in I(trunk), since

        every member of I("elephant") has an absolute difference less than 50 with exactly one number in I(trunk),

        every k×1000 + 900 has an absolute difference less than 50 with exactly one m×1000 + 901.
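The verification of the whole assertion set can also be mechanized. The sketch below (a Python spot check under our own encoding: each descriptive word becomes a characteristic function on the natural numbers, and the quantifiers range over the bounded domain 0-9999, since the sets assigned by I are infinite) checks that assertions 1-16 of the elephant-fragment come out true under I. A bounded check of this kind illustrates, rather than proves, that I is a model of the fragment.

    # Illustrative spot check that the interpretation I is a model of the
    # simple elephant-fragment. Each descriptive word is encoded as a
    # characteristic function on natural numbers; checks range over 0..9999.

    DOMAIN = range(10000)

    def animal(n):   return n % 100 == 0                      # I("animal")
    def mouse(n):    return n % 1000 == 800                   # I("mouse")
    def elephant(n): return n % 1000 == 900                   # I("elephant")
    def trunk(n):    return n % 1000 == 901                   # I("trunk")
    def tail(n):     return n % 1000 == 911                   # I("tail")
    def ear(n):      return n % 1000 in (921, 922)            # I("ear")
    def foot(n):     return n % 1000 in (931, 932, 933, 934)  # I("foot")
    def tusk(n):     return n % 2000 in (941, 942)            # I("tusk")
    def has(x, y):   return abs(x - y) < 50                   # I("has")
    def fears(x, y): return x + y < 5000                      # I("fears")
    JUMBO = 1900                                               # I("Jumbo")

    def every(A, B):
        return all(B(n) for n in DOMAIN if A(n))

    def some(A, B):
        return any(B(n) for n in DOMAIN if A(n))

    def exactly(R, A, k):
        """I(R k A): x stands in I(R) to exactly k numbers in I(A)."""
        return lambda x: sum(1 for y in DOMAIN if A(y) and R(x, y)) == k

    assert every(elephant, animal)                             # 1
    assert every(elephant, exactly(has, trunk, 1))             # 2
    assert every(elephant, exactly(has, tail, 1))              # 3
    assert every(elephant, exactly(has, ear, 2))               # 4
    assert every(elephant, exactly(has, foot, 4))              # 5
    assert some(elephant, exactly(has, tusk, 2))               # 6
    assert some(elephant, lambda x: any(mouse(y) and fears(x, y) for y in DOMAIN))  # 7
    assert elephant(JUMBO)                                     # 8
    assert every(mouse, animal)                                # 9
    assert every(elephant, lambda n: not mouse(n))             # 10

    parts = [trunk, tail, ear, foot, tusk]
    assert every(animal, lambda n: not any(p(n) for p in parts))          # 11
    for i, p in enumerate(parts):                                         # 12-15
        for q in parts[i + 1:]:
            assert every(p, lambda n, q=q: not q(n))
    assert any(has(x, y) != fears(x, y) for x in DOMAIN for y in DOMAIN)  # 16

    print("Assertions 1-16 are all true under the interpretation I.")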

The Löwenheim-Skolem theorem. This extensive example should give the semantic formalist pause. The example indicates that the formal structure of a language cannot constrain the ordinary meaning that words have. In particular, there is nothing that one can say regarding elephants that constrains an interpretation of what is said to be about elephants. Of course, our sample language is severely incomplete. It begs for further enlargement, with a trunk-fragment, an ear-fragment, a circus-fragment, an animal-fragment, a dog-fragment, a hippopotamus-fragment, and so on. But this does not alter our claim. The numerical interpretation can always be adjusted to accommodate any such enlargement. The truth of the matter is that when one speaks in a simple language, everything that one can consistently say about anything has a numerical interpretation that makes it true.

In 1920, in the context of model theory for mathematics, Skolem proved the Löwenheim-Skolem theorem, which states that

any consistent set of formulas of first-order predicate logic has some interpretation in the natural numbers such that all the sentences in the set are true under that interpretation. (18)

Since a simple language is defined to be a consistent set of sentences that extends first-order predicate logic by the addition of asserted sentences, the theorem can be restated as follows:

any consistent set of sentences of a simple language has some interpretation in the natural numbers such that all the sentences in the set are true under that interpretation.

It has long been known that the Löwenheim-Skolem theorem generates seemingly paradoxical results for mathematical theory. On the one hand, mathematics has theorems that state truths about real numbers that are necessarily false about natural numbers, such as, "for some number n, n×n=2." Yet, according to the Löwenheim-Skolem theorem, all the theorems of mathematics, including the one just mentioned, state truths about natural numbers, under a certain interpretation whose objects are just natural numbers. An unexpected result indeed. But it must be kept in mind that the paradox is generated only in the presence of an (unnoticed) original interpretation that provides the mathematical terms with the ordinary meaning of real numbers. (19) The theorems of real number theory state truths about real numbers only to the extent that we take the terms of the theory to be about real numbers. That a similar result obtains for natural languages generally, is, therefore, not so surprising.

The application of the Löwenheim-Skolem theorem to natural languages is also not new. Putnam, in his paper Models and Reality, [. . . . . . . . . . . . . . ]



The argument against semantic formalism. We are now in a position to present the main argument. We will first list the steps of the argument and then turn to its review.

1. It is logically possible that there is a simple language L whose ordinary meaning of its descriptive words is non-numerical. That is, there is a possible situation w1 with a simple language L whose ordinary meaning of its descriptive words is non-numerical. [Premiss]

2. Every consistent set of sentences of a simple language, including the entire simple language, has a model in the natural numbers. [Löwenheim-Skolem Theorem]

3. So, the simple language L has a model in the natural numbers. [by 1,2]

4. If there is a simple language that has a model in some domain, then it is logically possible that there is a simple language that is identical to the given one in all respects except that its descriptive terms are understood by its users as being about the items of the domain, in the way indicated in the model, so that the ordinary meaning of the descriptive words of the language is as specified in the model. [Premiss]

5. So, it is logically possible that there is a simple language L* that is identical to L in all respects except that the ordinary meaning of the descriptive words of L* is numerical. That is, there is a possible situation w2 with a simple language L* that is identical to L in all respects except that the ordinary meaning of the descriptive words of L* is numerical. [by 3,4]

6. So, L and L* have the same words, the same rules of well-formedness, and the same assertability and assertion sets, and hence, the same formal meaning of their words, but the ordinary meaning of the descriptive words of L* is numerical, (and the ordinary meaning of the same descriptive words of L is non-numerical). [by 5]

7. So, if semantic formalism is true, then the ordinary meaning of each of the descriptive words of L and L* is numerical, as well as non-numerical, which is impossible. [by 6, SF3b]

8. So, semantic formalism is false. [by 7]

Review of the argument.

The first premiss, line 1, states that a certain language whose words have a certain ordinary meaning is logically possible. To verify this we need only to carefully consider what is said to ascertain that there is no contradiction. We can all imagine a linguistic community in which the speakers speak a simple language and in which the speakers take themselves to be talking about things other than numbers. We may suppose such a language to be simple English. So, this proves the first premiss.

The second premiss, line 4, again states that a certain language is logically possible, and again, the focus is on the ordinary meaning that the words have in that language. But this time the proposal is more complicated. By definition, a simple language is a consistent set of sentences, and by construction, any interpretation of any consistent language is consistent. The issue is whether the words of the language can have as their ordinary meaning what the interpretation specifies, and in this case, whether we can imagine a linguistic community in which a certain language is spoken and in which the speakers take themselves to be talking only about numbers. Is that logically possible? The worry here is whether there are things that one can say that cannot be taken to be about numbers. The point of the earlier example of the elephant-language fragment was to lay that worry to rest. There is nothing that one can say about elephants that cannot be taken to be about numbers. And more generally, knowing that there is a model available for a certain language lays to rest any worry about whether such an interpretation on the part of the speakers is possible. (20) And that proves the second premiss.

Eliminative semantic formalism. Our argument depends on a characterization of language as a system of words that have an ordinary meaning. Can the semantic formalist dismiss the argument by rejecting this characterization, by claiming that the words of a language do not have an ordinary meaning, that the only meaning words have is a formal meaning?

This position is significantly different from the one originally considered. Let us call the original position non-eliminative semantic formalism and the new position eliminative semantic formalism. A non-eliminativist view grants that words have an ordinary meaning, and holds that this meaning somehow derives from, or supervenes on, the formal meaning that words have. An eliminativist view, on the other hand, holds that we may think that words have an ordinary meaning, but that this is a mistake, an illusion. Words such as "elephant," "tree," "money," "idea," "number," do not mean what we think they mean. They do not have a content. They are not about anything. Their meaning is only formal, completely defined by the internal, formal relations among words that characterize a language.

(1) We may start by noting that the eliminativist view is an incredible affront to our ordinary understanding of what languages are, as well as an affront to scientific inquiry. What reason can the eliminativist give that allows him to simply dismiss what to everyone seems to be an obvious fact about word meaning, namely that words have an ordinary meaning? We may make a comparison to an eliminativist approach in cognitive science regarding mental states. Here it may plausibly be argued that mental states as ordinarily understood are mysterious kinds of things, that do not seem accessible to the scrutiny of science, and worse, are ontologically incompatible with scientific theory, and that we ought therefore to conclude that there are no such things. But ordinary meaning need be nothing mysterious, and is altogether compatible with scientific theory. In fact, much scientific theory incorporates the view that ordinary meaning consists of external relations that words have to our experience of the world around us. Eliminative semantic formalism appears, therefore, to be unsupportable.

(2) But there is a more serious problem. If words have no ordinary meaning, and there are only the formal relations that words have to words, how is it possible for a language to accomplish its essential function of effective communication? If words have no ordinary meaning, then there can be nothing that is communicated. To put it another way, words that have only a formal meaning are incapable of being the instruments of communication.

Consider the following situation, under the supposition that words have no ordinary meaning but a formal meaning only. Speakers A and B speak the same language (say, first-order Mangalese), and A utters the sentence S, "se fura mangal ebarigan tah," in the presence of B. Since words have no ordinary meaning, there is no shared understanding of words among the speakers of the linguistic community. They have, then, a certain freedom in this regard. So, we will suppose that A and B correctly interpret everything that can be said in the language, including S, in completely different and incompatible ways. Specifically, we suppose that A correctly interprets S as saying that some angry elephants are approaching the village, and that he takes the appropriate actions of sounding the alarm and getting his elephant gun, and that B correctly interprets S as saying that some prime numbers are less than 50, and that he takes the appropriate actions of pouring himself a cup of tea and sitting down in comfort to calculate what those numbers are. Needless to say, both A and B are much puzzled as to why the other person is acting so strangely. How in this circumstance could communication have taken place, as it must in a language? Our argument then is as follows: (21)

1. Suppose there is a language whose words have no ordinary meaning, and consider a circumstance in which speaker A utters a sentence S in the presence of speaker B.

2. So, the speakers of the language do not have a shared understanding of words.

3. So, the circumstance described does not entail that A and B do not interpret the sentences of the language in certain ways.

4. So, it is possible that there is a circumstance in which

      (a) A utters S to B,
      (b) A and B correctly interpret everything that can be said in the language, including S, in completely different and incompatible ways,
      (c) A and B respond in appropriate manners M and N to sentence S,
      (d) M and N are not both appropriate responses in the language at hand.

5. But no circumstance like that is possible.

6. So, there is no language whose words have no ordinary meaning.

The eliminativist cannot avoid this refutation by claiming that the hypothetical situation described above is impossible because 4(b) by itself is impossible. A language with two incompatible and correct interpretations is indeed possible. According to the Löwenheim-Skolem theorem, a simple language has an interpretation in the natural numbers, and as for the competing interpretation, consider the ordinary interpretation that interprets the descriptive words of the simple language as referring to the items of our ordinary experience. Note how the attack is blocked when languages are taken to have an ordinary meaning: that a language has ordinary meaning does entail that its speakers do not understand its sentences in non-ordinary ways.

Further evasion. A persistent semantic formalist might reply that semantic formalism has been shown to be false only for simple languages, and that the argument does not apply to more complex languages, including natural languages such as English. This is indeed correct. The Löwenheim-Skolem theorem does not apply to higher-order languages, which permit, beyond the expressions of a simple language, the well-formedness of predications, functions, and quantifications of items of the language itself.

There are two responses to this evasion. First, semantic formalism is a general thesis regarding the philosophical problem of how word meaning is to be accounted for. This position is an alternative to accounts that say that word meaning arises from sources external to language, such as the intentions, concepts, or thoughts of the speakers, or physical objects or the experience thereof. Semantic formalism, then, is a theory of word-meaning that applies to languages generally, and we have shown that theory to be false.

Second, one must ask how one can assert a semantic formalism for a more complex language, when that position is false for a similar, simple language. What features of a complex language such as English are able to bestow ordinary meaning on words, when a very similar language without those features is devoid of any ordinary meaning? What is it about higher-level predicates such as "is a property of," "is true of," or "it is provable that," that in their presence would bestow on all descriptive words, such as "elephant," "tree," "house," and "apple," their ordinary meaning, and that in their absence leave these words devoid of this meaning? There can be no answer here. Higher-order languages add nothing toward the production of meaning beyond what simple languages have in this regard. If semantic formalism is false for simple languages, it is also false for higher-order languages.

In conclusion, we have shown that the words of a language must have an ordinary meaning. We have also shown that the ordinary meaning of words cannot derive from their formal meaning. We infer from this that the ordinary meaning of words must derive from a source outside of language. For example, a familiar and plausible view is that ordinary meaning is established through the speakers' public selection of an interpretation in the domain of the objects of the speakers' publicly shared private experience. (22) But it was not the purpose of this paper to enter into the difficult debates on this side of the matter. At least we may conclude that, since semantic formalism has been shown to be false, every competing theory is thereby strengthened.


Arnold vander Nat
Loyola University Chicago
Fall, 1999



ENDNOTES

1. Saussure in his Course in General Linguistics (transl. 1966) says that "language is a system of interdependent terms in which the value of each term results solely from the simultaneous presence of the others" (p.114), and "language is characterized as a system based entirely on the opposition of its concrete units" (p.107), and "each linguistic term derives its value from its opposition to all other terms" (p.88).

2. Structuralists have been explicit in this rejection: Hawkes asserts that "a language . . . does not construct its formations of words by reference to the patterns of 'reality', but on the basis of its own internal and self-sufficient rules", and "The word 'dog' exists, and functions with the structure of the English Language, without reference to any four-legged barking creature's real existence" (1997, p. 17). Likewise, Culler asserts that "since the sign has no necessary core which must persist, it must be defined as a relational entity, in its relation to other signs" (1976, p.36).

3. We simply note that since languages exist only in relation to linguistic communities, the kind of meaning that is at issue here is the public meaning that words have in a linguistic community.

4. One cannot simply reject SF1b as being false because one supposes that one can imagine the languages as having different meanings in the two situations. Such a response surely begs the question. If the semantic formalist is right, then one cannot imagine the languages having different meanings. What is needed is a non-question-begging proof.

5. Following Wittgenstein, we may think of such relations as little rules that contribute to the specification of what one is permitted to assert.

6. Cf. E. Mendelson, Introduction to Mathematical Logic, 2nd. ed., D. Van Nostrand, 1979, pp. 29ff. and 58ff.

7. Rules such as those that make up the transformational grammars introduced by N. Chomsky in Aspects of the Theory of Syntax, M.I.T. Press, 1965, and by others.

8. Languages are in one sense, therefore, a fixed (infinite) collection of well-formed expressions. But in another sense they are not fixed: languages change over time, as words, grammatical constructions, and assertions are added, or deleted. For our purposes such changes define new languages.

9. Assertions are to be distinguished from the tentative assertive claims and hypotheses that individual speakers may make and later revise. Such sentences pertain more to the dynamic aspect of language, a consideration of which does not affect the results of our inquiry.

10. There is, of course, another sense in which some sentences are assertable, namely, that they may be asserted, not on the basis of something previously asserted, but on the basis of something else, such as belief based on thought and experience. Since this kind of assertability introduces a source external to language and is rejected by the semantic formalist, we will not include it in our discussion.

11. We note also that the assertability set of a language is closed under the rules of inference that characterize it, while the assertion subset is not. Actually, there are some technical distinctions one can make here. The assertability set can typically be described as having various subsets: (i) a subset of the assertion set (axioms), of sentences that are taken by the speakers to be necessary; (ii) a subset of the assertion set (postulates), of sentences that are taken by the speakers to be true but not necessary; (iii) the closure of (i) under some rules of inference (such as necessitation, modus ponens, universal instantiation, and universal generalization); (iv) the closure of (i), (ii), and (iii) under some rules of inference (such as modus ponens and universal instantiation).

12. It is in the context of assertion relations that authors sometimes say that the terms of a theory are contextually defined. We have no quarrel with such a distinction, provided it is understood that the meaning so derived is a formal meaning only.

13. It is not useful here to make the important distinction between the sense and reference of words (or their connotation and denotation, or their intension and extension), since these aspects of meaning involve external relations, an appeal to which would beg the question against the formalist. So too with present discussions of the narrow versus wide content of words. In any event, our argument does not depend on such distinctions. For such distinctions, see for example [. . . . . . . . . . . . . .]

14. A first-order language is one in which quantification and predication are permitted with respect to individual objects only, permitting sentences such as "some elephants are scared of any mouse that confronts them at any time and any place." In a higher-order language, predications, quantifications, and functions of items of the language itself are permitted as well, permitting sentences such as "some elephants have some characteristics that are not desirable," "some facts about elephants cannot be proven," and "the word 'elephant' is the first word in the sentence 'elephants exist'."

15. Proper names and common nouns are often referenced by pronouns. To accomplish such reference, names, nouns, and pronouns are (tacitly) associated with an index, either a constant index (a1, a2, a3, . . . ), or a variable index (x1, x2, x3, . . . ). Proper names and pronouns referring to them are associated with the same constant index, and common nouns and pronouns referring to them are associated with the same variable index, for example: "some mouse x chased Dumbo a even though he a was friendly to it x."

16. For example, the following sentence is a truth of Simple English: "If you and I are lovely people and have shoes, and every person wants whatever any person has, and has what is given to him, and if, moreover, all God's children are nasty people but he, being generous, nevertheless gives them whatever they want, then all God's children have shoes." This example is an exercise in Leblanc and Wisdom, Deductive Logic, Allyn and Bacon, 1972, p.235.

17. Interpretations exist independently of the speakers' selection of them. Interpretations consist of a domain of objects and a correspondence between words and objects. A vast number of domains exist, of elephants, of numbers, of atoms, of ideas, of words, of elephants and numbers, and so on, and one may or may not be selected. Likewise, a vast number of correspondences exist, and one may or may not be selected. For example, the first five words of this sentence have a correspondence with the numbers 4½, 10½, 4½, 7½, 6, under one of many functions, 1½×size.

18. Cf. Alonzo Church, Introduction to Mathematical Logic, Princeton Univ. Press, 1956, p. 244. Cf. also W. V. Quine, Methods of Logic, Holt, Rinehart, and Winston, 1959, p. 259.

19. W. V. Quine succinctly describes the situation: "taking U as the universe of real numbers, we are told that the truths about real numbers can by a reinterpretation be carried over into truths about positive integers. This consequence has been viewed as paradoxical, in the light of Cantor's proof that the real numbers cannot be exhaustively correlated with integers. But the air of paradox may be dispelled by this reflection: whatever disparities between real numbers and integers may be guaranteed in those original truths about real numbers, the guarantees are themselves revised in the reinterpretation. In a word and in general, the force of the Löwenheim-Skolem theorem is that the narrowly logical structure of a theory - the structure reflected in quantification and truth functions, in abstraction from any special predicates - is insufficient to distinguish its objects from the positive integers." [Methods of Logic, p. 259f.]

20. The possible language L2 being considered here must be properly understood. L2 is described as one in which the users of the language understand the words to be about numbers. But in a special sense. This ordinary, numerical meaning is modeled after the specifications of the interpretation at issue. L2 need not be a language about all the natural numbers, and the truths in the assertability set of L2 do not include even all the typical truths about natural numbers. The descriptive words of L2 are about those numbers that belong to the interpretation of the descriptive terms of L1, and this set, depending on the composition of L1, could be a much diminished set of natural numbers. And the truths of the assertion set of L2 are a select group of arithmetical truths that correspond to the relations expressed in the assertion set of L1, so that the assertable set of consequences is restricted to a very select set of arithmetical truths. (Think here of the meager set of numbers and arithmetical truths used in the simple elephant-fragment.) The point is that while the language L2 has an ordinary meaning of numbers and numerical operations and relations, it is not a language in which even kindergarten arithmetic can be done.

21. The line of argument used here can be found in standard refutations of behavioristic accounts, to the effect that matters of appropriateness must make an appeal to intensional items.

22. In the present context we may refer to this view as the selected interpretation view. On this view, ordinary meaning is established through a complex, recursive, process of public selection. On one front, a domain of objects is selected. The speakers of a language take themselves to be talking about the world of things that they find in their own experience, and things that pertain thereto. Nature has seen to it that this world is publicly shared. On another front, speakers select a correspondence between their words and the objects of this domain. This process includes that speakers publicly select certain items of their experience to be instances of words. When we say, "that is an elephant," we thereby select the bulky object before us as an instance of "elephant," as well as anything else that looks like it. (Such instantiation is much facilitated through the use of indexical words, such as, "this," "that," "here," "there," "now," "then," "he," "I," "you," and so on, whose reference changes from use to use.) Another part of this process is that speakers sometimes explicate the meaning of words through definition, or description, or some performance. The process includes as well the ongoing task of inductively learning what other speakers mean by their words. And it includes the public transmission of such selections, that is, the continued use of words in accordance with such selections. On a third front, the assertions of the language are publicly selected under a constraint of truth, in the sense that assertions are made on the basis of the speakers' experience of what obtains regarding the domain. A causal account that relates the speakers' personal experience of the domain to what obtains in the domain can guarantee that the assertions made in the language correspond to truths about the domain. Together the parts of this selection process constitute an approximate interpretation of the language.