
In computer science, programming languages can be described in terms of "Turing completeness": basically, whether a programming language is capable of expressing any* algorithm. A non-Turing-complete programming language has notable limitations on the ideas it can express.

Does such an idea also exist for spoken languages, as some measure of empirically describing the limitations of the ideas a language can be used to express?

My first thought as to a way of answering this question was to try to think of an idea that couldn't be expressed in a given language (quite a feat, given that the thinking itself happens largely within the confines of that language). I stumbled on ideas related to alternative versions of reality, for example: "Had he not gone back in time to change the past, he wouldn't have eventually had to avoid his younger self in the future." English seems to have the tenses to cover such a situation, but it's not difficult to imagine a language that doesn't.

*This is an oversimplification, of course. More accurately, a Turing-complete language can be used to express a Turing machine, which can itself solve most any computable problem, though caveats abound.
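To make the footnote concrete, here is a minimal sketch of a Turing machine simulator in Python. The rule format and the toy "bit-flipping" machine are my own illustration, not anything standard; the point is only how little machinery "express a Turing machine" actually requires.

```python
# A minimal Turing machine simulator. The rule-table format and the toy
# machine below are illustrative assumptions, not a standard encoding.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # '_' is the blank symbol
    head = 0
    while state != "halt":
        new_symbol, move, state = rules[(state, cells[head])]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Toy machine: flip every 0 to 1 and vice versa, halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_rules, "0110"))  # → 1001
```

Any language in which this loop can be written (unbounded iteration plus readable/writable unbounded storage) already clears the bar, which is why so few programming languages fail it.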

  • I feel any answer can at best be a somewhat subjective analogy, so I'll just offer mine as a comment... The theory of universal grammar postulates that any viable natural language ("viable" can be compared to "useful" in terms of how a programming language must be Turing-complete to be of much use) is, among other things, going to be recursive, but this has been claimed not to be the case for at least one language, namely Pirahã. – LjL Nov 06 '17 at 21:23
  • Do languages "spoken" by animals such as the various types (https://music.stackexchange.com/a/6471/19096) of birdsong count?

    If you do feel compelled to say "wait, birdsong is not enough of a language to qualify here" we might be on to something.

    – Tobia Tesan Nov 06 '17 at 22:20
  • @TobiaTesan By necessity, there has to be room for a language to be a language without meeting the pseudo-Turing standard – TheEnvironmentalist Nov 06 '17 at 23:03
  • @LjL: The Piraha recursivity claim is a very academic dispute. No one claims that it is impossible to express recursive concepts using the Piraha language: the claim is that recursion is not required to describe its grammar rules. For comparison, there are languages that don't have grammatical gender, but that doesn't mean that you can't talk about the concept of gender in those languages. – brass tacks Nov 07 '17 at 03:49
  • @LjL: E.g. Piraha supposedly, rather than embedding sentences, uses separate sentences to express ideas like the following (taken from Wikipedia): 'Everett stated that Pirahã cannot say "John's brother's house" but must say, "John has a brother. This brother has a house." in two separate sentences.' – brass tacks Nov 07 '17 at 03:51
  • Look at the criteria that a pidgin does not meet. – Adam Bittlingmayer Nov 11 '17 at 07:50
  • In semantics, FOL is to logical reasoning what Turing completeness is to computability, but this is not confined to linguistics. – Atamiri Nov 23 '17 at 15:37

4 Answers

14

In the realm of natural language, the "ideas a language can be used to express" are basically "any": all languages are capable of expressing any idea, so there's only one category of expressive type. Languages do differ in the way that they express a given idea. Assume a language Gwambomambo which lacks the word "recursion". That very word could be introduced into the language, just as "ballet" or "ghee" was introduced into English; or, a word might be invented using traditional roots of the language (e.g. "thing sits on itself"). Some languages have specific tenses for negative propositions and some use words like "not" to convey the idea; some languages have different forms of words to indicate that there is just one, or many, or maybe even just two, of the thing in question – other languages don't do this but do allow you to say "1 child", "many children", "more than 1 child" etc.

The greatest disparity between languages is lexical: for example, English needs a long expression to convey the notion "2-year-old male reindeer", whereas languages spoken by reindeer-herding cultures usually have a single word for it.

user6726
  • This is largely the case in programming languages as well: nearly every programming language is Turing-complete, as it takes very little in terms of language features to be Turing-complete, even if, as in spoken languages, some ideas may take a bit of roundabout language. However, there are a few examples of languages that emerged under very specific circumstances so as not to meet this set of requirements. Clearly, any language spoken by a modern society would have these features, so maybe I'm hunting for a very simple language that lacks sufficient complexity to meet such a standard. – TheEnvironmentalist Nov 06 '17 at 21:05
  • @TheEnvironmentalist I think any true human language is likely not to be that simple, because language of the necessary complexity emerged early enough in our evolution that it's universal. You might have to look at something like the calls of primates. – Barmar Nov 06 '17 at 22:56
  • I think the statement that any language can express any idea is questionable. To pick a random mathematical example, I can say in modern English that the entropy of a discrete random variable is always non-negative. But could I have said that in 19th century English? I think the only possible way would be to define the concepts of random variable and entropy, a significant undertaking whose result is to change the language, at least among technically oriented people for whom these concepts are meaningful. – N. Virgo Nov 07 '17 at 04:41
  • We see this in technical fields all the time, that changes in language make it possible to express concepts that simply weren't expressible before. – N. Virgo Nov 07 '17 at 04:41
  • @Nathaniel, I assume you're referring to the word "entropy": exactly like my example of "recursion". If there is no lexical item, you can still describe the situation you're referring to. It is not that the ideas weren't expressible, it's that they weren't expressible as efficiently. They had not yet become conventionalized into concepts. – user6726 Nov 07 '17 at 05:30
  • @user6726 I'm not referring just to the word entropy, but to the following idea: "the entropy of a discrete random variable is always non-negative". The point is that one can't just "describe the situation you're referring to", because the idea isn't expressible without having the words to express it. The only way to express such an idea is to change the language such that it can be expressed. – N. Virgo Nov 07 '17 at 05:37
  • Probably my entropy example is not abstract enough to get the point across. It would have been better to use the old classic, "a monad is just a monoid in the category of endofunctors." That one, I am confident, cannot be explained unless the listener has invested a substantial amount of time learning the language of category theory. – N. Virgo Nov 07 '17 at 05:38
  • @Nathaniel I think user6726's point is that if a textbook can be written that brings one from not understanding monads to understanding monads, using only words, then the concept of monads can be expressed, even if such a description would, without the right terms, be so inefficient as to require a textbook to explain it. An interesting tangent to take then would be to evaluate the efficiency of expressing various ideas in different languages. The original question though, was if there exist languages which are incapable of expressing certain ideas, with any finite amount of text – TheEnvironmentalist Nov 07 '17 at 05:56
  • @TheEnvironmentalist a sufficiently large textbook could take me from understanding English to understanding French as well, assuming I'm willing to put enough time into studying it. (Probably I would only end up understanding written French if I were to be foolish enough to attempt this, but that doesn't cause a problem for my point here.) Thus, a textbook could teach me the knowledge to understand any concept that can be expressed in French, albeit possibly quite inefficiently. – N. Virgo Nov 07 '17 at 06:09
  • Given this, your question seems to come down to whether there exists a natural language for which this is not the case, i.e. if I speak language X, there can exist no textbook written in language X from which I could learn English or French. (Well, since not all languages have a written form we should probably replace the textbook with a series of spoken lectures.) While not an expert on the topic, I would be willing to bet the answer is no. – N. Virgo Nov 07 '17 at 06:12
  • (I note in passing that this is related to the compiler theorem in computer science.) – N. Virgo Nov 07 '17 at 06:15
  • @Nathaniel Not another natural language, but some particular natural language. Notably, some ideas of Japanese are nearly impossible to describe to native English speakers, and seem to require practice with native speakers to build an intuition. If a language doesn't allow one to learn specific features in another language, you become trapped in the monad of limited expressiveness, incapable of describing your way out. – TheEnvironmentalist Nov 07 '17 at 06:15
  • "...all languages are capable of expressing any idea..." - I'm no linguist, but I'm very curious about this point. How do you mean, or, where does this notion come from? Is this a technical statement about linguistics, or an epistemological one about the philosophy of language? –  Nov 07 '17 at 06:18
  • @TheEnvironmentalist You are correct, where I said "another" I should have said "some particular other". I have plenty of personal experience of how difficult it is to learn Japanese as a native English speaker, but by your argument above this just means that concepts unique to Japanese can only be explained much less efficiently to an English speaker, not that they can't be expressed at all. I think we are probably not disagreeing here - my point to user6726 was just that English + category theory is quite a different language than English alone, and that it has additional expressiveness. – N. Virgo Nov 07 '17 at 06:20
  • @Nathaniel My point with Japanese was that some aspects of the language are tacit knowledge, in that they can only be uncovered through experience with trial and error using the mind's powerful machinery for intuition, rather than being explainable through the pseudo-Turing-incomplete language – TheEnvironmentalist Nov 07 '17 at 06:25
  • @TheEnvironmentalist that's certainly true of Japanese, although the examples I know of are all at the level of grammar. (An analogy: a Haskell compiler will never understand the meaning of GOTO, but it can still compute the same set of functions that a BASIC interpreter can.) I do not know whether the same kind of thing can exist in natural language at the level of a bona fide concept, or even whether there is a precise distinction between these things. Thus I am convinced your question is a good one, and I hope you get a well sourced answer that addresses these points. – N. Virgo Nov 07 '17 at 06:43
  • @TheEnvironmentalist I'm a professional Japanese translator and a linguist, and I've never found those mysterious indescribable ideas you speak of. I (and any other competent translator) can translate any book in Japanese, Old or Modern, into intelligible English, in such a manner that you'd be freely able to discuss the ideas in the book with a Japanese person who read the original. I therefore take Japanese to be equivalently expressive to English or my native Portuguese or any other natural language. There's nothing magic or unusual about it. – melissa_boiko Nov 10 '17 at 09:20
  • Further, while my experience as a language teacher is limited, I'm certain that I, or any other Japanese-speaking linguist, can explain (describe/put in words) any word, concept or grammar pattern of the language to an English speaker using English (or to a Portuguese speaker using Portuguese, etc). That of course doesn't mean they will acquire the skill of using the language; but this is also the case for all languages, acquiring the skill ≠ explicit explanation. Reading about a language is like reading about riding a bicycle. Still, conscious analysis is perfectly doable. – melissa_boiko Nov 10 '17 at 09:26
  • @leoboiko You misunderstood my point about Japanese. I did not mean there were aspects to the language which are not translatable to English or any other "pseudo-Turing-complete" language, I meant there may be some aspects to the language that are not teachable strictly in words, but require experience beyond explanation to grasp. In other words, I'd theorize you could perhaps never, after reading any number of books or spending any number of years in solo practice, master Japanese from books alone, even if it were your sole life pursuit, without actual conversation with a native – TheEnvironmentalist Nov 10 '17 at 09:43
  • @TheEnvironmentalist I disagree; I could read any English book and understand any movie by age 20 without ever talking in English with anyone, much less a native (I learned it strictly from books and text-based videogames). I also submit that we are able to learn Latin or Sanskrit from books and understand any book written in them, no matter how subtle, poetic or philosophical, even though there are no native speakers left for us to speak to. And, finally, we can also read Old Japanese and Old English (languages distinct from the modern ones) without a time machine. – melissa_boiko Nov 10 '17 at 12:21
  • (though, again, I want to emphasize the difference between learning about a language consciously, and acquiring the skill of using it, which is a process that happens unconsciously and is a natural faculty of the brain. And, sure, speaking with people is the most natural environment for acquisition to occur; extensive reading is not. But this is not because some words or ideas are impossible to explain in other languages (which is false); it's just a consequence of the environment in which the faculty of language acquisition evolved.) – melissa_boiko Nov 10 '17 at 12:25
  • @Nathaniel: Suppose I use 19th-century English (which presumably already had vocabulary for functions and sets) to define sigma-algebras, probability measures, random variables, entropy and whatever necessary concepts in-between that need to be expressed, then proceeded to state that sentence. Could you say I have expressed the idea? – WavesWashSands Nov 10 '17 at 16:19
  • @WavesWashSands yes you would have done --- my point is that in doing so you would have changed the language. – N. Virgo Nov 11 '17 at 00:00
  • I downvote this because no language can explain what a strawberry smells like to a person who has never smelled strawberries. – Anixx Feb 23 '20 at 14:33
11

In computer science, one essential property of all Turing-complete languages is that they are able to describe, "in their own way", how they themselves work.

For example, you can use a Turing machine to express how a Turing machine works.

Similarly, you can write, for example, a Prolog program that can interpret Prolog programs.

In the linguistic realm, it would seem to me that an analogue of "Turing completeness" is a language that can express concepts of the language itself.

For example, you can use English to describe English grammar, English words, etc. Similarly, you can use Latin to describe Latin grammar. But you will in all likelihood not be able to use a language consisting of, say, 3 different whistled tones to describe how the tones themselves are constructed and related. This follows from the simple fact that the vocabulary and grammar are too restricted to talk about anything on the meta-level, at least under reasonable assumptions (note that you can encode any information even with a single whistled tone, by varying the time intervals between occurrences of the tone).

In computer science, a Turing machine that describes, in general terms, how a Turing machine works, is called a universal Turing machine. Characteristically, a universal Turing machine is able to execute its own description, arbitrarily deeply layered.

So, as a linguistic analogy, a language that can be used to express properties of its own constructs could maybe be called a universal language?

In any case, I think this reflective ability is a good test to see whether a language, any language, is expressive enough also for other important tasks.
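The "executable description" idea above can be sketched in a few lines of code. The toy stack language and its instruction names below are my own illustration; the point is that a program can be plain data which an interpreter executes, just as a universal Turing machine executes an encoded machine, and a rich enough language can host such an interpreter for itself, arbitrarily deeply nested.

```python
# A tiny interpreter for a made-up stack language (an illustrative
# assumption, not any real language). Programs are plain data:
# lists of (opcode, *arguments) tuples.
def interpret(program):
    stack = []
    for op, *args in program:
        if op == "push":          # push a literal value
            stack.append(args[0])
        elif op == "add":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":         # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack

# The "description" being executed: compute (2 + 3) * 4.
result = interpret([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
print(result)  # → [20]
```

The interpreter is itself just a program, so a sufficiently expressive object language could contain its own `interpret`; that reflexivity is the property the answer proposes as the linguistic test.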

mat
  • ... and then the philosopher said, "is not a physicist simply a collection of atoms attempting to understand other atoms?" – Aaron Nov 07 '17 at 03:01
  • Interesting how we took opposite approaches. I looked as far outward as I could, trying to find complex, obscure situations that a language may not be capable of handling. You looked as far inward as you could, asking a language to describe itself. – TheEnvironmentalist Nov 07 '17 at 05:58
  • Holistic and reductionist viewpoints? :) I'm reading Gödel, Escher, Bach at the moment, this question caught my eye since a lot of the book refers to similar concepts. https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

    People viewing this question and subsequent answers may find it deeply interesting, as I have.

    – Olical Nov 07 '17 at 11:13
  • Two whistled tones -- that's the FSK my old MODEM used. Don't confuse the medium with the message. – amI Nov 07 '17 at 21:13
3

I have had the same thought before and this is what I have found.

There are two main concerns. Semantic completeness and grammatical completeness.

Semantics: A language needs a minimum set of meanings, but the lack of some arbitrary noun representing an abstract or concrete thing does not make the language incomplete in terms of thought. If the language lacks such a word, it's likely because the thing is unknown to the speakers of that language. A noun is interchangeable in the structure of a language, so you can simply add a noun for it and voilà, they can think about it. To have the thought of a cup, you must know of cups, but that is not a limit of your language. We are only concerned with the ability to have thoughts, i.e. fundamental semantic structures.

For semantic completeness, I refer to the Natural Semantic Metalanguage (NSM). There are 62 primes. An explication is a breakdown of a non-prime concept into prime ones; for instance, the word 'kill' can be explicated in these primes as follows.

Someone X killed someone Y:
someone X did something to someone else Y
because of this, something happened to Y at the same time
because of this, something happened to Y's body
because of this, after this Y was not living anymore

Source: https://en.wikipedia.org/wiki/Natural_semantic_metalanguage

I also think Cognitive Linguistics might have insight here. A language surely must have ways to express all the image schemas, for example as containment "in".

Grammar: I do not know yet, but I imagine there is a list somewhere. If it hasn't been constructed, it really ought to be! For instance, a language must have a way to determine which word in a sentence is the subject and which word is the object.

Clearly there is a minimum set, or else creoles would not differ so much in their sophistication from their parent pidgins.

Finally I want to add that though I said nouns and verbs need not be there for 'thought completeness', perhaps they do for a sort of minimum human completeness. There are certain concepts which are found in all languages, because all people deal with them no matter how different they are: words for basic things like 'people' and 'animal', emotions, basic colours, basic objects like 'tree' and 'bird', basic verbs like 'run' and 'sleep', 'sky', 'ground'. For a good example of such words, you can check out the Swadesh list. However, the Swadesh list is not exhaustive, just a selection; a more exhaustive set of such concepts could be constructed.

curiousdannii
Sylar
  • NSM actually is a grammar as well. The primes are put into word classes and have particular subcategorisation frames. – curiousdannii Nov 10 '17 at 10:14
  • 'For instance, a language must have a way to determine which word in a sentence is the subject and which word is the object.' I'm not sure that's necessary. Grammatical relations exist to encode stuff like topicality and semantic roles in a more compact manner, but I don't see why a language can't be 'complete' in some sense but mark these more primitive categories directly, without the intermediate level of grammatical relations. This language will probably be less efficient, but I doubt its expressiveness will be compromised. – WavesWashSands Nov 10 '17 at 16:08
  • Hm. I don't follow. Can you give me an example? – Sylar Nov 10 '17 at 16:18
  • "A language needs a minimum set of meanings" my 5 cts here are that language expresses learned concepts. – J. Doe Nov 23 '17 at 11:50
-1

"A language needs a minimum set of meanings", as said above!

My five cents here are that language expresses learnt concepts "stored" in our minds.

So, there is the real world, and for its objects we can find symbols. There are also social interactions, which can likewise be expressed by words. More complex are introspective tokens expressing mood or feeling, for example; these, I think, are well represented in most languages.

There have been peoples with no concept of ownership or money, so the whole set of economics-related terms was absent and could not be expressed. When their learnt set of concepts got extended, so did the language.

Have a look at the language extensions introduced with the car age: I am sure there are phrases and expressions which appeared only then, like "gib Gas" in German, meaning "go faster" even without any relation to driving, but actually coming from "hit the gas pedal". Possibly, in the future age of electric cars, this will become outdated and disappear again.

My final reference here is semiotics, the science that gives us foundations for how we express our symbols in language and link the two.

J. Doe