In the experimental linguistics community there’s an ongoing debate on the type of speech used for analyses. On the one hand are laboratory linguists working with tightly controlled, scripted speech. On the other we have linguists engaging with conversational interaction, or interested in more “natural” speech, who record spontaneous speech in informal settings. Many studies are, of course, somewhere in between these two extremes, but, at least in the phonetics community, there’s a tendency for researchers to prefer one of the two kinds of data.
The thing is, however, that studies from different branches of experimental linguistics have shown that the settings in which the speech is recorded have a large influence on our results. My own work, which I’m presenting at the UKLVC conference in York this week, shows how uptalk rises change significantly across different speech styles. So in, say, laboratory settings, are we recording how uptalk is used in actual, everyday speech, or in the kind of formal speech you use in research settings?
To answer this question we first need to decide what “actual, everyday speech” really is. Sociolinguists have known for a long time that everyday conversations are made up of a network of different speech styles, and that different dialects, ethnolects or even languages are used in most people’s daily conversations. This type of knowledge makes it harder for us to maintain a clear distinction between formal “lab styles” and informal “everyday styles”.
It also makes it difficult to make a clear choice as to which kind of speech style is preferable for experimental linguists. Both ends of the spectrum have their advantages – lab speech makes it easier to focus on specific phenomena and more fine-grained processes without the added influence of other variables such as faster speaking rates or accommodation to the speech of other people in the conversation. It’s also good for researchers working on very rare phenomena, as they’re able to ensure that their data sample will contain enough specimens to conduct their analyses. Spontaneous speech can shift the focus to broader trends in the data set, and lets the linguist observe exactly those other variables that come into play in more informal speech styles.
It may be that what we experimentalists need is not so much a certain type of speech data, but rather an increased awareness of the kinds of speech styles we elicit and how they influence our results. The key to understanding speech is, I think, to understand it in all its complexity. So my best suggestion for a solution to this debate is not to pick out one speech style over another, but rather to make sure we have lots of research on different speech styles, and on the way they affect our conversations and our linguistic results.
One of the linguistic stories that’s been doing the rounds in the news this week has been about the whistled language of Kuşköy in Northern Turkey. It’s one of several documented whistled languages, typically used by peoples living in mountainous or forested terrain to communicate over distances. Such languages aren’t the community’s only language, but are based on the community’s spoken language, in this case Turkish. Now, the story this week didn’t concern the discovery of this language – researchers have known about it for decades. It actually concerned some new findings about how speakers’ (or whistlers’) brains function when they speak (or whistle).
When we hear about such radically different linguistic systems from our own, we’re often amazed, and intrigued.
How is it possible? The interesting thing is that we are perhaps really asking: how is it a possible language? Because we have no doubt that it is a language. How do we know this?
Well, there are two ways of viewing language. One is the code-based model, in which language is based on association – between a signal and something in the world, and between a signal and a response. The word ‘cat’, for instance, is associated with something in the world, namely all cats. Whistled languages are clearly possible from this perspective, because the whistles are associated with a meaning, which is something out there in the world, albeit out there in people’s minds. (And at this point let’s not get into the debate as to how complex this system of associations has to be to qualify as ‘language’.) Messages are ‘encoded’ by speakers and ‘decoded’ by hearers.
The other view is the ostensive-inferential communication model. Sperber and Wilson, from a cognitive perspective, and Scott-Phillips, from an evolutionary perspective, would argue that this is the key to human communication – and human language. Indeed, it is what makes human language what it is, and so very different from any animal communication system. The idea is this: what enables us to use the conventions of language (the phonemes, morphemes, and syntax of English, or whatever other variety we happen to have learnt) is our amazingly prosocial nature. We are able to use signals intentionally to communicate a message. And not just with the intention that the hearer (or viewer, if we are making a gesture) understands our signal, but rather that they recognise that we intend to make a signal and that we intend for them to recognise our intention (that’s the ostensive-inferential bit).
This means, to take Scott-Phillips’ example, that tilting our coffee cup towards a waitress in a café whilst catching her eye is understood as a request for a top-up; tilting our cup as a result of an animated discussion with our companion, on the other hand, is not taken by the waitress as any sort of communication on our part. When we use language, it’s just the same. Language (words and grammar) lets us get our meaning across much more precisely and extensively, but it is always underdetermined (unlike mathematical languages, for instance) – we can never articulate every single aspect of the meaning that we intend to communicate in that context, and frequently we don’t even try, knowing that our interlocutor can ‘fill in the rest’.
So, back to our whistled languages. We recognise them as languages because they do use a complex inventory of sounds and structure to encode a message (the code aspect of language), and because they are used as language (the ostensive-inferential aspect). The whistled messages are intentionally directed to a hearer for them to recognise the whistler’s intention and their intended meaning. (And, incidentally, even if you wanted to say that they aren’t languages proper, this still makes them much more like language than other animal communication systems, just as with our gestures.)
If you’ve still got a minute, why not watch the video of some kuş dili whistlers and appreciate what an ingenious linguistic adaptation it is!
References
Scott-Phillips, T. (2014). Speaking Our Minds: Why human communication is different, and how language evolved to make it special. Palgrave Macmillan.
Sperber, D., & Wilson, D. (1995). Relevance: Communication and Cognition (2nd ed.). Blackwell.
A recent article by Naomi Wolf rather strongly implies that certain ways of speaking currently found frequently amongst young women are a sign of weakness. Women should abandon them, Wolf argues, and adopt “stronger” modes of speech. Types of speech Wolf attacks include “uptalk” and “vocal fry” (more on what these actually are very shortly). This struck me as interesting because, from a linguistic point of view, there’s not obviously anything particularly “weak” about talking in these ways.
“Uptalk” is the one you could make the best case for being “weak”. Uptalk is the use of a “questioning” intonation with statements: in physical terms, this means a rising pitch at the end of a sentence. Now, to a certain degree there is a non-arbitrary association of high pitch with smallness and weakness: very roughly speaking, small animals like mice and bats tend to make higher-pitched noises than big animals like elephants and humpback whales. And amongst humans, larger people may (in general) tend to have deeper voices than smaller people, an effect of their tending to have larger voice boxes.
So, if we assume that smaller things are weaker than bigger things, it follows more-or-less naturally that higher-sounding things will tend to be weaker, or be perceived as weaker, than lower-sounding things. But a great many tendencies and generalities are worked into the assumptions required to make this argument, and it’s certainly not a straightforward fact of the natural world that if something makes a high-pitched noise it is therefore weak.
What I found really interesting is that vocal fry (or “creaky voice”) is in some senses the opposite of uptalk. Whilst uptalk involves the voice rising to a high pitch, or high-frequency vibration of the vocal folds, vocal fry involves very low frequencies of vocal fold vibration. Our perceptions of pitch tend to relate to frequency of vocal fold vibration, so vocal fry often sounds quite deep.
But if we can make something of an argument for the association of uptalk with weakness not being entirely arbitrary, it’s difficult to see how we can make the same argument for the association of weakness with vocal fry. If anything, the same (still slightly dodgy) line of argumentation should lead us to associate vocal fry with strength: bigger, stronger things make deeper noises.
Ultimately, then, there isn’t anything inherently “weak” about speaking in this way. Any connections between weakness and vocal fry (and this is probably largely true of uptalk too) are basically arbitrary: socially conditioned perceptions rather than scientifically establishable biological or physical facts. There is nothing objectively weak about certain young women speaking the way they do. Rather, certain people’s existing tendency to associate the young and the female with weakness leads, in turn, to them associating weakness with stereotypically young, female modes of speech. It seems rather unfair to blame young women themselves for this.
Since the advent of widely available and powerful computers, much of the work done in sociolinguistics and historical linguistics has used the methodology of corpus linguistics. A ‘corpus’ (from the Latin for ‘body’) technically speaking is simply a big collection of texts, and once upon a time would have referred to printed texts. In other contexts we would still talk about ‘the corpus of Old English poetry’, for example, to refer to all surviving Old English poetry, or ‘the corpus of Plutarch’ to refer to everything written by Plutarch. In linguistics, however, ‘corpus’ has come to have a specialised meaning: a digital collection of language data which can be easily searched and quantified. ‘Corpus linguistics’, then, refers to the methods for undertaking research on such corpora.
Here’s a little example of the sort of research I mean. Some verbs in English have two possible forms of the past tense, an irregular one with a -t and a regular one with an -ed. Two of these are spell (spelled vs. spelt) and spill (spilled vs. spilt). There’s a long-standing tendency in the history of English for irregular verbs to become regular, so we might guess that these two possible forms reflect ongoing change towards the regular form. There’s also a general tendency for writing to be more conservative (that is, old-fashioned) than speech during ongoing change. So we have a hypothesis we can test: in a corpus of modern-day written and spoken English, for these two verbs, the irregular forms will be more common in writing than they are in speech. I’ve done a quick search of the British National Corpus to test this hypothesis.
As it turned out, our hypothesis was completely wrong. For both verbs, the irregular form is much more common in speech than in writing. A chi-squared test tells us that this difference is significant for spill (χ² = 56.578, df = 1, p < 0.001) and spell (χ² = 67.143, df = 1, p < 0.001): that implies that there is a real difference between the ways speakers choose which form to use when writing and when speaking for both of these verbs, but that the difference is the opposite of what we predicted.
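If you’d like to try this sort of test yourself, here’s a minimal sketch in Python using scipy. The counts below are illustrative placeholders, not the actual BNC figures (the post only reports the test statistics), and the variable names are my own.

```python
# Hedged sketch: a chi-squared test of independence between verb form
# (irregular vs. regular) and mode (speech vs. writing).
# The counts are made-up placeholders, NOT real BNC frequencies.
from scipy.stats import chi2_contingency

observed = [
    [30, 80],    # "spilt": hypothetical counts in speech, writing
    [6, 413],    # "spilled": hypothetical counts in speech, writing
]

chi2, p, dof, _ = chi2_contingency(observed, correction=False)
print(f"chi-squared = {chi2:.3f}, df = {dof}, p = {p:.3g}")
```

A low p-value here would indicate that the choice between -t and -ed really does pattern differently in speech and writing.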
One apparent advantage of corpus linguistics is that it offers quick ways to approach very open-ended questions without first having to formulate specific hypotheses in the way we did above. Questions like ‘what are the differences between spoken and written English?’ or ‘how has English changed between today and twenty years ago?’ would normally be very hard to answer directly. With corpus linguistics, however, we can quickly process very large amounts of data to trawl for such differences by using ‘keyword analysis’.
Keyword analysis simply looks for words that are more frequent in one corpus than another. Because two corpora are unlikely to be exactly the same size, with keyword analysis we don’t look at raw frequencies of words—it wouldn’t be surprising or interesting that a word was more frequent in a million word corpus than a thousand word corpus. Instead, we look at relative frequencies: effectively, the percentage of all words represented by the word of interest. Another way of thinking of these relative frequencies is as the frequency of the word of interest per thousand words (or per million words, or whatever). Words that have a significantly different relative frequency in one corpus than in another are then called keywords.
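As a quick illustration of that normalisation (my own, with made-up counts; only the corpus sizes are taken from the table below), the calculation is just a division and a rescaling:

```python
# Relative frequency expressed per million words; the counts in the example
# calls are purely illustrative.
def per_million(count: int, corpus_size: int) -> float:
    return count / corpus_size * 1_000_000

print(per_million(100, 87_953_932))  # ~1.14 occurrences per million words
print(per_million(10, 10_409_851))   # ~0.96 occurrences per million words
```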
To take one of our examples above, we might predict that spilled would be a keyword for the written BNC compared with the spoken BNC:
Written BNC: 87,953,932 words in total; relative frequency of spilled 4.696 × 10⁻⁶ (≈ 4.70 per million words).
Spoken BNC: 10,409,851 words in total; relative frequency of spilled 5.764 × 10⁻⁷ (≈ 0.58 per million words).
Here we can see, as expected, that spilled is much more frequent in written texts, occurring more than eight times as frequently as in spoken data. This difference is statistically significant (χ² = 31.079, df = 1, p < 0.001), making spilled a keyword of the written BNC compared with the spoken BNC. If you do the maths, you’ll find the same is true for the other three words we’ve looked at.
As we’re using computers, we can undertake this kind of analysis en masse, and compare the relative frequencies of every distinct word in two corpora. At first glance, this seems like a wonderfully easy way to answer the sorts of general questions we posed above: compare the relative frequencies of all words in two corpora and identify all of those which are significantly more frequent in one than in the other; the resulting list of keywords is then a list of the differences in language use between the two corpora.
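Here’s a rough sketch of what that en-masse comparison might look like in Python, assuming the two corpora are available as plain lists of tokens; the function name, the significance threshold and the use of scipy’s chi-squared test are my own choices, not a description of any particular corpus tool.

```python
# Hedged sketch of mass keyword analysis: flag every word that is
# significantly more frequent (relative to corpus size) in corpus_a.
from collections import Counter
from scipy.stats import chi2_contingency

def keywords(corpus_a, corpus_b, alpha=0.001):
    freq_a, freq_b = Counter(corpus_a), Counter(corpus_b)
    n_a, n_b = len(corpus_a), len(corpus_b)
    found = []
    for word in set(freq_a) | set(freq_b):
        a, b = freq_a[word], freq_b[word]
        # 2x2 table: this word vs. all other words, in corpus A vs. corpus B
        table = [[a, b], [n_a - a, n_b - b]]
        chi2, p, _, _ = chi2_contingency(table, correction=False)
        if p < alpha and a / n_a > b / n_b:
            found.append((word, 1e6 * a / n_a, 1e6 * b / n_b, p))
    # Most strongly significant keywords first
    return sorted(found, key=lambda item: item[3])
```

In practice you’d also want to be wary of very rare words, for which the chi-squared approximation is unreliable – chi-squared issues are one of the problems touched on at the end of this post.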
As it turns out, however, there are some real problems with this methodology. The first concerns what we take the results of keyword analysis to mean. So far we’ve talked about seeing differences in the frequencies of words in different corpora as evidence of language being used differently in those corpora. But this isn’t the only possible explanation for keywords. To take the example of spelled discussed above, we might explain its higher relative frequency in the written BNC as evidence that people tend to choose spelled rather than another option (in this case spelt) more frequently in written language, but we might alternatively explain it by suggesting that people write about spelling more often than they talk about spelling. This would then reflect a difference not in how language was being used, but in what it was being used for.
It turns out that keywords actually very frequently reflect just these sort of differences—differences in topics being talked or written about, the contexts in which the language use is taking place and the social roles occupied by the speakers—rather than differences in the way language is being used. And from a raw list of words generated by en masse keyword analysis, it’s very hard to know what sort of difference to attribute each keyword to.
Incidentally, we can get an indication of whether this explanation is correct for spelled by adding together our numbers for spelled and spelt to give us the frequency with which any past tense of spell is used. It turns out that this too is a keyword for the written BNC, strongly indicating that people contributing to the BNC did indeed write about spelling more than they spoke about it. But we already had an indication that our original explanation is also correct—spelled is used much more frequently relative to spelt in the written BNC compared with the spoken BNC. So it seems that this keyword actually reflects both kinds of explanation.
The other main problem with mass keyword analysis concerns the use of the chi-squared test, but it’s a little too technical for this post. If you’re interested in reading a more detailed discussion of both of these problems, possible solutions and a case study, check out my article.
Ask Chris: Why do adult Chinese learners of English need to learn about ‘attribute’, ‘adjunct’, ‘complement’ and so on and so forth? Even if we don’t know the names of these grammatical structures, we are still able to learn English well, right? I do know that these structures also exist in Standard Chinese, but we don’t need to know them in order to speak Chinese well. So what will happen if we ignore the names of these grammatical structures when we learn English?
(Note by Chris: If you are a native English speaker, you may have similar feelings when learning other languages, and please feel free to change the description to ‘adult British learner of French, German, Latin’, etc.)
Chris answers: Of course we can ignore the names of these grammatical structures – I myself might be a good example. I had no idea about the differences between the so-called ‘attribute’ and ‘complement’ and other structures before Grade 9 (the last year of junior secondary school), and I skipped almost all the relevant content in my English classes, but ten years on I can use English as my working language, including during the seven years I spent at English-speaking universities. But my case is not the whole story: most of my classmates at that time learned the definitions well and speak English well, and we can see that learning these bits of terminology could benefit the process of English learning (especially in the classroom setting in China) to some extent. So how can we explain this?
You might well have heard about Noam Chomsky. One important contribution to modern linguistics by Chomsky is his proposal of Principles and Parameters, although it is a bit old-fashioned now. To put it briefly, the structures of different languages are all built up from principles and parameters, where ‘principles’ refers to the structures that are commonly shared among (almost) all languages (for example, Binding Principle A explains why ‘David believes Mike likes himself’ can only mean ‘David believes Mike likes Mike’), and ‘parameters’ refers to the features possessed by only some languages (for example, the null-subject parameter explains why an English sentence has to have a pronounced subject while an Italian sentence does not). If you are interested in Principles and Parameters, there are introductions suitable for a beginner. Now, it has to be said that the differences and similarities between learning a first language as a child and a second language as an adult are hotly disputed, and there is much ongoing debate. For the sake of this post, please allow me to ignore the intricacies of this debate and focus on a more Chomskyan view.
So what do we learn when we learn the grammar of a language? Based on the assumption of Principles and Parameters, we familiarise ourselves with the general principles and the parameters of a language in the process of learning it. If that language is a second language (L2), we do not need to review the general principles any more, so the workload is mainly familiarisation with the parameters. We can also split the process into three categories: (1) preserve the parameters shared between L1 and L2; (2) activate the parameters that only appear in L2; and (3) suppress the parameters that only appear in L1. A number of theories of second language acquisition which take Universal Grammar as their theoretical foundation indicate that these processes are exactly what we do when we learn the grammar of a new language.
When we acquire our first language(s), the process of acquiring principles and parameters feels rather natural. Thanks to our (innate) language ability, we can deduce the parameters simply by paying attention to the linguistic material we receive, even though that material can’t cover every single bit of the grammar of the language, and we then develop a sense of ‘native instinct’ for judging whether a sentence is grammatical. With this sense of ‘native instinct’, we qualify as native speakers of that language, and we can produce grammatical sentences without ever considering the terminology for grammatical structures, unless we are taught such knowledge at school. However, if we only start learning a second language once we enter formal education, it seems we can’t develop that ‘native instinct’ for the language any more.
Since we lose the ability to learn parameters apparently as ‘effortlessly’ as we did in infancy, we must rely on more conscious inferences in the process of language acquisition. Let’s stick to the case of adult Chinese learners of English from the question; here we have two possibilities. The first one is more like ‘acquisition’: we are in an English-speaking environment, and we organise the linguistic material we receive and infer the grammar patterns by ourselves. In such an environment, even if we don’t learn the grammar systematically, we can acquire the parameters on our own. We will never meet the terminology of grammatical structures like ‘attribute’ or ‘adjunct’, and thus have no idea about the formal structures of English, while at the same time we can acquire English well – just like my own experience over the past years. The immersion programmes introduced by the Quebec authorities and the EMI (English-medium instruction) schools in Hong Kong are both good examples of ‘acquiring a language without being taught grammatical terminology’. Clearly, this method is a kind of incomplete induction (while the acquisition of a first language is a kind of complete induction), and we know ‘we can say it this way’ only after we have seen it said this way, so the method requires us to experience sufficient examples, which is more feasible when we are in an environment where the language is used extensively.
The second possibility is more like ‘learning’: that is, we first come to know the rules of the grammar, and then fill the words and phrases into the rules. What we make use of is not language material from which to work out the parameters, but the parameters themselves. We understand that if the use of these parameters is grammatical, then sentences organised according to these parameters should be grammatical as well. The most prominent case of this second possibility is classroom learning, including translation training, grammar teaching and other methods you may have experienced at school. As we are dealing with the rules of language, we need to know every element of the grammar system, like ‘noun’, ‘verb’, ‘adjunct’ and ‘complement’ – or otherwise how can the teacher explain the complete grammatical structure to you? The introduction of terminology is a common part of grammar, or ‘parameter’, instruction, and it can benefit the process of explanation as well as memorisation. Since English does not hold any official status in China, and most students only learn the language in the school system, we cannot expect to acquire the language by simply receiving enough linguistic material. Grammar instruction is therefore needed for Chinese students who only take English exams, and the situation is the same for learners in other countries which do not have close connections with major English-speaking countries.
So the story becomes much easier now. Definitely, you can learn English without attending to the concepts of ‘attribute’, ‘adjunct’ and ‘complement’. You can try the first possibility I mentioned above: to work or study in an English-speaking environment for a while, or create an English-speaking environment around you while you are still in China. The former choice is more costly and sometimes requires you to hold some English qualifications, while the latter one is more time-consuming, especially if you are a full-time student or employed in a Chinese-speaking institute.
But there is another possibility, which is developed from the second possibility above and which I have attempted myself: you can skip the complicated definitions of those terms and directly learn the parameters of English within the framework of generative syntax. For example, since English is a wh-moving language, when I form a subordinate clause I should move the wh-word to the specifier of CP (Spec-CP), building the sentence in the following way:
I went to the building + They were holding the seminar [in the building] → I went to the building they were holding the seminar [where] → I went to the building where they were holding the seminar.
In a similar fashion, when you talk about ‘third-person singular’, I say ‘subject-verb agreement’; when you talk about ‘negation inversion’, I say ‘verb second word order from Germanic languages’; when you talk about ‘object complement’, I say ‘complex predicate’. So you can totally avoid the definitions of the grammatical structures, but at the same time you may need some knowledge of Principles and Parameters theory, which might be a bit more complicated but definitely useful.
While we recognise that grammar instruction is easy to implement and highly efficient in classroom settings, we should also admit that we can only learn a limited number of the rules of a language this way. The limits of time, of teachers’ knowledge and of the scope of examinations lead us to learn only a subset of the so-called ‘standard grammar’, and may at the same time also foster a prescriptive view of language: only those rules in the textbook are correct, and everything else is wrong. If we strictly follow grammar instruction, we are not able to learn structures that are accepted by native speakers but absent from the textbook, like ‘to boldly split infinitives that no man had split before’. So there’s a trade-off between efficiency and actual language proficiency. Maybe now you can decide which road to take for yourself, and good luck!
The International Congress of Celtic Studies XV in Glasgow featured a discussion roundtable on the future of the “Celtic” languages, initiated and organized by yours truly. With several experts presenting papers on the state of Irish, Scottish Gaelic, Welsh and Breton, as well as some contributions on Manx and some Cornish activists present, the event did not lack in expertise – nor in interest, with a healthy audience of a few dozen adding to the nine panelists. The reason for this interest lies in the fact that all of the surviving Celtic languages are to some degree endangered or indeed revived, with Welsh the healthiest of the bunch with about half a million regular speakers, while Breton (which had around 1 million speakers 100 years ago, 40–50% of whom are estimated to have been monolinguals, speaking only Breton fluently) is close to disappearing as a native language.
Note the careful differentiation of terms concerning the “speakers” of a language: speakers of a minority language can be categorized into different groups, sometimes up to eight of them. Common ones are “traditional native speakers” and speakers of a non-traditional variant (particularly in Irish). However, if you look at the non-fully-native varieties, you can see various (often useful) distinctions. There are, for instance, “heritage speakers”: speakers who come from a certain (minority) linguistic background (say, Irish) through both of their parents’ mother tongue, but for whom, due to the utter dominance of another language in the community (for example English), the language they learnt first and foremost (theoretically) is not their best language. The level they attain in their heritage language can vary considerably, depending for instance on how pervasive the dominant language is and on whether the heritage speaker grew up in an emigrant context or in a minority-language context in the country of origin itself. Some languages exist almost exclusively as heritage languages, as could increasingly be argued for Scottish Gaelic. Irish (as well as Welsh, and to a lesser extent Scottish Gaelic) has the added dimension of featuring a substantial number of “new” (native) speakers, who could be seen as the opposite of heritage speakers, as well as occasional speakers of various levels of competence. New (or neo-) speakers are speakers who were brought up by their parents predominantly in a language (typically a minority language) which was not the parents’ own native language, for example English-speaking parents raising their children in Irish. Interestingly, the reversed case – parents trying to raise their children in English despite being more or less monolingual Irish speakers themselves, especially in the 19th century and following the Irish Famine (1845–48) – played a fundamental role in the decline of Irish as well as in the development of Hiberno-English, with its unprecedented output of great literary works around the turn of the twentieth century.
Additionally, among learners there is a large group of pupils speaking Irish (or Welsh). But the transfer out of the school context – never mind the ‘holy grail’ of intergenerational transmission, upon which the survival of a language is generally deemed to hinge – remains elusive. Where the transfer has generally worked is with cultural community speakers who meet in conversation groups just for a love of the language. However, with all non-fully-native speakers the level and frequency of usage varies hugely. The vast majority never quite transcends a basic level or attains fluency combined with grammatical accuracy.
With this background to the discussion round, the group of panelists was carefully chosen to include representatives of both the traditional languages and the new varieties, often coinciding with different generations (referring to speakers as well as the academics representing them).
One major issue in minority language policy is the dichotomy between traditional and non-traditional speakers. In extreme cases like Breton, these two groups are partly unable (or unwilling) to communicate with each other. (Fascinatingly, this dichotomy proved to be transferred to some extent to the academics!) The problem is that in practice, it appears to be a trade-off as to which group to focus on. This is not based only on budgetary constraints, though they may play a role, but rather on the fact that there is an identity gap between these groups, which often coincides with a generational gap. Instead of perceiving themselves as a privileged and united group or linguistic community – which does happen in theory and in some cases, of course – there is often a tendency towards growing resentment between the groups. At the core lies the double-edged sword of identity and language purism: neo-speakers, and even more so learners, sometimes struggle with their linguistic identity, especially when faced with traditional speakers, who in turn feel their native language to be a possession of their own which is being tarnished by the often untraditional or even entirely ungrammatical forms of learners. (Imagine almost everyone you talk to speaking incomprehensible English, while another, better medium of communication is readily available – similar to the “Whose English?” debate, only with heightened stakes due to the precarious situation of the languages.) In Brittany, this leads to offers from volunteer native speakers to teach at schools and bridge the gap often being politely declined, so as to “protect” learners from the disillusionment of their own shortcomings in Breton proficiency. Manx, on the other hand, being a dead-and-revived language without traditional native speakers, has shed that burden of different speaker groups and hence forms more of a linguistic unit, albeit a rather small one with a couple of hundred speakers at best.
While panelists were initially at pains to deny the existence of the chasm between traditional and neo-speakers, it became increasingly difficult to paper over the cracks during the discussion. The trade-off between traditional language purism and getting more speakers to actually use a language (through their own choice, as well as by being given the opportunity by other speakers and by the state’s language policy) will remain with us until we find a solution for the identificational and speech-generational gap.
A related session (Friday, July 31st) will address similar topics.
If you ask a primary school child what a poem is, you might get a reply as simple as “words that rhyme.” However, as adults we know that poetry is far more complex and that it can take many different forms. If we were to limit our understanding of poetry to simply “words that rhyme”, we would miss out not only on whole swathes of English-language poetry but also on poetic forms in other languages, such as haiku. If we understand that spoken language poetry is not exclusively about rhyme, then we should also acknowledge that poetry itself is not exclusively spoken or written. In this post, I will briefly write about some features of sign language poetry and compare these to spoken language poetry.
Rhyme is probably the first thing that pops into your mind when you think about poetry. Perfect rhyme (like that between ‘plate’ and ‘date’) is the most obvious, but it is just one of a number of possible rhymes. For example, poets frequently use assonance (words sharing a vowel sound, like ‘purple’ and ‘curtain’), consonance (words sharing consonants, like ‘bitten’ and ‘better’) and alliteration (words with the same initial consonants, like ‘shiver’ and ‘shake’). They can use these rhymes to create a certain effect, for example using a series of fricatives (like [s], [z], [ʃ] or [ʒ] in English) to evoke the sound of the sea. Rhyme essentially employs phonological features to create artistic effect. The same is true in sign language poetry. In my previous post, I discussed sign language phonology. Sign languages exploit phonological features to create rhyme in the same way spoken languages do. Poets create rhyme by using a series of signs that share one or more phonological features (handshape, location, movement, orientation and nonmanual features). For example, a BSL poet might describe a scene of snow falling in a forest whilst a deer walks past a log cabin with a fire inside. This would be a series of rhymes in BSL because the signs involved all use the same handshape (you can see rhyme with repeated use of a flat handshape in Walter Kadiki’s ‘Butterfly Hands’, below). Similarly, a poet might use a series of signs with the same movement or the same location to create rhyme. Related to rhyme is the frequent use of symmetry in sign poetry. Symmetry is found when both hands have the same handshape and their locations mirror each other, as is found in many BSL signs.
I am sure that if you think back to the first poem you ever analysed at school, you will remember having to mark stressed and unstressed syllables and count how many feet (groups of syllables) there were in a line. You may remember being told how sonnets are written in iambic pentameter, supposedly to evoke the rhythm of a human heartbeat. Clever use of rhythm in spoken poetry can create a variety of effects, such as echoing the canter of the cavalry in Tennyson’s ‘The Charge of the Light Brigade’ or the sound of a steam train thundering along railroad tracks in Auden’s ‘Night Mail’. Similarly, sign language poets can alter the speed and stress of signs to create a certain rhythm. For example, a poem about lying in the sun might have slow, languid movements, but a poem about running away from a tiger would have sharp, hurried movements. Watch Jolanta Lapiak’s ‘The Moon in my Bedroom’ to see how she uses rhythm to create a relaxing night-time scene.
Literary Devices
Sign language poetry is capable of employing all the same literary devices as spoken language poetry. For example, poems often have allegorical meanings, especially related to how Deaf culture is treated by the hearing population. Another literary device often employed in sign poetry is anthropomorphism, as it is possible for a signer to role-shift and ‘become’ a particular creature in the narrative (see this device being used in Richard Carter’s poem ‘Deaf Trees’, below). Sign language poems can also use irony, hyperbole, understatement and a whole host of other literary devices to relay their message in a visually striking way.
The wonderful thing about sign language poetry is that even if you do not know the language being used, you can still appreciate the visual imagery and decipher a certain amount of meaning. I hope that this post has encouraged you to go out and explore this dynamic medium. If you are interested in more of Richard Carter’s poetry, or would like to learn more about Jolanta Lapiak’s work, both have work available online. For more insight into sign language poetry, look for Dr Rachel Sutton-Spence’s wonderful book on sign language poetry (written with Paddy Ladd and Gillian Rudd; Basingstoke: Palgrave Macmillan, 2004).
Recently I was at an excellent workshop on methodology in Berlin. One of the themes that kept cropping up was the need to ‘air our dirty laundry’ – to share the studies that didn’t quite work out the way we expected, that maybe told us nothing at all (apart from the fact that we’d come up with a not-so-great design), and that certainly won’t be published. But that doesn’t mean they were a waste of time, because by learning from them, and by not keeping those lessons to ourselves, we can make a – perhaps teeny tiny but not unimportant – contribution to progress in our corner of Linguistics (or wherever you happen to find yourself in academia).
So here’s my contribution to this mission.
As I mentioned previously, my PhD research is (partly) about how children develop the ability to make pragmatic inferences, and particularly what we linguists call implicatures – meaning that the speaker implies (or the hearer infers) something beyond the literal meaning of the speaker’s utterance. Here are a couple of classic cases (where +> indicates the implicated meaning).
Bob: Did you meet her parents?
Barry: I met her dad.
+> I met only her dad (and not her mum)
Bob: Did you eat the cookies?
Barry: I ate some cookies.
+> I ate only some cookies (but not all of them)
Now, the crucial thing that linguists who would identify as some sort of Gricean would maintain is that these inferences don’t take place in a vacuum, without any regard for who the speaker is or what he’s like; rather, the hearer (and speaker) pay attention to the context, to what they mutually know, to whether the speaker is co-operative (truthful, informative and using ‘normal’ language for the situation), and to whether the speaker is knowledgeable about what they’re saying. On that basis, the hearer makes some sort of inference about the speaker’s intended meaning.
Conversely, if any of those assumptions aren’t met, then the implicature won’t go through – or so the story goes.
Indeed, a few studies have shown that knowing that the speaker is at least partially ignorant about a situation they’re describing reduces the rate at which adult hearers make such inferences. For example, the difference between:
At my client’s request, I meticulously compiled the investment report. Some of the real estate investments lost money.
At my client’s request, I skimmed the investment report. Some of the real estate investments lost money. (Bergen & Grodner, 2012)
has an effect on the reading speed of critical segments (from which the presence of an implicature inference in the first but not the second case can be deduced).
We also know that young children are sensitive to how reliable speakers are when learning new words (Sobel, Sedivy, Buchanan & Hennessy, 2012), which arguably might involve some sort of pragmaticky inference too. So what my study aimed to do was to find out whether children, in my case 5-year-olds, would be sensitive to the speaker’s co-operativity, and whether this would affect the rate at which they made implicature inferences.
First of all, children were introduced to a character, let’s call her Sally, and listened and watched as she showed herself to be an under-informative and irrelevant speaker or hearer.
Then the children listened to some more stories about Sally, with the experimenter (aka me) telling the story, and Sally ‘interrupting’ with each critical sentence which could implicate something. Their task was to pick the picture that went with the story for each of these sentences.
The hypothesis was that if children notice and take account of the fact that Sally is unco-operative, then they would not infer the potential implicatures, and so would not choose the picture that reflects an implicated meaning, while picture choice for straightforward control sentences, where only the literal meaning is available, should be unaffected1.
So what happened? Kids certainly noticed (sometimes with a bit of experimenter prompting) that Sally was an odd communicator. When they had to choose a picture based on something Sally had said that was under-informative and did not distinguish between the two options available, they were rightly stumped. But when it came to the test phase, where one picture displayed the implicated meaning and one the literal meaning, they seemed to forget all about this. Or rather, they seemed to be very relieved that now they could get on with their task without the puppet causing problems! Only in one case did a girl decide that Sally must persistently mean something different from what she said, but this was applied across the board to everything, including the control items.
Why did this happen? Does it tell us something about how kids are different from adults (e.g., not able to keep track of speaker traits because of shorter attention span or memory)? Unlikely. I think a more probable explanation here is the task: the children were told to choose a picture that matched the story and that was their main goal. So they would use any strategy to achieve that, even if that meant disregarding things they knew about the speaker to derive a pragmatic inference ‘as per normal’. Prior experience, and therefore expectations, may have played a role too: for example, quite a few children were very clear that ‘some’ means ‘not all’ (contrary to the view of pragmaticians in this field).
The lesson here is to think about the experience of the participant in the task, and how the goals of the task interact with principles of communication, like Grice’s co-operativity. Are they in concert, or are they opposing forces that the listener-participant has to resolve? Is it (just) something linguistic that we’re asking participants to do, or also some higher-level, conscious reasoning?
As for me now, it’s back to the drawing board.
1. More specifically, in one version, there was a training phase with 6 items in which the character was either an unco-operative speaker or an unco-operative hearer, in stating or inferring under-informative or irrelevant content. There was then a test phase with 2 stories (4 in the full version), each containing 8 items, 3 of which tested scalar, ad hoc and relevance implicatures, 3 of which were controls, and 2 of which were under-informative (as a reminder of the character’s unco-operative nature).
References
Bergen, L., & Grodner, D. J. (2012). Speaker knowledge influences the comprehension of pragmatic inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1450.
Sobel, D. M., Sedivy, J., Buchanan, D. W., & Hennessy, R. (2012). Speaker reliability in preschoolers’ inferences about the meanings of novel words. Journal of Child Language, 39(1), 90–104.
If you’re interested in language, chances are you’ve wondered about things like how we put bits of language – sounds, meanings and words – together to create some larger expression of communicative meaning. In other words, you’ve probably wondered at some point about how to make a sentence. If you’ve been particularly keen and tried to read up about how linguists examine sentences, you’ve probably come across a bunch of funny-looking, often intimidating diagrams known as ‘syntax trees’. Unsurprisingly, many people are put off by the perceived complexity of a syntax tree, and are thus unable to go much further in their quest to understand how we make sentences. This post aims to resolve this problem by showing you the basics of how to grow your own syntax tree.
Why would I want to grow my own syntax tree?
Trees are important because they help us to understand how (and perhaps even why) we put linguistic items together when creating a sentence, since the pattern underlying a sentence isn’t necessarily the same as what we see on the surface. For example, Groucho Marx’s famous line ‘I shot an elephant in my pyjamas’ is dependent on two different syntactic structures for the humorous double meaning, enabling the one-liner ‘and how he got into my pyjamas I’ll never know’. If you can ‘grow’ your own sentence tree, you’ll be able to map out the patterns, and therefore uncover the underlying differences in these structures, for yourself.
Sowing the seeds: the basics
A syntax ‘tree’
Before we start, it’s worth remembering that the theory behind growing a syntax tree is still evolving: although linguists tend to agree on some fundamentals, there’s not necessarily a right or a wrong way of sowing the seeds and pruning the tree. Here, our basic syntax tree has the following: a verb-based “root”, a tense “trunk” and a sentence “crown”.
Our tree is anchored by its roots: the verb from which the rest of the sentence grows. A verbal ‘root’ (“VP”; ‘P’ stands for ‘phrase’) in English could be: grow, inspect, calculate, shake, wriggle.
Of course, we can’t have a tree without a trunk: likewise, we can’t have a verb unless it is appropriately modified to illustrate tense (or similar inflection). To illustrate tense (“TP”), we might need to modify a verb to show that it is an infinitive, e.g. to grow, to inspect; or a present, past or future tense: (he) shakes, calculated, will wriggle.
Tree Structure template
Finally, just as a crown tops off a tree, in a syntax tree, the ‘crown’ (“CP”) tops off the structure and tells us (amongst other things) what type of sentence we’re dealing with. Question words (e.g. ‘how many?’) and subordinators (e.g. ‘that’, ‘if’) go here, indicating ‘interrogative’ and ‘embedded sentence’ respectively. For the basic trees we’re growing here, we don’t need a CP, but in a real-life sentential forest, you’d of course want to sow sentential seeds that will grow into different types of sentences.
From seeds to sapling: your first syntax tree
All units are grouped together in twos, and are represented by a binary ‘branch’ (a triangle without the bottom line) in the tree. The more our sentence grows, the more the branches on our tree grow.
‘Tense’ and ‘Verb’ slots filled in
The three-level structure CP-TP-VP gives us our core sentence (or ‘tree’) template, but you’ve probably noticed there’s something missing: the ‘actors’ that take part in the ‘scene’ described by the verb, e.g. she will eat a cake. Now, in the surface structure, she is higher than the tense (will) and the verb (eat), but, as you might agree, the participants are a pretty crucial part of the sentence. A participant doesn’t denote sentence type (CP) or tense (TP). Instead, sentence participants are involved in telling us who does what to whom.
We already know that the ‘doing what’ is illustrated by the verb (i.e. ‘doing’ is the action, and ‘what’ is the actual meaning of a verb), anchoring the ‘root’ of our sentence. It stands to reason that the who and to whom also anchor the sentence within its ‘roots’. Indeed, there is a lot of cross-linguistic evidence for this, but all we need to know for now in order to grow our tree is that the participants – the subject (the “who”) and the object (the “(to) whom”) – originate at the roots of our tree, too.
Since all units are grouped together in twos, we next need to work out what groups together with what: in a sentence like ‘she will eat a cake’, the verb can only group first with either the subject (‘she’) or the object (‘a cake’). This initial grouping can then group with whatever’s left over, to form a larger unit. To work out what groups with what, we can ask the following questions:
Question: What will she do?
Answer: Eat a cake. ✓
Question: What will happen to the cake?
Answer: She will eat.
Answer: She will eat it. ✓
From the above questions, we can tell that ‘eat a cake’ (i.e. verb + object) form a complete group, whereas ‘she will eat’ (i.e. subject + verb) do not. We need to substitute in ‘it’ to complete the latter phrase, i.e. we need to put an object in to make it work. This suggests that the verb + object combination is our first grouping because it can stand alone (subject + verb cannot). The subject + (verb + object) combination must be our second grouping. Indeed, ‘she will eat a cake’ can stand alone as a functioning phrase, as exemplified by the following question:
Question: what will happen?
Answer: She will eat a cake.
Our sentence’s root structure must therefore have the verb ‘eat’ grouping first with ‘a cake’, with ‘she’ then grouping with that larger unit.
Of course, ‘she’ is now in the wrong place for the sentence we are growing (note that we have grown a simple question though!). We must therefore look for somewhere that ‘she’ can move to in order for the sentence to make sense. There is only one slot remaining: just above ‘will’. And that gives us the correct surface order: she will eat a cake.
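If you like thinking in code, here is a small sketch of the two structures as nested labelled brackets. This is my own notation, not a reproduction of the post’s diagrams: labels like DP and T' are conventional shorthand added for illustration, and the post’s “root/trunk/crown” metaphor maps onto VP/TP/CP.

```python
# Hedged sketch (my own notation): "she will eat a cake" as labelled brackets.

# Base structure: the subject 'she' is generated inside the verbal root,
# grouped with the verb + object unit we identified above.
base = ("TP", ("T", "will"),
              ("VP", ("DP", "she"),
                     ("V'", ("V", "eat"), ("DP", "a cake"))))

# Surface structure: 'she' has moved to the slot just above 'will'.
surface = ("TP", ("DP", "she"),
                 ("T'", ("T", "will"),
                        ("VP", ("V", "eat"), ("DP", "a cake"))))

def bracket(node):
    """Render a tuple-based tree as labelled bracketing, e.g. [VP [V eat] [DP a cake]]."""
    if isinstance(node, str):
        return node
    label, *children = node
    return "[" + label + " " + " ".join(bracket(child) for child in children) + "]"

print(bracket(base))     # [TP [T will] [VP [DP she] [V' [V eat] [DP a cake]]]]
print(bracket(surface))  # [TP [DP she] [T' [T will] [VP [V eat] [DP a cake]]]]
```

You can swap in the practice sentences below and check that the same two-step grouping (verb + object first, then subject) still works.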
And there you have it. We have grown a syntax tree! Why not have a go and see if you can grow your own simple sentences? Here are some you might like to try:
He will dance the tango.
She would play a game.
We have read a book.
She shall have music.
You have met my mother.
Groucho had shot an elephant.
In 2000, Jonathan Harrington and his colleagues at Macquarie University in Sydney wrote a series of publications on the Queen’s English. Literally. They compared a number of vowel sounds produced by the Queen in her annual Christmas messages from the 1950s to the same vowel sounds produced in the 1980s, and used female BBC broadcasters speaking standard Southern British English (SSBE) as a control group. The idea was to observe whether the Queen’s speech had changed over those 30 years, and whether it had moved closer to the English used by the control group. Their results indicated that not only had the Queen’s English changed quite substantially, it had changed in the direction of – though not reaching – the standard English produced by news broadcasters in the 1980s. Conclusion: the Queen no longer speaks the Queen’s English of the 1950s.
The articles, of course, sparked plenty of media interest. But is it really so strange that the Queen’s speech has changed? Firstly, with age, physiological changes to the vocal folds and vocal tract inevitably lead to changes in the voice. So the Queen’s pitch is, physiologically speaking, bound to have been lower in 1987 than when she was 30 years younger. Similar changes to the resonances of the vocal tract would have influenced the measures taken by Harrington and his colleagues. And secondly, language itself is not a stagnant entity. The way English is spoken in the UK changes over time, as does the speech of smaller speech communities such as the royal family. Not even the Queen’s aristocratic English is immune to this tendency.
Does that mean the Queen will eventually end up sounding like the rest of us? The answer is, in all likelihood, no. While her speech in the 1980s does not sound quite as cut-glass as the broadcast from the 1950s, it still sounds unmistakably upper class. Think of it this way: both her English and the SSBE of the middle-class public are changing, so although her vowels are likely to continue to move towards Harrington et al.’s 1980s SSBE targets, the rest of us have long stopped sounding like that. In other words, she will most likely continue to speak the Queen’s English; it’s just that the Queen’s English, like any other language variety, is not likely to stay the same over time.
So what exactly has changed from the 1950s to the 1980s? If you listen to the two YouTube clips below, you’ll notice a wealth of interesting phonetic phenomena. For instance, in the clip from 1957, notice how she says the word “often” (/ɔːfən/, or orfen, around 0:55 in the clip), whereas the 1987 Christmas message has her saying something closer to /ɒfən/ (or ofen) (at 2:33). Similarly, in the early clip her /uː/ vowel in “you” and “too” is very back, whereas in the later clip it’s more fronted, that is, closer to the vowel the rest of us are likely to produce. Another interesting feature to look out for is the second vowel in the word “happY”, which is produced like the vowel in “kit” in the early clip (e.g. “historY” at 1:22), but closer to the /iː/ vowel in the word “fleece” in the later clip. This latter point is further described and discussed in a later paper by Harrington and his colleagues.
If you’re interested in reading more on the Queen’s English, Harrington and colleagues’ original papers are well worth a look.
(Thanks to Adrian Leemann, who presented Harrington et al.’s work at our Phonetics and Phonology reading group, thus providing the material for this blog post).