PERSPECTIVE ARTICLE
Year: 2021 | Volume: 40 | Issue: 1 | Page: 3-17
A critical review of phonological theories
MN Hegde
Professor emeritus, California State University-Fresno, Fresno, United States
Date of Submission: 18-Nov-2021
Date of Acceptance: 10-Mar-2022
Date of Web Publication: 06-Sep-2022
Correspondence Address: Prof. M N Hegde, California State University-Fresno, United States
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/jose.JOSE_7_21
This is a critical review of two major phonological theories: the linear natural phonology and the nonlinear optimality theory. Natural phonological theory asserts that phonological processes are phonetically based. Phonological error patterns help organize treatment targets and assess generalization. However, natural phonology's explanation of speech sound learning in children does not attain the status of a scientific theory. Process proliferation and poor definitions are other limitations. Optimality theory proposes that speech sounds may be marked (complex, more difficult to produce, etc.) or unmarked (simple, easier to produce, etc.). Optimality replaces rules with markedness and faithfulness constraints. Constraints are common to all languages, but their rankings are unique to each language. Speakers can violate constraints ranked lower, but not those ranked higher, in their language. When speech is imminent, GEN (the generator) generates a variety of output (response) options and EVAL (the evaluator) selects an optimal output that is faithful to the higher-ranked constraints. There is no independent evidence for the existence of universal and innate constraints, specific language-based rankings, or the operation of GEN or EVAL. Assumptions of the universality of phonological rules and even the existence of such rules are speculative. That children have innate phonological knowledge is an untenable assumption. Most generative phonological theories have little or no empirical validity. Investigations of child-directed speech, statistical learning, implicit learning, sociolinguistics, usage- and exemplar-based phonology, and behavior analysis have all supported the view that children master their speech sounds (and language structures) through social interactions.
Keywords: Phonological error patterns, linear natural phonology theory, nonlinear optimality theory, phonological theories
How to cite this article: Hegde M N. A critical review of phonological theories. J All India Inst Speech Hear 2021;40:3-17
Phonology is the study of speech sounds and the rules that dictate the formation of sound sequences in forming syllables and words. The roots of phonology go back to Panini, the Indian Sanskrit grammarian of the 5th century BCE (Cardona, 1998; Shukla, 2006). Phonology as the study of a mental and innate sound system and the rules that govern that system is a product of the 20th century.
Phonology is linked with phonetics, which is the science of speech sound production and classification. Speech articulation is a phonetic event. Both phonology and phonetics study certain common factors of speech sounds. For instance, both are concerned with the description of speech sounds, sound sequences, and sound patterns that result when speech is produced. A major distinction is that phonology is concerned with abstract rules and knowledge that govern the production of speech sounds. Phonetic rules are grounded in speech physiology and acoustics; hence they are empirically observable and measurable. Phonological rules are a part of mental and unconscious knowledge; hence they are abstract and not directly observed. Phonetics is descriptive and experimental, whereas phonology is theoretical.
In speech-language pathology (SLP), the value of phonetic study of speech sounds is well-established and devoid of controversy. Speech-language pathologists (SLPs) appreciate the need to understand the physiological mechanism of speech sound production as well as the physical (acoustic) properties of speech sounds produced and modified in the human vocal tract. The value of phonological theories that entered SLP in more recent times, however, is debatable. Therefore, this paper offers a critical review of two major phonological theories and their relevance to an understanding of speech sound disorders in children.
A prototype of an innate mentalistic approach to language that began to influence SLP in the 1960s was Chomsky’s (1957) theory of universal grammar. Subsequently, Chomsky and Halle’s (1968) distinctive feature theory influenced the analysis of speech sounds and speech sound disorders. However, since the advent of newer phonological theories, the distinctive feature analysis has tapered off in SLP. Therefore, this review will be limited to currently influential phonological theories.
Natural Phonology: A Linear Theory
In a linear phonological theory, phonemic segments are independent of each other, not hierarchically organized, and form a linear string of segments. A segment may be a sound, a combination of sounds, or a unit that is more abstract than a sound (e.g., the sonorant quality of a sound). Examples of phonemic segments include such properties as vocalic, sonorant, low, nasal, voiced, and so forth. Chomsky and Halle’s (1968) distinctive feature theory is a classic and standard linear theory in which phonemic segments are a bundle of independent features that may combine with any other segment. Children have an inner level of mental representation of speech sounds from which they derive the outer level of surface productions. To translate mental representations to speech production, children apply the rules sequentially (i.e., linearly), one at a time, not simultaneously.
Phonological Processes
In their Natural phonology or natural phonological theory (NPT), Stampe (1979) and Donegan and Stampe (1979) proposed that to learn their speech sound productions, children simplify adult productions. Such simplifications are phonological processes that may affect an entire class of sounds sharing a common articulatory difficulty. Simplifications result in speech sound errors in the context of adult models, but those errors are unlearned because they stem from phonetic-physiological limitations. Learned speech sound errors cannot be attributed to a natural process (Donegan and Stampe, 1979). In SLP, the currently preferred term is phonological patterns, but I shall continue to use the term phonological processes because that is the term in the theory.
The theory is called natural because the children's simplifications of adult sound productions are due to their phonetic (speech production) limitations. Because children learning different languages simplify the adult productions in similar ways, Stampe proposed that phonological processes are both universal and natural. NPT retains the Chomskyan assumption (Chomsky, 1995) of an innately given adult phonological system that children are supposed to possess. However, in contrast to the Chomskyan theorists, natural phonologists believe that children do not follow some kind of rules in learning to produce their speech sounds. Processes are not abstract cognitive or mental rules; they are a product of the phonetic or physiological limitations of young children trying to master speech sounds. Children's speech improves as their speech production mechanism becomes more competent and their productions better match the adult models. Consequently, the simplification processes fade.
Phonological Processes vs. Rules
Phonological processes are unlearned, innate, involuntary, and natural and work at an unconscious level. Children cannot verbalize the processes they exhibit. Rules, on the other hand, are not natural because they are not based on physiological (phonetic) limitations. Most language rules are characteristics of the dialects of a verbal community, and hence are learned. Learned rules may be verbalized. Americans pronounce the word pentagon as [pεntagαn] and the British pronounce it as [pεntəgən]. Both are instances of dialectal learning, not a matter of phonetic limitations of the speakers, and hence not phonological processes. Most speakers in either dialect (American or British) may be able to describe the rule of how pentagon is pronounced in their dialect. However, a child who says [top] for stop is not following a rule. Given the child's phonetic limitations, it is a natural phonological process of cluster reduction, not a learned response. The child cannot verbalize the process of cluster reduction (Donegan and Stampe, 1979).
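Because patterns such as cluster reduction are identified by systematic target-production comparisons, the [top] for stop example can be restated procedurally. The following Python sketch is my own illustration, not part of NPT; it uses crude orthographic consonant/vowel strings where real clinical analysis would use phonetic transcriptions:

```python
# A minimal, purely illustrative sketch (not part of NPT) of flagging cluster
# reduction by comparing a target word with a child's production.

VOWELS = set("aeiou")

def initial_consonant_run(word: str) -> int:
    """Length of the consonant run at the beginning of the word."""
    run = 0
    for ch in word:
        if ch in VOWELS:
            break
        run += 1
    return run

def is_cluster_reduction(target: str, production: str) -> bool:
    """True if the target begins with a cluster that the production has reduced."""
    return (initial_consonant_run(target) >= 2
            and initial_consonant_run(production) < initial_consonant_run(target)
            and production in target)  # crude check: "top" is contained in "stop"

print(is_cluster_reduction("stop", "top"))  # True: the /st/ cluster was reduced
print(is_cluster_reduction("top", "top"))   # False: no cluster in the target
```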
Having rejected phonological rules, the NPT proposes phonological constraints, which are restrictions a language imposes on a phonological process. Constraints force children to overcome phonological processes. For example, many typically developing English-speaking children may delete final consonants. Natural simplification though it is, the final consonant deletion process has a constraint on it: there shall be word-final consonants in English. (It may be noted that such constraints are not universal; words in Spanish, Vietnamese, and many other languages have few or no word-final consonants.) Because of this constraint, typically developing children have to master the production of final consonants and thus eliminate that process.
Phonological Error Patterns
Whether a phonological process is normal or of clinical interest depends on the normative information. Natural phonological processes become of clinical interest only when they persist in a child who, based on normative information, is expected to produce the sounds correctly. The process of final consonant deletion, for example, is typical (hence not a disorder) in a 2-year-old but atypical (hence a disorder) in a 6-year-old. Children who misarticulate sounds their peers produce correctly have a speech sound disorder. Main phonological processes and the ages at which they are expected to disappear are shown in [Table 1].
NPT asserts that children suppress phonological processes as they become more proficient in correct speech sound production. Children who have not suppressed their phonological processes by the expected ages have a phonological disorder.
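Stated procedurally, this diagnostic logic is a simple threshold rule. The following Python sketch is illustrative only; the ages in it are placeholders of my own, not the normative values of [Table 1]:

```python
# A hedged sketch of the clinical decision rule described above: a pattern is
# of clinical interest only if it persists past the age at which it typically
# disappears. The ages below are illustrative placeholders, not Table 1 norms.

TYPICAL_AGE_OF_DISAPPEARANCE = {   # in years; illustrative only
    "final consonant deletion": 3.0,
    "cluster reduction": 4.0,
    "velar fronting": 3.5,
    "stopping of fricatives": 4.5,
}

def flag_atypical(patterns_observed, child_age_years):
    """Return the observed patterns that have persisted past their expected age."""
    return [p for p in patterns_observed
            if child_age_years > TYPICAL_AGE_OF_DISAPPEARANCE.get(p, float("inf"))]

# A 6-year-old still deleting final consonants is flagged; a 2-year-old is not.
print(flag_atypical(["final consonant deletion"], 6.0))  # ['final consonant deletion']
print(flag_atypical(["final consonant deletion"], 2.0))  # []
```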
Evaluation of Natural Phonological Theory
Although some clinical researchers have investigated the nonlinear optimality theory, SLPs continue to use the phonological processes of NPT in assessing children with speech sound disorders (Peña-Brooks and Hegde, 2022). Henceforth I use the preferred term phonological patterns instead of processes. A vast majority of clinicians diagnose phonological disorders in children on the basis of the age of disappearance of phonological patterns [see [Table 1]].
NPT offers several clinical advantages to SLPs. Stampe's (1979) basic claim that phonological error patterns stem from a still-developing speech production mechanism makes practical sense. Clinicians intuitively believe that articulatory errors in typical language learners are due to an immature articulatory system. Clinicians find NPT's claims more acceptable than the generative theorists' claim that some deeper mental processes cause speech sound disorders. Speech sound developmental data have documented a steady decline in speech sound errors or patterns of errors and a corresponding increase in sound production mastery as children grow older and presumably gain greater control over their speech production mechanism (see Peña-Brooks and Hegde, 2022 for a review and evaluation of studies). Furthermore, the availability of phonological pattern (process) assessment tools has facilitated the application of NPT to speech sound disorders. Phonological error patterns are a convenient way of grouping multiple errors that may seem bewilderingly scattered. SLPs may monitor the treated error patterns for generalized correct productions during treatment.
As a scientific hypothesis, the proposal that the phonetic (physiological) limitations of typically learning children cause speech sound errors appears plausible. At a minimum, speech sound learning in children is correlated with age, and age is correlated with improved speech production skills (see Peña-Brooks and Hegde, 2022 for a review of research). That children's phonological processes disappear as they become more proficient in sound production is also consistent with the developmental data and common sense.
In spite of its reasonable assumptions about the origin and disappearance of phonological patterns in children, the NPT has significant limitations. Its status as a scientific theory is questionable. NPT simply restates what investigators of speech sound learning in children have stated since Wellman et al.'s 1931 publication: As children grow older, (1) their speech production mechanism matures, (2) correct production of speech sounds increases, and (3) articulatory errors decrease. This is hardly a scientific theory. Most parents of typical speech sound learners will have concluded that much. It is akin to the statement that as children's motor skills improve, they walk better. No one takes this as a serious scientific theory of walking.
SLPs have uncovered other limitations of the NPT. A basic problem for the clinician is the ever-expanding number of poorly defined phonological processes or patterns. Researchers disagree on the definition and number of patterns that exist (Miccio and Scarpino, 2011). A few to more than two dozen patterns have been described. Even with a large number of patterns recognized, some errors of children remain unclassifiable. Distortions, a common category of speech sound disorders, do not fit any pattern. Errors that are more complex than the adult model are puzzling because, by definition, patterns are simplifications. For instance, a child's production of yak as [ræk] is not a simplification of the adult model ([jæk] is thought to be simpler than [ræk]). Some errors that many children exhibit do not form patterns. The typical suggestion that fewer errors are a motorically based articulation disorder whereas multiple, pattern-forming errors are a phonological disorder begs a crucial question. Do fewer errors (mild articulation disorder) have different causes than multiple errors (severe disorder)? Perhaps not. An affirmative answer suggests an unlikely causation of disorders. Generally, compared to a mild medical or behavioral disorder, a more severe disorder may have more of the same kind of causal variables. Therefore, the very distinction between articulation and phonological disorders is questionable.
The existence of unique patterns across languages and in children speaking the same language suggests that patterns are not universal. Initial consonant deletion, uncommon in English, may be fairly common in French, for example. Also, some children exhibit patterns that their peers do not (Yavas, 1994). The NPT cannot explain such unusual patterns. Within the theory, clinicians have no way to classify vowel errors, although others have suggested ways of classifying them (Ball et al., 2010; Reynolds, 1990).
NPT does not adequately explain how children learn to produce their speech sounds. The statement that to master speech sound productions, children should suppress phonological processes does not explain learning. How do children suppress natural processes? In fact, NPT does not need a suppression construct. If processes are errors due to an immature speech production mechanism, then a more mature mechanism will help produce the sounds correctly. Nothing needs to be suppressed. (Still, the other learning variables will have to be specified.) Bernhardt and Stemberger (1998) have found the construct of process suppression absurd. They suggest that the construct is equivalent to proposing that children learn to walk only when they suppress a falling-down process. Bernhardt and Stoel-Gammon (1994) point out that positive progression, not negative suppression, characterizes skill mastery.
The need for constraints that replace phonological rules is also unclear. Children do not learn to produce word-final consonants because of a constraint that there shall be word-final consonants in English. When they are phonetically capable, children begin to produce word-final consonants simply because they hear them in the speech directed to them.
NPT also shares some general limitations with other phonological theories. In a later section, I return to a general evaluation of phonological theories.
Optimality: A Nonlinear Theory
As noted, the classic Chomskyan assumption that phonemic segments are a bundle of independent features that may freely combine with each other is a linear concept. Rejecting that concept, several later theorists have built nonlinear models in which segmental and suprasegmental aspects of phonemes (and speech) are organized into hierarchically arranged tiers, not as linear strings. A nonlinear tier may be created to account for a speech phenomenon that linear theories fail to address. For instance, Chomsky and Halle (1968) did not handle tonal variations in their theory. Therefore, Goldsmith's (1979) autosegmental nonlinear theory inserted a tonal tier just above the sound segments. Another nonlinear theory, the metrical theory (Goldsmith, 1990; Schwartz, 1992), is mostly concerned with the syllable structures, stress patterns, and rhythms of speech, all inadequately handled in the Chomsky-Halle theory. None of these nonlinear theories has made significant inroads into SLP. The one that has been applied to some extent in analyzing speech sound disorders is the optimality theory.
In addition to a hierarchical notion of phoneme features, optimality and all other nonlinear theories share certain common assumptions. Most nonlinear theorists (1) redefine the mental and surface representations of the Chomskyan linear theories as input and output representations; (2) claim to have abandoned phonological rules and processes in favor of phonological constraints that are universal and innate; and (3) assume that in the minds of children, there is a representation of the sound system which constitutes an innate universal phonological knowledge.
Prince and Smolensky (1993) and McCarthy and Prince (1994) are the original proponents of optimality theory (OT). But first, what is optimality? Assume that a child gets a speech input. (She heard her brother say "cat," for example.) Now the child is ready to exhibit an output. (She is ready to imitate her brother or give a verbal response.) But for reasons I will specify later, the child will encounter numerous output options, including the word cat, an approximation of it, and many that are totally different. (The child could say any number of different words or syllables when trying to imitate or produce a specific word.) A mechanism in the child's mind will select one of those countless options as the most optimal to produce. (The child said something.) That optimal output is the winner, all other potential outputs are losers, and hence the name optimality.
In OT, phonological constraints control speech production. Constraints are innate and universal specifications of acceptable (and unacceptable) sound sequences in languages. Speakers generally obey these universal constraints. Whereas the rules of the Chomskyan universal grammar (UG) invariably apply to all languages, OT’s universal constraints are ranked differently and uniquely in each language. Across languages, the same constraint may be ranked higher or lower.
Constraints are innate, so children need not learn them. They only have to learn how the constraints are ranked in their language, because the rankings are not innate. Unlike the inviolable Chomskyan universal grammar rules, OT’s constraints may be violated or obeyed because even though they are universal, a specific constraint in a language may be ranked so low that it may be violated without consequences. There are two major sets of constraints: faithfulness and markedness.
Faithfulness
The faithfulness constraint requires that a child’s output (word productions) should be the same as its underlying input representation. Identity or similarity between inputs and outputs is faithfulness. In common terms, correct productions of sounds and words are faithful. If the mother said “cat” and the child said “cat,” then the child’s correct production is faithful to the adult model. When outputs differ from inputs, the faithfulness constraint is violated. Then the child’s production is a mismatch, not faithful, incorrect. Depending on the age of the child, mismatches may be grounds for diagnosing a phonological disorder. A few major faithfulness constraints are described in [Table 2].
Faithfulness constraints are straightforward in their meaning and application. When inputs and outputs match, speech sound productions meet the faithfulness constraints, and are judged correct. Markedness constraints are not this straightforward in their meaning or implications.
Markedness
Both the distinctive feature theory (Chomsky and Halle, 1968) and the natural phonological theory (Stampe, 1979) had accepted the concept of markedness that Trubetzkoy (1969) and Jakobson (1968) had originally introduced. Markedness now is a mainstream linguistic concept, not limited to OT (Haspelmath, 2006). However, it is a dominant concept in OT.
Originally, marked meant the presence and unmarked meant the absence of a specific feature in a phoneme. For example, /g/ is marked for voicing (+ voice) and /k/ is unmarked for it (– voice). Such a binary system was used in the distinctive feature theory (Chomsky and Halle, 1968). Currently, the meaning of markedness varies across theories, but in OT, it refers to sounds that are (a) complex, (b) difficult to produce, (c) not natural, (d) infrequent in languages, (e) abnormal, (f) unpredictable, (g) later-acquired, (h) language-specific, and (i) perceptually weak. Being the opposite of marked, unmarked sounds are (a) simple, (b) easy to produce, (c) natural, (d) frequently occurring in languages, (e) normal, (f) predictable, (g) early acquired, (h) universal, and (i) perceptually strong (Haspelmath, 2006; Hume, 2011). Natural features are those present in most if not all languages; unnatural features are those present in only a few languages. Sounds that are more easily perceived are perceptually strong; those that are difficult to perceive are perceptually weak.
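The original binary sense of markedness can be rendered concretely. Below is a small illustrative Python sketch of my own (not from any of the cited theories) that represents /k/ and /g/ as abbreviated feature bundles differing only in voicing:

```python
# Illustrative only: /k/ and /g/ as abbreviated feature bundles in the original
# binary sense of markedness. The feature set is drastically reduced.

K = {"consonantal": True, "velar": True, "voice": False}  # unmarked for voicing
G = {"consonantal": True, "velar": True, "voice": True}   # marked for voicing

# The two phonemes differ only in the voicing feature:
differing = [feature for feature in K if K[feature] != G[feature]]
print(differing)  # ['voice']
```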
In OT, markedness is a set of universal and innate constraints that presumably apply to the languages of the world and help explain speech sound errors (Dinnsen and Gierut, 2008). Markedness constraints are counterintuitive in that following them is a disorder, violating them is normal. A few major constraints are listed in [Table 3].
Languages share segments and segmental sequences more or less commonly (Yavas, 1994). For instance, features that are found in most if not all languages include (1) voiced and voiceless sounds; (2) stops, especially voiceless stops; (3) voiceless fricatives more so than voiced ones, especially the dental-alveolar /s/; (4) nasals; (5) the vowel /α/; (6) unrounded front vowels (e.g., /i/); and (7) CV (consonant-vowel) syllable structure, among others. In OT, vowels, glides, nasals, and stops are unmarked; they are natural, universal, and innate, and do not need the environmental input because the child already "knows" about them (has a mental representation).
Studies on speech development have shown that children master the unmarked and natural features sooner than the marked and less natural ones. For example, mastery of the unmarked (more natural) unrounded vowels precedes that of the marked rounded vowels. Children master the more natural (less marked) stops earlier than the less natural stops (see Peña-Brooks and Hegde, 2022 for a review of studies).
Marked features, on the other hand, require environmental input and learning. These include word-final consonants, consonantal clusters, fricatives, affricates, and liquids—all less natural, more complex, and unique to languages. Misarticulations also are more common on less natural and more unique sounds than on more natural and universal sounds.
Faithfulness and Markedness Interactions
Constraints are in a constant state of tension because they oppose each other. Speech production is a balancing act between these two opposing forces. Speakers cannot satisfy the opposing constraints simultaneously. When one constraint is fulfilled, another is violated. When one constraint is violated, another is fulfilled. Because conflicts are unavoidable, grammar aims to (a) reduce the number of conflicts and (b) keep violations to less serious (lower-ranked) constraints. OT postulates that an optimal output form (i.e., the eventual utterance) is the best among the available choices and violates fewer and lower-ranked constraints.
Unique ranking of universal constraints in a language distinguishes it from all other languages. For example, consonant singletons are simple, easy to produce, and hence unmarked and universal whereas consonant clusters are complex, difficult to produce, and unique to certain languages, including English, and hence marked. But the markedness constraint *COMPLEX (do not produce clusters) is ranked low in English because it has such clusters but high in Fijian because it has no clusters. English language speakers may violate the lower-ranked do not produce cluster constraint so they can produce the clusters. The Fijian speakers may not violate the higher-ranked *COMPLEX; they must be faithful to it. Essentially, it is the ranking of constraints, not the constraints per se, that will dictate whether certain features get produced or not (Barlow and Gierut, 1999). Productions that violate the lower-ranked constraints in the speakers’ language will be acceptable (optimal); those that violate the higher-ranked constraints will not be acceptable (non-optimal).
Vowels must not be nasal is another markedness constraint. This universal markedness constraint quickly ran into trouble because several languages do have nasal vowels. English does not, except in assimilation; therefore, the constraint is ranked high, and English speakers cannot violate it and should be faithful to it. In common terms, English speakers should not nasalize their vowels, and in fact, they typically do not. However, in Kannada and Hindi (two languages of India), some vowels are nasalized. Therefore, the markedness constraint vowels must not be nasalized is ranked low in Kannada, Hindi, and many other languages with nasalized vowels. Speakers of these languages, thanks to the low ranking of the constraint by the English phonologists, are permitted to violate the constraint and produce nasalized vowels.
GEN and EVAL
In OT, two mental mechanisms are operative in speech-language production. When there is an input to a child (i.e., the child heard a word, say, elephant), there is a GEN (the generator) that creates a variety of output candidates the child can select from. The generated outputs may be optimal (accurate or nearly so) or nonoptimal (less accurate or outright wrong).
The second mechanism, EVAL (the evaluator), scrutinizes the candidates that GEN has generated and selects a winning and optimal output that succeeds (gets said). The winning candidate adheres to higher-ranked constraints, violates the least number of constraints, and violates only the lower-ranked (less serious) constraints (Tesar, 2000). The unselected candidates are the losers the child's EVAL will have rejected. [Figure 1] illustrates how GEN and EVAL work.
Figure 1: A diagram illustrating the workings of the hypothetical GEN and EVAL in generating and selecting word productions in optimality theory. Note that the input candidate 1 is the most faithful. From A. Peña-Brooks and M. N. Hegde, 2015, Assessment and treatment of speech sound disorders in children (3rd ed., p. 119). Copyright 2015, Austin, TX: Pro-Ed Inc. Reproduced with permission.
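To make the claimed mechanism concrete, here is a toy Python sketch of the GEN-EVAL cycle for the input stop. It is my own construction under drastic simplifications (two constraints, a three-candidate GEN set, crude string-based violation counting), not an implementation endorsed by OT's proponents:

```python
# A toy sketch of GEN-EVAL: each constraint returns a violation count, and
# EVAL compares candidates on the highest-ranked constraint first
# (lexicographic comparison of violation profiles).

VOWELS = set("aeiou")

def star_complex(inp: str, cand: str) -> int:
    """Markedness *COMPLEX: one violation per consonant cluster in the output."""
    violations, run = 0, 0
    for ch in cand + "a":            # sentinel vowel closes a word-final run
        if ch in VOWELS:
            if run >= 2:
                violations += 1
            run = 0
        else:
            run += 1
    return violations

def max_io(inp: str, cand: str) -> int:
    """Faithfulness MAX: one violation per deleted segment (crude length difference)."""
    return max(0, len(inp) - len(cand))

def eval_optimal(inp, candidates, ranking):
    """EVAL: the winner has the best violation profile under the given ranking."""
    return min(candidates, key=lambda cand: tuple(c(inp, cand) for c in ranking))

gen_candidates = ["stop", "top", "sop"]     # a tiny stand-in for GEN's output set

# Adult-like ranking: MAX >> *COMPLEX, so the faithful candidate wins.
print(eval_optimal("stop", gen_candidates, [max_io, star_complex]))   # stop

# Child-like ranking: *COMPLEX >> MAX, so a cluster-reduced candidate wins.
# ("top" and "sop" tie here; a fuller grammar would use IDENT-type constraints.)
print(eval_optimal("stop", gen_candidates, [star_complex, max_io]))   # top
```

Reversing the ranking is all it takes to turn the faithful adult winner into the cluster-reduced child winner, which is exactly the move OT uses to explain error patterns, as the next section describes.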
Speech Sound Disorders in Optimality Theory
OT proposes that children's speech sound disorders are due to their phonological grammar ranking the constraints differently than the ranking in the adult grammar. A constraint ranked higher in a child's phonological grammar than in the adult grammar prevents correct production of certain sounds. A few examples, restated in the sketch after this list, will illustrate the OT explanations of specific speech sound disorders. It should be noted that in all examples, a faithfulness constraint is ranked lower in the child's grammar, hence violated, thus causing errors (Barlow, 2001):
Fronting errors are due to *DORSAL, a constraint that prevents velar outputs.
Final consonant deletions are due to *CODA, which bans final consonants.
Stopping errors (stops substituted for fricatives) are due to *FRICATIVES, which prohibits the child from producing fricatives.
Cluster reductions are due to *COMPLEX, which instructs the child not to produce sounds in clusters.
Epenthesis is due to a failure to obey DEP, a faithfulness constraint that tells the child not to insert an unnecessary vowel into words [see [Table 2]].
Many other kinds of errors are due to a violation of IDENT-FEATURE, a faithfulness constraint that instructs children to not change phoneme features of inputs in their outputs.
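As promised, the sketch below merely restates the list as a lookup table, making explicit the recurring form of the OT account (a markedness constraint outranking a faithfulness constraint in the child's grammar). The parenthetical glosses are mine:

```python
# The OT accounts of common error patterns, restated as data. Each entry pairs
# a clinical error pattern with the constraint interaction said to cause it.
# This is a restatement of the list above, not an analysis tool.

OT_ACCOUNTS = {
    "fronting":                 "*DORSAL outranks faithfulness (velar outputs banned)",
    "final consonant deletion": "*CODA outranks MAX (final consonants banned)",
    "stopping":                 "*FRICATIVES outranks faithfulness (fricatives banned)",
    "cluster reduction":        "*COMPLEX outranks MAX (clusters banned)",
    "epenthesis":               "DEP ranked too low (vowel insertion tolerated)",
    "feature-change errors":    "IDENT-FEATURE ranked too low (feature changes tolerated)",
}

for pattern, account in OT_ACCOUNTS.items():
    print(f"{pattern}: {account}")
```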
Evaluation of Optimality Theory
OT's basic concepts (naturalness and markedness, innate universal constraints and their varied rankings across languages, and the hypothetical GEN and EVAL) are all questionable because they lack empirical support. OT's terms are imprecise and are used with multiple meanings. Haspelmath (2006) has described 12 senses in which the term markedness is used in linguistics. Such multiple meanings and imprecise definitions make most of OT's terms ineffective in any theory (Haspelmath, 2006; Hume, 2011). Consequently, OT fails to explain the different patterns of speech production and variations across languages.
Even if naturalness and markedness are defined precisely, something common is not necessarily natural, and what is natural is not always simple. Conversely, something uncommon is not inherently unnatural, and something unnatural need not be essentially complex. Because both naturalness and markedness are dependent on how commonly sounds and their sequences appear across languages, the well-established statistical distribution of frequency is sufficient to describe them. There is no need for pseudotechnical terms like naturalness and markedness. As Bernhardt and Stemberger (1998) have pointed out, naturalness and markedness cannot be independently verified and they are circular: sound sequences are natural because they are common, and they are common because they are natural. Haspelmath (2006), saying that "linguists can dispense with the term 'markedness' and many of the concepts that it has been used to express" (p. 63), suggested that markedness be replaced with phonetic difficulty. Most phoneticians, too, believe that the relative difficulty in mastering speech sounds is a function of the degree of articulatory effort, acoustic complexity, and the extent of perceptual clarity (Lowe, 1994). There is still a need to develop procedures to measure the degree of articulatory (phonetic) difficulty, but experimental phonetics, not phonology, is likely to make progress on that front.
An additional questionable assumption of naturalness and markedness, themselves of dubious heuristic value, is that frequently occurring sounds and their sequences are innate and do not need environmental assistance to learn. An equally questionable assumption is that less frequently occurring sounds have no genetic basis and hence are entirely environmentally induced. Specific speech sounds may occur more or less frequently, but all may be susceptible to genetic and environmental influences. Treatment research in speech sound disorders has amply documented that all sounds and sequences, regardless of variations in their frequency, are eminently teachable and learnable (Peña-Brooks and Hegde, 2022).
Although optimality theorists claim to have dispensed with phonological rules, their constraints are no different than rules. It does not matter whether such linguistic injunctions as no deletions, don’t change continuant, no clusters, no final consonants are constraints or rules. They might as well be described as phonologists’ orders.
OT has not generated any speech sound treatment procedures. OT's effect on clinical practice has been on the selection of treatment targets. Traditionally, easier sounds (sounds that the child can imitate) and those that are typically acquired earlier have been the initial treatment targets. These are the unmarked sounds in OT. Several treatment research studies have shown that teaching complex, less common, or more difficult marked sounds before treating unmarked sound errors results in a system-wide generalization (Barlow, 2001; Barlow and Gierut, 1999; Bernhardt and Stemberger, 2011; Bernhardt and Stoel-Gammon, 1994; Storkel, 2018). Whether this insight is indeed due to OT is uncertain. There is nothing in OT that suggests that complex sounds should be taught before simpler sounds. That the teaching of complex sounds will result in system-wide generalization is not a compelling deduction from OT. Even without any theoretical push, experimentation with different treatment targets is an established treatment research strategy.
Finally, GEN and EVAL are invented mechanisms. OT's assumption that these two mechanisms work in real time is improbable. It is not credible that children's GEN suggests an infinite variety of output possibilities and their EVAL examines all of them and selects one for output (production) in real time. The verbal response-reaction chains with short reaction times observed in real-world conversations render even the super-computing GEN and EVAL theoretical fictions.
General Evaluation of Phonological Theories
Phonology is a taxonomic and theoretical discipline, whereas SLP is an empirical and experimental discipline. SLP shares a wider knowledge base with phonetics than with phonology. SLPs and experimental phoneticians (Beddor, 2015; Ohala, 1995) are committed to empirical methods of investigation. The substance of phonology is rationalist and nativist theoretical speculation. Much of the difficulty with phonological theories stems from a few overarching linguistic concepts: the universality of phonological grammar, children's innate knowledge of that grammar, and children's rule-following. Consequently, the empirical validity of phonological theories is limited.
Universality
Generative language and phonological theorists entertain a grand ambition of finding mental, universal, eternal, and innate entities such as rules, deep structures, constraints, processes, representations, knowledge, and so forth. As soon as a "universal" rule or constraint is announced to the world, someone somewhere finds not one but plenty of embarrassing exceptions to it. For example, a natural phonology constraint is that there shall be word-final consonants. But several languages of the world, including Spanish and Vietnamese, have few or no word-final consonants. As noted before, a constraint of the OT is that vowels must not be nasal. French and several Asian languages, among many others, have nasal vowels. Because there are multiple exceptions to every claim of language universals, such claims are essentially false. Even within a language, such as English or Kannada, regional dialectal variations are so deep and wide that any theory of innate, invariable, universal language structures becomes untenable.
Why do phonologists propose such rules and insist that they are universal against all odds? Based on limited observations of their own language, American English linguists and phonologists of the generative school tend to propose sweepingly broad theories that they insist are both universal and innate. A rule or a constraint that holds good for mainstream American English must be true of all languages. Chomsky (1957) started this trend, and it has gotten stronger and spread wider by the decade. Ironically, aspects of the speech sound patterns and grammar of mainstream American English do not apply even to several dialects of American English, including Southern, Appalachian, and Black English. Anyone who has read Mark Twain's Huckleberry Finn would be skeptical of language universals. And yet phonological and grammar rules derived mostly from American mainstream English are claimed to be universal. The world's languages, however, being highly variable, and culturally, socially, and locally shaped, seem defiant of the English phonologists' universal rules. For the generative linguists, language variations are a problem, a nuisance, and noise in the signal (Pisoni, 1997) that interferes with the task of developing theories of universal uniformity.
In the face of mounting evidence that what is most universal about the languages of the world is their diversity, not uniformity, Chomsky doubled down, asserting that "there is only one human language" (1995, p. 131). It must be the American mainstream English. This assertion is so egregious that it deserves no comment. Most linguists of the non-generative school consider universal grammar either dead (Tomasello, 2009), a myth (Christiansen and Chater, 2009; Dąbrowska, 2015; Evans and Levinson, 2009), or useless at best (Ambridge et al., 2014).
In science, faulty theories are due to hasty and incomplete observations. In this paper, I will not go into possible geopolitical factors or racism. (See International Journal of Bilingual Education and Bilingualism, 2020, 23, 7, a special issue on linguistic racism; also see Frawley, 2007 and Kubota, 2020 on epistemological racism.)
Most scientists abandon their premature theories when replicated and convincing contrary evidence emerges. But phonologists have a different approach. When certain universal rules turned out to be untrue ("violated") in several languages, phonologists did not abandon their invalid rules. Instead, they shielded themselves by creating a new rule: universal rules are violable. Rules are universal and no doubt apply to all languages, but they may not apply to specific languages, in which case speakers have permission to violate them. This is a fine exemplar of an oxymoron. Linguistic universality is unassailable; no amount of contrary evidence can defeat it. This is like a government making a rule that you cannot steal from others, and upon finding out that people are stealing from each other, issuing a new rule: you cannot steal from others is a violable rule; you may steal from others.
The most compelling empirical reason for rejecting universal and innate rules is language diversity. Such rules, if they were truly operative, could only create uniform languages. It is true that languages share many common elements, but it is also true that languages differ in not so trivial ways. Language is a human activity, and other human activities, too, share common features across people who differ in many significant ways. Instead of an excessive concern with universality, a serious study of language variability and change across time would give theorists pause (Cousse and von Mengden, 2014; Crowley and Bowerman, 2010). It is only a study of the diversity of languages that can detect empirically valid, observable, and measurable variables that are indeed more or less common across languages.
Innate Phonological Knowledge
That children have linguistic knowledge that they follow in learning to produce speech sounds is a fundamental assumption stemming from Chomsky's (1957) universal grammar hypothesis. That assumption permeates phonology. Hayes (2004) described three kinds of phonological knowledge speakers have. First, speakers have a knowledge of phonological contrasts. English speakers, for example, know that /p/ and /b/ contrast in the voicing feature. But in the Korean language, /p/ and /b/ are allophonic variants depending on the context; there is no voicing feature contrast. Therefore, Korean speakers would not have the knowledge of the /p/ and /b/ contrast. The Korean grammar would set a phonological default that /p/ and /b/ are the same (Hayes, 2004). But why would Koreans need a default rule that /p/ and /b/ are the same? Is it because English makes a distinction between /p/ and /b/? The assumption seems to be that languages that do not conform to English phonological rules need default rules that inform the speakers that they need not worry about English rules. Why would Korean speakers be concerned about an English contrast that does not exist in their language? Speakers do not produce someone else's speech sounds. English distinctions are irrelevant to Korean speakers; they do not need a default rule. Such rules can only be described as linguistic Anglocentrism.
Second, speakers have a knowledge of legal structures (Hayes, 2004). Phonotactics (phoneme sequences) found in a language are legal; sequences not found in a language are illegal. In English, for example, /bl/ sequence is legal but a /bn/ is illegal. In optimality theory, a higher-ranked faithfulness constraint protects what is allowed in a language and a lower-ranked markedness constraint prohibits what is not allowed. Phonotactics are regularities found in sequenced speech sounds. But in phonological theories, phonotactic regularities are elevated to a judicial/extrajudicial status.
Third, speakers have a knowledge of alternate patterns of pronunciation of phonemes that depend on the context. For instance, the plural morpheme /s/ may be produced as /z/ when it follows a voiced sound ([bægz] as in bags) but as /s/ when it follows a voiceless consonant ([hæts] as in hats). Hayes (2004) claims that 8- to 10-month-old infants have knowledge of such alternate pronunciation rules. Hayes also claims that infants have the knowledge of OT's constraints. These are extraordinary claims about infants' knowledge for which Hayes offers no evidence.
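The alternation itself is a plain descriptive regularity. A small Python sketch (mine; the simplified sound classes and the /ɪz/ case after sibilants are my additions for completeness) shows that stating the pattern requires no appeal to innate knowledge:

```python
# A sketch of the context-dependent alternation described above: the English
# plural surfaces as /z/ after voiced sounds and /s/ after voiceless ones.
# This restates the descriptive pattern; it implies nothing about innateness.

VOICELESS = set("ptkf")   # simplified: a few voiceless consonants
SIBILANTS = set("szj")    # simplified sibilant set

def plural_allomorph(stem: str) -> str:
    """Pick the plural form from the stem's final sound (toy orthographic proxy)."""
    final = stem[-1]
    if final in SIBILANTS:
        return "iz"        # e.g., "bus" -> [bʌsɪz]
    if final in VOICELESS:
        return "s"         # e.g., "hat" -> [hæts]
    return "z"             # e.g., "bag" -> [bægz]

print("bag ->", "bag" + plural_allomorph("bag"))  # bagz
print("hat ->", "hat" + plural_allomorph("hat"))  # hats
```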
The child speech development data do not support Hayes’s (2004) claims. Typically learning children do delete phonemes in the initial stages of speech sound learning; they do not seem to have the knowledge of OT’s MAX constraint. Sooner or later, they begin to produce the sounds correctly. For the phonologist, the observation that children do not hear certain sound sequences in their language is negative evidence. There is a rule for that, too: children learning their sound sequences must make no use of negative evidence (Hayes, 2004). It is not clear how a child (or a scientist) can discard nonexistent negative evidence.
It is ridiculous to claim that children's speech productions follow some arbitrary theoretical rules that phonologists have invented. Such rules are summary statements of observed speech patterns in children that reflect the speech practices of their immediate verbal community. No child in the world has ever heard of linguistic rules or constraints. There are no such rules or constraints imposed on speech or a language. Phonologists, not children, derived those rules from patterns of speech and language production. Rules did not precede speech; speech preceded the rules that phonologists formulated. These rules have nothing to do with children's speech sound learning or their knowledge.
Some phonologists claim that children themselves rank the OT's phonological constraints. Children are little linguists. For instance, Gnanadesikan (2004), who analyzed her two-year-old daughter Gitanjali's (G) speech sound productions, claimed that her daughter's phonology differed from that of the target English because "G still ranks certain markedness constraints above certain faithfulness constraints" (p. 74). Presumably, the two-year-old child did what her mother the linguist did. Gnanadesikan found that simpler, easy-to-produce, frequently occurring sounds and syllables (unmarked) were dominant in her daughter's speech samples. Compelled to explain this puzzling phenomenon, Gnanadesikan theorized that her two-year-old produced mostly simpler sounds because she had ranked the markedness constraint *COMPLEX (do not produce complex structures) above the faithfulness constraint (correct production of complex sounds). If only G had ranked complex productions higher than simple productions, the two-year-old child's speech would have been complex enough to match the adult model (input). Gnanadesikan did not specify why her child did not do that. The problem, then, lies not in a young child's limited articulatory skills, but in the child's mixed-up constraint rankings. This is but one instance of hypothetical mechanisms displacing empirical reality.
Grammar is an autonomous agent that speaks in OT. Non-phonologists assume that people speak. But phonologists replace speakers with grammars. Who produces syllables? Apparently not children or adults. According to Gnanadesikan (2004), “. . . grammar produces syllables. . .” (p. 74).
OT's constraints work both ways and shield the theorists from conflicting data. As noted, they explain children's simpler and less accurate productions in terms of their failure to promote a faithfulness constraint above *COMPLEX (no complex productions). However, if a child unexpectedly produced complex utterances, which happens fairly frequently, it is because the child's phonological grammar allowed the marked (complex) utterances, so the child produced them (Gnanadesikan, 2004). The presence or absence of complex productions is easily explained: the child's grammar allowed it or prohibited it. Observed were the speech productions; unobserved were the grammars.
There are no explanations in OT. The theorists will have to explain why there are constraints and why children obey constraints such as the one that bans complex utterances. They will have to explain why one-year-olds do not put the faithfulness constraint at the top of the list, so they can get ahead of themselves. Because there is no independent evidence for the existence of constraints or the child's ranking of them, the theory is no more explanatory than saying that children's speech productions are simpler than those of adults.
Speech sound production learning does not require a knowledge of the sound systems or phonotactic rules. Children do not know and need not know that speech sound features are stacked up one over the other like floors of a multistory building or laid out as a string of flowers in front of them. They do not know OT’s constraints or their rankings. To produce words without voicing the initial /p/ and to produce words with voicing the initial /b/, no child needs to know the voicing feature of phonemes. Children just have to hear adults produce words.
There is no empirical basis to claim that knowledge of skills that experts develop is necessary to learn and execute those skills. The claim that children cannot learn to produce their speech sounds without phonological knowledge is analogous to the patently false claim that a person cannot learn to ride a bicycle without the knowledge of the physical laws of motion. Ohala (2005), an experimental phonetician, put it this way: “Even a rock obeys the laws of physics without having to know them” (p. 38). We eat, walk, see, and hear without consciously or unconsciously knowing about the physical constraints of those actions, Ohala pointed out. Analysis of speech samples makes it clear that speech sounds are patterned, as are other behaviors. However, knowledge of patterns is unnecessary to learn behaviors that form patterns; after learning patterned behaviors, a knowledge of their patterns is unnecessary to maintain the behaviors. Other than the language experts, no speaker can describe the various phonological rules and constraints. Phonologists counter this criticism by saying that speakers have unconscious knowledge, a contradiction.
Phonological Rule Following
A statement often repeated without evidence is that language is rule-governed. Empirical linguistic theories as well as behavioral analysis reject the view that language learning involves rule-following. Rules are unnecessary to learn speech sound productions. English-speaking children do not struggle to produce the /mb/ sequence in word-initial positions not because the children are told that the sequence is "banned" in English, but simply because the children have not heard that sequence. Children and their caregivers know nothing about constraints that ban speech sounds or their sequences. Children are unaware of legal and illegal sound combinations.
Rule-following behaviors may be explicitly taught, however. Children and adults may be taught to follow an explicitly stated rule by reinforcing the actions the rule specifies. Children may be taught not to touch hot surfaces. Stop the car for the red light is a rule that drivers are taught. Speakers of foreign languages may learn grammar rules, follow them, and forget them when they acquire native-like second language skills. A rule-governed learner can usually state the rule. Only the behaviors explicitly taught to follow stated rules are rule-governed. Childhood speech and language learning in the natural environment is not rule-governed. Learning the more advanced verbal skills of writing and speaking under explicit instruction is likely rule-governed.
Rule-governed behaviors are a behavioral learning phenomenon, but rule extraction is a scientific-analytic activity. Scientists in all disciplines extract abstract laws or rules from patterns they observe in the phenomenon they study. Phonologists, too, extract rules from patterns of speech sound production in children and adults. In all sciences, extracted rules are succinct descriptions of behavioral patterns, but they are not explanations. As summary statements, phonological patterns are useful, but they do not suggest the child is following rules. Rules that are extracted from patterns of behaviors cannot be driven back to the child’s mind—an undefinable and mysterious territory.
Empirical Invalidity of Phonological Theories
SLPs should be most concerned with the dubious empirical validity of phonological theories. The inductive method, in which data precede theory, has a better chance of avoiding premature theories, but theoretical linguists are not known for collecting empirical speech and language data in naturalistic social communicative contexts. In fact, the generative linguists have consistently denied the importance of learning a language in social contexts. Even more astonishingly, generative grammarians have disavowed any interest in what speakers actually do. Chomsky and Halle (1968, p. 3) had warned their readers that their theory of phonology should not be confused with "what the speaker and hearer actually does." There is no danger of such confusion. It is clear to empirical scientists that the generative theory has nothing to do with real speakers and their speech and language behaviors. Paradoxically, confused are the phonologists themselves. Most phonologists, while building theories based on self-generated speech corpora that are removed several steps from natural speech, nonetheless "treat their corpora as accounts (albeit partial) of things that speaker-hearers do" (Docherty and Foulkes, 2000, p. 115). Ohala (1995, p. 751) had cautioned that "without having an 'anchor' in the real world, phonology risks having its claim apply only in an imaginary universe of little interest to those outside the narrowly circumscribed world of autonomous phonology."
Empirical validity sinks to null when cognitivism, mentalism, rationalism, and nativism are rolled into a deductive theory. It is difficult to see parallels between what the theories say speakers do and what the speakers seem to do in social contexts. It is hard to believe that mental mechanical devices tell children "no final consonants," "no clusters," "no fricatives," and so forth. It appears that the function of such constraints is to induce speech sound disorders. It is empirically improbable that children produce all the phonemes of a word just because the hypothetical constraint (MAX) whispers do not delete any phonemes. It is illogical to assume that children follow the constraint "no final consonants" when their language is full of them. It is not clear why children's grammar says "no fricatives" when they hear fricatives all the time in their environment. To produce their nasalized vowels, Kannada- or Hindi-speaking children do not need the English phonologists' permission to violate the dictum no nasalized vowels. Those children do not feel that they are violating any rule because they know nothing about such rules.
There is no empirical evidence to support the claim that when children are stimulated to say something, their GEN gets busy generating an infinite variety of output options, some clearly wrong [see [Figure 1]]. A child who is stimulated to say "Mom" in a social communicative context does not think of infinite output possibilities, including totally unrelated and irrelevant utterances (e.g., "bom" or "Tom"). Why would GEN suggest anything other than the correct response? Perhaps to justify the existence of the other hypothetical mechanism, EVAL, which would have nothing to do if GEN suggested only the correct response.
The hypothesis that child or adult speakers hierarchically organize the different articulatory-physiologic, acoustic, prosodic, and other aspects of speech in tiers is not credible. It is difficult if not impossible to demonstrate that speakers first move to the tier of segments and then to the tier of tonal variations. When someone says a phrase with certain tonal (prosodic) features, segments and tonal variations are organized neither into downstairs-upstairs tiers nor as linear strings on a flat surface. Speech consists of integrated movements that advance rapidly. At the instant of production, properties of speech are not separate components of tiers or strings.
When phonologists find that some aspect of speech production is missing in their theory, they do not move to collect data to fill the gaps. Instead, they draw a new graph, a box, a figure, a linguistic tree, or some such graphic design, insert the name of the missing phenomenon into the design, and consider the job done. Noticing that the classic Chomsky and Halle (1968) theory did not address the tonal variations of speech, Goldsmith (1990) inserted a tier of tones and placed it above the sound segments in his new diagram. There is nothing empirical about such drawing-table theory building.
Generative phonological theorists set aside a staggering number of empirical variables that are known to be important in speech-language learning. Such theorists make no room for the environmental stimulus conditions that trigger speech and language responses. A standard linguistic criticism of the language learning hypothesis was that nobody teaches the rules of phonology and linguistics to young children, and hence, speech and language could not be learned. Several significant lines of investigation, briefly reviewed in the following paragraphs, have contradicted this often-repeated statement.
That most parents do not explicitly teach the rules of phonology and grammar does not mean that they do not teach speech and language skills. These skills are learned implicitly, not by explicit teaching of linguistic rules. Contingencies of verbal interactions, not explicit rule-teaching, teach speech-language skills (Hegde, 2010). A line of investigation on the implicit learning of speech and language has reliably confirmed this view. Implicit learning is learning in natural contexts in which children interact with their caregivers, other family members, peers in schools, and in larger social contexts (see Arnon, 2019; Ellis, 2015; Seger, 1994; Rebuschat, 2015 for overviews of studies).
Another vast body of research on infant-directed speech (IDS) has confirmed that caregiver-infant social interactions help teach speech and language skills in an implicit manner (Seger, 1994). Parents do offer natural consequences for infant speech attempts. There is a rich modeling-imitation-modeling loop that reinforces infants' speech. Multiple replicated studies have demonstrated that IDS facilitates both speech and language learning. Consistent with the behavioral view of language learning, caregiver-infant verbal interactions are contingent on each other, reinforce each other's behavior, and are consistent with the methods of teaching verbal skills to young children (Bornstein et al., 2015; Eaves et al., 2016; Goldstein and Schwade, 2008; Masataka, 1993; Miller, 2014; Ramirez-Esparza et al., 2017; Seger, 1994; Thiessen et al., 2005). The empty mechanical operation of input that the phonologists propose does not capture the complexity or the significance of IDS, which promotes children's learning of speech sounds and other elements of language.
For lack of space, I will only briefly mention a few other lines of investigation that have affirmed the role of social interactions in teaching speech-language skills to children. A series of studies on statistical learning has shown that the frequency with which children hear speech sounds and their sequences in words is significantly correlated with speech sound mastery. Generally, children master sooner the sounds they hear more frequently than those they hear less frequently (see Arnon, 2019; Frost and Armstrong, 2019; Saffran, 2020 for reviews of research). Similarly, sociolinguistic studies have emphasized the importance of social interactions, not innate universal grammars, in learning speech and language skills. Sociolinguists also have documented significant diversity, not universality, across languages (Foulkes and Docherty, 2006). Finally, exemplar- and usage-based phonological theories have postulated that children learn speech and language from the exemplars they hear frequently. Exemplars are utterances that adults produce and children hear; children who repeatedly hear exemplars learn to produce them. Usage-based phonology, much more empirical than generative phonology, suggests that speech sound learning is made possible by the sounds’ frequent use (production) in social contexts (Coussé and von Mengden, 2014; Guy, 2014; Tomasello, 2003). These theories collectively suggest that the frequency with which children hear speech sounds in words, the social interactions children experience, and the exemplars caregivers provide all support speech sound learning (see Peña-Brooks and Hegde, 2022 for a more detailed review of phonological theories). These suggestions are consistent with the behavioral analysis of speech-language learning (Hegde, 2010).
Variables related to ethnocultural conditions and the child’s family environment do not figure in phonological theories. The effects on speech-language development of parental education (especially the educational level of mothers), the socioeconomic conditions of the family, child poverty, and abuse and abandonment have been well documented (Peña-Brooks and Hegde, 2022), but such variables play no role in theories that emphasize innate universal rules for speech and language learning.
Several phonological theorists ignore the influence of well-established phonetic factors in speech sound production (Behrman, 2023; Raphael et al., 2012). An exception is Stampe’s (1979) natural phonology, which takes phonetic and physiological factors into consideration. The OT adherents are especially vulnerable to the charge that they woefully ignore the workings of the speech production mechanism, its limitations in children, and phonetic sequencing factors. Aerodynamic factors, the physical and acoustic features of speech, and physiologic, motoric, and neuromuscular variables also play no role in deductive and generative phonological theories.
A pervasive mentalism pushes generative phonological theories, especially the OT, outside the boundaries of the empirical sciences. They make untestable assertions. In generative grammar theories, every language phenomenon of significance happens in an unobservable mental underground. Observable speech-language behaviors are superficial products of such hypothetical mental mechanisms as deep or cognitive structures, universal grammars, GEN and EVAL, and input representations. As Ohala put it, “Explanation is, after all, reducing the unknown to the known, not to further unknown, uncertain, or unprovable entities” (Ohala, 1996, p. 262), but phonological theories complicate and obscure speech and language. It is therefore a puzzle why SLPs, who need to observe, measure, experiment, and stay close to data in their theories, are attracted to an obscure mentalism that banishes observable phenomena into unobservable territory. It is better for the clinician to heed Ohala’s conclusion: “there may be no need for features, underspecification, autosegmental notation, feature geometry, or similar speculative devices” (Ohala, 2005, p. 35). Imaginary mechanisms offer no clinical advantage and are most likely a hindrance to developing natural science accounts of speech and language.
Financial support and sponsorship
Nil.
Conflicts of interest
I have no known conflict of interest to disclose.
References
1. Ambridge, B., Pine, J. M., & Lieven, E. V. M. (2014). Child language acquisition: Why universal grammar doesn’t help. Language, 90(3), e56–e90. https://doi.org/10.1353/lan.2014.0051.
2. Arnon, I. (2019). Statistical learning, implicit learning, and first language acquisition: A critical evaluation of two developmental predictions. Topics in Cognitive Science, 11, 504–519. https://doi.org/10.1111/tops.12428.
3. Ball, M. J., Müller, N., & Rutter, B. (2010). Phonology for communication disorders. Psychology Press.
4. Barlow, J. A. (2001). Case study: Optimality theory and the assessment and treatment of phonological disorders. Language, Speech, and Hearing Services in Schools, 32, 242–256. https://doi.org/10.1044/0161-1461(2001/022).
5. Barlow, J. A., & Gierut, J. A. (1999). Optimality theory in phonological acquisition. Journal of Speech, Language, and Hearing Research, 42, 1482–1498.
6. Beddor, P. S. (2015). Experimental phonetics. In Heine, B., & Narrog, H. (Eds.), The Oxford handbook of linguistic analysis (2nd ed.). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199677078.013.0048.
7. Behrman, A. (2023). Speech and voice science (4th ed.). Plural Publishing.
8. Bernhardt, B., & Stemberger, J. P. (1998). Handbook of phonological development: From the perspective of constraint-based nonlinear phonology. Academic Press.
9. Bernhardt, B., & Stemberger, J. P. (2011). Constraint-based nonlinear phonological theories: Applications and implications. In Ball, M. J., Perkins, M. R., & Howard, S. (Eds.), The handbook of clinical linguistics (pp. 423–438). Wiley-Blackwell.
10. Bernhardt, B., & Stoel-Gammon, C. (1994). Nonlinear phonology: Introduction and clinical application. Journal of Speech and Hearing Research, 37, 123–143. https://doi.org/10.1044/jshr.3701.123.
11. Bornstein, M. H., Putnick, D. L., & Cote, L. R. (2015). Mother-infant contingent vocalizations in 11 countries. Psychological Science, 26(8), 1272–1284. https://doi.org/10.1177/0956797615586796.
12. Cardona, G. (1998). Panini: A survey of research. Motilal Banarsidass.
13. Chomsky, N. (1957). Syntactic structures. Mouton.
14. Chomsky, N. (1995). The minimalist program. The MIT Press.
15. Chomsky, N., & Halle, M. (1968). The sound pattern of English. Harper & Row.
16. Christiansen, M. H., & Chater, N. (2009). The myth of language universals and the myth of universal grammar. Behavioral and Brain Sciences, 32(5), 452–453.
17. Coussé, E., & von Mengden, F. (Eds.). (2014). Usage-based approaches to language change. John Benjamins Publishing Company.
18. Crowley, T., & Bowern, C. (2010). An introduction to historical linguistics (4th ed.). Oxford University Press.
19. Dąbrowska, E. (2015). What exactly is universal grammar, and has anyone seen it? Frontiers in Psychology, 6, 852. https://doi.org/10.3389/fpsyg.2015.00852.
20. Dinnsen, D. A., & Gierut, J. A. (Eds.). (2008). Optimality theory, phonological acquisition and disorders. Equinox.
21. Docherty, G., & Foulkes, P. (2000). Speaker, speech, and knowledge of speech. In Burton-Roberts, N., Carr, P., & Docherty, G. (Eds.), Phonological knowledge (pp. 105–129). Oxford University Press.
22. Donegan, P. J., & Stampe, D. (1979). The study of natural phonology. In Dinnsen, D. A. (Ed.), Current approaches to phonological theory (pp. 126–143). Indiana University Press.
23. Eaves, B. S., Feldman, N. H., Griffiths, T. L., & Shafto, P. (2016). Infant-directed speech is consistent with teaching. Psychological Review, 123(6), 758–771. https://doi.org/10.1037/rev0000031.
24. Ellis, N. C. (2015). Implicit and explicit SLA and their interface. In Sanz, C., & Leow, R. P. (Eds.), Implicit and explicit language learning (pp. 35–47). Georgetown University Press.
25. Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32, 429–492. https://doi.org/10.1017/S0140525X0999094X.
26. Foulkes, P., & Docherty, G. (2006). The social life of phonetics and phonology. Journal of Phonetics, 34, 409–438. https://doi.org/10.1016/j.wocn.2005.08.002.
27. Frawley, D. (2007). The hidden racism of linguistics authors. World Affairs: The Journal of International Issues, 11(3), 142–152.
28. Frost, R., & Armstrong, B. C. (2019). Statistical learning research: A critical review and possible new directions. Psychological Bulletin, 145(12), 1128–1153. https://doi.org/10.1037/bul0000210.
29. Gnanadesikan, A. (2004). Markedness and faithfulness constraints in child phonology. In Kager, R., Pater, J., & Zonneveld, W. (Eds.), Constraints in phonological acquisition (pp. 73–108). Cambridge University Press.
30. Goldsmith, J. A. (1979). Autosegmental phonology. Garland Press.
31. Goldsmith, J. A. (1990). Autosegmental and metrical phonology. Blackwell.
32. Goldstein, M. H., & Schwade, J. A. (2008). Social feedback to infants’ babbling facilitates rapid phonological learning. Psychological Science, 19(5), 515–523. https://doi.org/10.1111/j.1467-9280.2008.02117.x.
33. Guy, G. R. (2014). Linking usage and grammar: Generative phonology, exemplar theory, and variable rules. Lingua, 142, 57–65. https://doi.org/10.1016/j.lingua.2012.07.007.
34. Haspelmath, M. (2006). Against markedness (and what to replace it with). Journal of Linguistics, 42, 25–70. https://doi.org/10.1017/S0022226705003683.
35. Hayes, B. (2004). Phonological acquisition in optimality theory: The early stages. In Kager, R., Pater, J., & Zonneveld, W. (Eds.), Constraints in phonological acquisition (pp. 158–203). Cambridge University Press.
36. Hegde, M. N. (2010). Language and grammar: A behavioral analysis. Journal of Speech-Language Pathology and Applied Behavior Analysis, 5, 90–113. https://doi.org/10.1037/h0100268.
37. Hume, E. (2011). Markedness. In Van Oostendorp, M., Ewen, C., Hume, E., & Rice, K. (Eds.), The Blackwell companion to phonology (Vol. 1, pp. 79–106). Wiley.
38. Jakobson, R. (1968). Child language, aphasia and phonological universals (A. R. Keiler, Trans.). Mouton. (Original work published 1941.)
39. Kubota, R. (2020). Confronting epistemological racism, decolonizing scholarly knowledge: Race and gender in applied linguistics. Applied Linguistics, 41(5), 712–732. https://doi.org/10.1093/applin/amz033.
40. Lowe, R. J. (1994). Phonology: Assessment and intervention applications in speech pathology. Williams & Wilkins.
41. Masataka, N. (1993). Effects of contingent and noncontingent maternal stimulation on the vocal behaviour of three- to four-month-old Japanese infants. Journal of Child Language, 20(2), 303–312. https://doi.org/10.1017/S0305000900008291.
42. McCarthy, J., & Prince, A. (1994). The emergence of the unmarked: Optimality in prosodic morphology. Proceedings of the North East Linguistic Society, 24, 333–379.
43. Miccio, A., & Scarpino, S. E. (2011). Phonological analysis, phonological processes. In Ball, M. J., Perkins, M. R., Müller, N., & Howard, S. (Eds.), The handbook of clinical linguistics (pp. 414–422). Wiley-Blackwell.
44. Miller, J. (2014). Effects of familiar contingencies on infants’ vocal behavior in new communicative contexts. Developmental Psychobiology, 56(7), 1518–1527. https://doi.org/10.1002/dev.21245.
45. Ohala, J. J. (1995). Experimental phonology. In Goldsmith, J. A. (Ed.), The handbook of phonological theory (pp. 713–722). Blackwell.
46. Ohala, J. J. (1996). Phonetics of sound change. In Jones, C. (Ed.), Historical linguistics: Problems and prospects (pp. 237–278). Longman.
47. Ohala, J. J. (2005). Phonetic explanations for sound patterns: Implications for grammars of competence. In Hardcastle, W. J., & Beck, J. M. (Eds.), A figure of speech: A festschrift for John Laver (pp. 23–38). Erlbaum.
48. Peña-Brooks, A., & Hegde, M. N. (2015). Assessment and treatment of speech sound disorders in children (3rd ed.). Pro-Ed.
49. Peña-Brooks, A., & Hegde, M. N. (2022). Assessment and treatment of speech sound disorders in children (4th ed.). Pro-Ed.
50. Pisoni, D. B. (1997). Some thoughts on “normalization” in speech perception. In Johnson, K., & Mullennix, J. W. (Eds.), Talker variability in speech processing (pp. 9–32). Academic Press.
51. Prince, A., & Smolensky, P. (1993). Optimality theory: Constraint interaction in generative grammar (Tech. Rep. No. 2). Rutgers Center for Cognitive Science.
52. Ramirez-Esparza, N., Garcia-Sierra, A., & Kuhl, P. (2017). Look who’s talking now! Parentese speech, social context, and language development across time. Frontiers in Psychology, 8, 1008. https://doi.org/10.3389/fpsyg.2017.01008.
53. Raphael, L. J., Borden, G. J., & Harris, K. S. (2012). Speech science primer (6th ed.). Lippincott Williams & Wilkins.
54. Rebuschat, P. (Ed.). (2015). Implicit and explicit learning of languages. John Benjamins Publishing Company.
55. Reynolds, J. (1990). Abnormal vowel patterns in phonologically disordered children: Some data and a hypothesis. British Journal of Disorders of Communication, 25, 115–148. https://doi.org/10.3109/13682829009011970.
56. Saffran, J. R. (2020). Statistical language learning in infancy. Child Development Perspectives, 14(1), 49–54. https://doi.org/10.1111/cdep.12355.
57. Schwartz, R. G. (1992). Clinical application of recent advances in phonological theory. Language, Speech, and Hearing Services in Schools, 23, 269–276. https://doi.org/10.1044/0161-1461.2303.269.
58. Seger, C. A. (1994). Implicit learning. Psychological Bulletin, 115(2), 163–196. https://doi.org/10.1037/0033-2909.115.2.163.
59. Shukla, S. (2006). Panini. In Brown, K. (Ed.), Encyclopedia of language and linguistics (2nd ed., pp. 153–155). Elsevier.
60. Stampe, D. (1979). A dissertation on natural phonology. Garland.
61. Storkel, H. L. (2018). The complexity approach to phonological treatment: How to select treatment targets. Language, Speech, and Hearing Services in Schools, 49(3), 463–481. https://doi.org/10.1044/2017_LSHSS-17-0082.
62. Tesar, B. (2000). On the role of optimality and strict domination in language learning. In Dekkers, J., van der Leeuw, F., & van de Weijer, J. (Eds.), Optimality theory: Phonology, syntax, and acquisition (pp. 592–620). Oxford University Press.
63. Thiessen, E. D., Hill, E. A., & Saffran, J. R. (2005). Infant-directed speech facilitates word segmentation. Infancy, 7(1), 53–71. https://doi.org/10.1207/s15327078in0701_5.
64. Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Harvard University Press.
65. Tomasello, M. (2009). Universal grammar is dead. Behavioral and Brain Sciences, 32(5), 470–471. https://doi.org/10.1017/S0140525X09990744.
66. Trubetzkoy, N. S. (1969). Principles of phonology (C. A. M. Baltaxe, Trans.). University of California Press. (Original work published 1939.)
67. Wellman, B., Case, I., Mengert, I., & Bradbury, D. (1931). Speech sounds of young children. University of Iowa Studies in Child Welfare, 5(2), 82.
68. Yavas, M. (Ed.). (1994). First and second language phonology. Singular.