THE LANGUAGE VARIATION OF THE LINGUISTIC INVARIANTS

Farida Allahverdieva, PhD, Assistant Professor, English Grammar Department, Azerbaijan University of Languages, Baku, Azerbaijan.

Manuscript History: Received: 03 May 2019; Final Accepted: 05 June 2019; Published: July 2019


ISSN: 2320-5407, Int. J. Adv. Res. 7 (7), 84-91

The ordering can be chosen arbitrarily: its only goal is to enable the logic to simulate a Turing machine, which of course has the input on its tape in some order. That is, one needs an ordering to express a property, but it does not matter which order to use; any order would do. Properties expressed in this fashion are called order-invariant. Since many results on capturing complexity classes require an ordering that is used in an invariant fashion, the notion is of interest. Before studying it for expressive logics like least fixed-point logic, one would want to understand its behaviour in simpler settings, such as first-order logic (FO). Several attempts to do so, however, show that the notion is much harder to deal with than it initially appears.
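The informal description above admits a standard formal statement. The following is a minimal sketch in the usual model-theoretic notation, where σ is the vocabulary without the order relation:

```latex
% A sentence \varphi over \sigma \cup \{<\} is order-invariant on finite
% structures if its truth value never depends on the particular order chosen:
\text{for every finite } \sigma\text{-structure } \mathfrak{A}
\text{ and all linear orders } <_1, <_2 \text{ on its universe:}\quad
(\mathfrak{A}, <_1) \models \varphi \;\Longleftrightarrow\; (\mathfrak{A}, <_2) \models \varphi
```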

Scope of Study
This study draws an analogy with image processing, where it is widely regarded as important for representations of images to be invariant to rotation and scaling. What we want is a representation of sentence meaning that is invariant to diathesis, to other regular syntactic alternations in the assignment of argument structure, and, ideally, even to other meaning-preserving or near-preserving paraphrases.
Existing evaluations of distributional semantic models fall short of measuring this. The first evaluation is a relation classification task, where a semantic model is tested on its ability to recognize whether a pair of sentences both contain a particular semantic relation. The second task is a question answering task, the goal of which is to locate the sentence in a document that contains the answer. Here, the semantic model must match the question, which expresses a proposition with a missing argument, to the answer-bearing sentence, which contains the full proposition. We apply these new evaluation protocols to several recent distributional models, extending several of them to build sentence representations. (6,23)

Research Methodology:-
A number of methods and linguistic studies have shown that all linguistic sentences have their own invariant models.

The Language Variation of the Linguistic Invariants
How we speak is influenced by many things. Firstly, there is learning to speak English itself, where how we pronounce our words is all part of learning how to speak and copying the speech of those around us. If we are born into bilingual families, then we may learn to speak English alongside another language. Then, as we go to school, we learn to read and to write a written equivalent of English speech that has standardised forms of spelling, punctuation and grammar, known as standard English. Depending upon your home background, you may speak standard English with its associated accent, Received Pronunciation (RP). Or, you may speak standard English with a regional accent such as that associated with Birmingham or the Black Country, and you might include a few dialect features different from standard English, such as vocabulary (bostin for good) or grammar (ay for I'm not) in your speech.
As we move across the country we experience not only changing landscape and architecture but also a gradual change in the sounds we hear, in the accents and dialects that relate to the place in which they are spoken and to which they belong. This phenomenon is known as a dialect continuum. The terms accent and dialect are often used interchangeably, although in linguistic terms they refer to two different aspects of language variation.
As a living language, English changes over time and varies according to place and social setting. Some people find this upsetting, and perceive English changing as English degenerating. But, life does not stand still and today's world is characterised by more rapid social and economic change than at any other time in history that in turn has an impact upon uses of English.
Although Standard English vocabulary is described in dictionaries, these have to be updated on a regular basis to take account of changing social patterns, so that vocabulary associated with ways of life that have disappeared can be edited out and new words added in. For example, in the nineteenth century and early twentieth century, there was a range of vocabulary associated with horses as a means of transport that we no longer use, horses having been replaced by trains and cars. Technology has brought with it not only new words, such as computer, but has also added meanings to already existing words. A mouse used to refer to a small mammal, but now also means a piece of equipment that allows you to navigate a computer screen (and if you are reading this, you may have your hand on it). You may find that your grandparents and older people generally use or know dialect features far more than you or your parents do, partly as a reflection of the world in which they were brought up.
Standard English is also described in grammars of English. Linguists once thought that all languages fitted into one general grammatical patterning known as universal grammar. We now know that this is not the case, and that different languages have different grammatical categories. What was once thought of as a singular concept then, grammar, has become pluralised, so we can talk of grammars. The advent of new technologies such as sound recording and the internet, together with the increasing digitisation of data, has allowed linguists to compile data sets of language use known as a corpus, or if more than one, corpora. Modern dictionaries and grammars of English are increasingly based upon language corpora and thus how language is actually used, rather than how a linguist might think it is or should be used. Corpora of spoken English have also made us aware that speech has just as much grammatical structure as writing, and so there are now grammars of spoken English just as there are of written English.
At one time, the variation from Standard English apparent in regional dialects was thought to be idiosyncratic, illogical and thus 'incorrect', and its speakers often thought of as 'unintelligent'. However, linguistic investigation into regional accents and dialects has shown that all varieties of English found in the UK today, as well as other varieties across the world such as General American, are rule-governed. Thus, it is not the case that standard English is linguistically a superior variety of English. Its prestige lies in the social value given to it as the language of education, the law, public administration and so on.
In the middle of the twentieth century, education became universally and freely available in England, and children were required by law to attend school until the age of fourteen, then fifteen, then sixteen from the 1970s, and, since the turn of this century, to remain in education or training until they are adult, at the age of eighteen. As our world has become increasingly reliant upon literacy, so the adult population has become increasingly more literate. Literacy in English has evolved over several hundred years, and reading and writing standard English is taught in school, since it is the variety of English required for educational purposes and other realms of public life. Pupils who come from backgrounds where a regional variety of English (or another language) is spoken at home (and you may be one of them) may find that learning to read and to write influences the way that they speak, so that speech becomes a mixture of standard English and regional variation. How we speak and write, then, is influenced by the context, audience and purpose of the linguistic situation in which we find ourselves. We may speak in a regional variety with family and friends, but at work, or at school or university, accommodate or modify the way we speak to sound more standard.
Since the publication of Noam Chomsky's field-founding Syntactic Structures in 1957, generative grammarians have been formulating and studying the grammars of particular languages to extract from them what is general across languages. The idea is that properties which all languages share will give us some insight into the nature of mind. A widely acknowledged problem to which this work has led is how to reconcile the goal of generalization with language-specific phenomena and the cross-language variation they induce. Good science requires that cross-linguistically valid generalizations be based on accurate, precise and thorough descriptions of particular languages. But such work on any given language increasingly leads us to describe language-specific phenomena: irregular verbs, exceptions to paradigms, lexically conditioned rules, etc. So this work and cross-language generalization seem to pull in opposite directions.
Here we propose an approach in which these two forces are reconciled. Our solution, presented in greater depth in Bare Grammar, is built on the notion of linguistic invariant. On our approach, different languages do have nontrivially different grammars: their grammatical categories are defined internal to the language and may fail to be comparable to ones used for other languages. Their rules, ways of building complex expressions from simpler ones, may also fail to be isomorphic across languages. So languages differ. Nonetheless, certain properties and relations may be invariant in all natural language grammars, as we will see below. And it is to these linguistic invariants that we should look for properties of mind.
Our approach contrasts with that of the most widely adopted linguistic theories, where the dominant idea is that there is only one grammar, the grammars of particular languages being, somehow, special cases. This has led to a mode of description in which grammars of particular languages are given in a notationally uniform way: the grammatical categories of all languages are drawn from a fixed universal set, as are the rules characterizing complex expressions in terms of their components. It has also led to the postulation of a level of unobservable structure ("LF", suggesting "Logical Form"), where structural properties of observable expressions may be changed in important ways. So this allows that structural generalizations which appear to be false on the basis of observable expressions may be true at LF, where structural properties have been modified.

The idea of compositionality has been central to understanding contemporary natural language semantics from an historiographic perspective. The idea is often credited to Frege, although in fact Frege had very little to say about compositionality that had not already been repeated since the time of Aristotle. Our modern notion of compositionality took shape primarily with the work of Tarski, who was actually arguing that a central difference between formal languages and natural languages is that natural language is not compositional. This in turn was "the contention that an important theoretical difference exists between formal and natural languages" that Richard Montague so famously rejected. Compositionality also features prominently in Fodor and Pylyshyn's rejection of early connectionist representations of natural language semantics, which seems to have influenced Mitchell and Lapata as well. Logic-based forms of compositional semantics have long strived for syntactic invariance in meaning representations, which is known as the doctrine of the canonical form. The traditional justification for canonical forms is that they allow easy access to a knowledge base to retrieve some desired information, which amounts to a form of inference. Our work can be seen as an extension of this notion to distributional semantic models with a more general notion of representational similarity and inference. There are many regular alternations that semantic models have tried to account for, such as passive or dative alternations. There are also many lexical paraphrases which can take drastically different syntactic forms. Take the following example from Poon and Domingos, in which the same semantic relation can be expressed by a transitive verb or an attributive prepositional phrase: (6,19)
Turkey borders Iran.
Turkey is next to Iran.
Fundamentally, this method was designed as a demonstration that compositionality in computing phrasal semantic representations does not interfere with the ability of a representation to synthesize non-compositional collocation effects that contribute to the disambiguation of homographs. Here, word-sense disambiguation is implicitly viewed as a very restricted, highly lexicalized case of inference for selecting the appropriate disjunction in the representation of a word's meaning.
Parsing accuracy has been used as a preliminary evaluation of semantic models that produce syntactic structure. However, syntax does not always reflect semantic content, and we are specifically interested in supporting syntactic invariance when doing semantic inference. Also, this type of evaluation is tied to a particular grammar formalism. The existing evaluations that are most similar in spirit to what we propose are paraphrase detection tasks that do not assume a restricted syntactic context. (5,56) While such sentences do support exactly the same inferences, we are also interested in the inferences that can be made from similar sentences that are not paraphrases according to this strict definition, a situation that is more often encountered in end applications. Thus, we adopt a less restricted notion of paraphrases.
Tom is pretty smart. ˈtom ɪz ˈprɪtɪ smɑːt ||
Tom isn't pretty smart. ˈtom ˈɪznt ˈprɪtɪ smɑːt ||
He is far taller than his father. hi ɪz ˈfɑː ˈtɔːlə ðən hɪz ˈfɑːðə ||
He isn't far taller than his father. hi ˈɪznt ˈfɑː ˈtɔːlə ðən hɪz ˈfɑːðə ||
He has long since given up smoking. hi həz ˈloŋ sɪns ˈɡɪvən ˈʌp ˈsməʊkɪŋ ||
He hasn't long since given up smoking. hi ˈhəznt ˈloŋ sɪns ˈɡɪvən ˈʌp ˈsməʊkɪŋ ||

We now describe a simple, general framework for evaluating semantic models. Our framework consists of the following components: a semantic model to be evaluated, pairs of sentences that are considered to have high similarity, and pairs of sentences that are considered to have low similarity.
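The evaluation framework just described can be sketched in a few lines of Python. The `overlap` model and the sentence pairs below are invented placeholders, standing in for a real distributional semantic model and a real annotated dataset:

```python
from itertools import product

def evaluate(model, high_pairs, low_pairs):
    """Pairwise ranking accuracy: how often the model scores a
    high-similarity sentence pair above a low-similarity one."""
    wins = total = 0
    for hp, lp in product(high_pairs, low_pairs):
        total += 1
        if model(*hp) > model(*lp):
            wins += 1
    return wins / total if total else 0.0

# Stand-in model: word-overlap (Jaccard) similarity. A real evaluation
# would plug in a distributional semantic model here instead.
def overlap(s1, s2):
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b)

high = [("Tom is pretty smart", "Tom is quite smart")]
low = [("Tom is pretty smart", "Coconuts grow in the tropics")]
print(evaluate(overlap, high, low))  # 1.0 on this toy data
```

Any callable that maps a pair of sentences to a score can be dropped in as `model`, which is what makes the framework general across the composition operators discussed later.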

Following Ernst, "finite auxiliaries move to Tense." That is, the Modal head moves to Tense when Modal is present, and the Perf head moves to Tense if no Modal is present. In the following graphs, we assume that the adverb adjoins above ModalP, because adjunction to Pred would imply that it is too low to have the propositional reading required in Ernst's account. One obvious difference between the two structures in the graphs is that, in (b), the adverb adjoins above Tense, whereas in (a), it adjoins above Modal. Under Ernst's analysis, this structural difference does not yield a meaning difference. Several sentences expressing the relation Pfizer acquires Rinat Neuroscience are shown in Examples 1 to 3. These sentences illustrate the amount of syntactic and lexical variation that the semantic model must recognize as expressing the same semantic relation. In particular, besides recognizing synonymy or near-synonymy at the lexical level, models must also account for subcategorization differences, extra arguments or adjuncts, and part-of-speech differences due to nominalization. Specifically, we do the following.
1. We define a notion of order-invariant types that extends the notion of types to the order-invariant setting and study its basic properties.
2. We show that, despite order-invariant properties not forming a logic, a logic-based notion of order-invariant types can actually be useful. We provide two applications of it.
First, we provide a proof, from the basic principles, of a result by Courcelle saying that over trees, order-invariant MSO properties are the ones expressible in MSO with counting quantifiers. This was reported in our conference paper but the proof was never published.
Second, we prove an analog of the Feferman-Vaught theorem for order-invariant properties, and use it to extend the list of known classes where order invariance does not increase the expressive power. While not claiming a breakthrough, the goal of this note is to show that standard model-theoretic techniques are applicable in this notoriously difficult area, and perhaps offer a new avenue of attack on a host of unsolved problems related to order-invariance. (2, 32)

Distributional Semantic Models
Semantics is the study of meaning. It investigates questions such as: What is meaning? How come words and sentences have meaning? What is the meaning of words and sentences? How can the meanings of words combine to form the meaning of sentences? Do two people mean the same thing when they utter the word 'cat'? How do we communicate?
As a proficient language user, you don't have to think about meaning much in your everyday life, and you can effortlessly ascertain, for example, that a cat is more like a dog than a coconut. Distributional semantics is a theory of meaning which is computationally implementable and very, very good at modelling what humans do when they make similarity judgements. Here is a typical output for a distributional similarity system asked to quantify the similarity of cats, dogs and coconuts. (4, 56) If we say that meaning comes from use, we can derive a model of meaning from observable uses of language. This is what distributional semantics does, in a somewhat constrained fashion. A typical way to get an approximation of the meaning of words in distributional semantics is to look at their linguistic context in a very large corpus.
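A minimal sketch of such a similarity computation uses cosine similarity over sparse co-occurrence vectors. The counts below are invented purely for illustration, not drawn from any corpus:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v))

# Hypothetical co-occurrence counts (context word -> count); a real
# distributional system derives these from a very large corpus.
cat = {"pet": 10, "fur": 8, "purr": 6, "tree": 1}
dog = {"pet": 12, "fur": 9, "bark": 7, "tree": 2}
coconut = {"tree": 9, "tropical": 8, "milk": 5}

print(cosine(cat, dog))      # high: cats and dogs share contexts
print(cosine(cat, coconut))  # low: little shared context
```

On these toy counts the model reproduces the intuitive judgement that a cat is more like a dog than a coconut, because 'cat' and 'dog' occur in overlapping contexts while 'cat' and 'coconut' barely do.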
The linguistic context of a word is simply the other words that appear next to it. So, for example, the following might be a context for 'coconut': "found throughout the tropic and subtropical area, the ___ is known for its great versatility".

Discussions of transatlantic linguistic differences engage linguists and non-linguists alike. In the past one hundred years in particular, the topic has been taken up in numerous scholarly works, some quite recent. While some large-scale works treating different varieties of English have approached the topic by giving separate descriptions of each variety, the trend in the past decade has been to select several linguistic phenomena and contrast their behaviour in different varieties. Most commonly, variables are examined in terms of Standard British English (of the UK) versus Standard American English (of the US), presumably because "there is more material available in them than in any other variety".
To the varieties of English in the US, the varieties of Canada are sometimes added, grouped together as "North American English" and treated as a unit. Brinton and Arnovick observe, "Patterns of colonization and historical developments have led to the emergence of two supranational varieties: North American English and British English. The first is the basis of US and Canadian English." The close association of the two varieties is unsurprising, as some of the varieties now spoken in Canada and the US have some common historical roots. In the late 1700s, many British settlers in what is now Canada migrated there from the former British colonies that became the United States after the American Revolution. The grouping of the varieties of English of Canada and the United States into a single larger category is also consistent with the linguistic research. Differences between the Standard English spoken in the United States and in Canada are described as consisting mostly of lexical variability (e.g., the Canadian use of TOQUE for a knitted hat or CHESTERFIELD for an upholstered piece of furniture seating three people), with grammatical variation between the two Standard varieties described as minimal. As Brinton and Arnovick state, "Apart from a few minor differences, little distinguishes CanE from its American counterpart grammatically." McCrum concurs, saying "There is no distinctive Canadian grammar." Given this grammatical similarity, there is no reason to believe that the descriptions of "American" English to which I return in a moment are not equally applicable to Canadian English in general.
Of course, there is also variability within Britain and within England. However, Trudgill notes that the varieties spoken in the UK by those who are outside of rural areas are very similar to the Standard. It has also been noted that there are few cases of syntactic variability. Therefore, it is reasonable to assume that the York data used in this study are likely to be representative of English English more generally. The two varieties examined here (those of Toronto, Canada and York, England) can thus be taken to represent the broader categories of North American and English English, respectively.

In the introduction to a volume examining variation between Standard British and Standard American English, Rohdenburg and Schlüter remark that, although the two varieties "should still be considered one and the same language at many levels of description," British-American contrasts are widely recognized. These contrasts have been identified at every level of the grammar. There are numerous phonetic and phonological differences, as well as differences in intonational patterns. In fact, at the level of phonology, Trudgill argues that the major varieties of English continue to diverge. There are also numerous varying lexical items in categories as diverse as nouns (e.g., the BOOT of a car versus the TRUNK of a car) and conjunctions (e.g., WHILST versus WHILE). Trudgill notes that many lexical differences are disappearing, which he attributes to the globalization of media, though he also states that the role of the media in language change is usually negligible. In addition to categorical lexical differences between British and American varieties (e.g., TRUCK versus LORRY), there are also cases where a single word or expression exists in both varieties but with a difference of meaning. One such example is BRACES, which, in North American English, describes devices for straightening teeth (among other meanings), but in British English can also refer to a means of holding up trousers (called SUSPENDERS in North American varieties).

Conclusion:-
In the last ten years, distributional semantic models have been quite successful at addressing semantic similarity, lexical ambiguity, lexical entailment, verb selectional restrictions and other word-level relations. In this class of models, the meaning of a content word is represented in terms of a distributed vector recording its pattern of co-occurrences (sometimes, in specific syntactic relations) with other content words within a corpus. Different types of semantic tasks and phenomena are then modeled in terms of linear algebra operations on distributional vectors.
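As a sketch of how such co-occurrence vectors are built, the following Python counts, for each word, the words appearing within a fixed window of it in a token stream; the toy corpus and window size are illustrative assumptions:

```python
from collections import Counter, defaultdict

def cooccurrence_vectors(tokens, window=2):
    """Distributional vectors: for each word, count the other words
    appearing within `window` positions of it in the token stream."""
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

# Tiny toy corpus; real distributional models use billions of tokens
# and typically weight the raw counts (e.g. with PMI) afterwards.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vecs = cooccurrence_vectors(corpus)
print(vecs["cat"])  # cat co-occurs with 'the', 'sat', 'on'
```

Linear-algebra operations on these vectors (cosine similarity, addition, component-wise multiplication) are then what implement the word-level tasks listed above.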
A central question about DSMs is whether and how distributional vectors can also be used in the compositional construction of meaning for constituents larger than words, and ultimately for sentences or discourses, the traditional domains of denotation-based formal semantics. Being able to model key aspects of semantic composition represents a crucial condition for DSMs to provide a more general model of meaning. Conversely, distributional representations might help to model those aspects of meaning that notoriously challenge semantic compositionality, such as semantic context-sensitivity, polysemy, predicate coercion, etc.
We also observe that no single model or composition operator performs best for all tasks and datasets. The latent sense mixture model of Dinu and Lapata performs well in recognizing semantic relations in general web text. Because of the difficulty of adapting it to a specialized domain, however, it does less well in biomedical question answering, where the syntax-based model of Thater et al. performs the best. A more thorough investigation of the factors that can predict the performance and/or invariance of a given composition operator is warranted. In the future, we would like to evaluate other models of compositional semantics that have been recently proposed. We would also like to collect more comprehensive test data, to increase the external validity of our evaluations.
Some variation of the so-called distributional hypothesis (i.e., that words with similar distributional properties have similar semantic properties) lies at the heart of a number of computational approaches that share the assumption that it is possible to build semantic space models through the statistical analysis of the contexts in which words occur.
However, the very notion of context on which semantic spaces rely gives rise to various crucial issues, both at the theoretical and at the computational level, which in turn determine a large space of parametric variations. The aim of this workshop is to foster a fully cross-disciplinary debate around the major open questions pertaining to the definition and usage of context in context-based semantic modeling.

Three current challenges to distributional semantics take a central position in our workshop. These are the discovery of verb meaning, the modelling of different aspects of semantics and the combination of different types of data. We want to address each of these, with specific attention to the relationship between distributional models and human semantic cognition.