Quantification has several distinct senses. In mathematics and empirical science, it is the act of counting and measuring that maps human sense observations and experiences into members of some set of numbers. Quantification in this sense is fundamental to the scientific method. In logic, quantification is the binding of a variable ranging over a domain of discourse. The variable thereby becomes bound by an operator called a quantifier. Academic discussion of quantification refers more often to this meaning of the term than to the preceding one. In grammar, a quantifier is a type of determiner, such as "all" or "many", that indicates quantity. Such words have been argued to correspond to logical quantifiers at the semantic level.

Natural language:

See also: Number names

All known human languages make use of quantification (Wiese 2004). For example, in English:

Every glass in my recent order was chipped.
Some of the people standing across the river have white armbands.
Most of the people I talked to didn't have a clue who the candidates were.
A lot of people are smart.

The words "every", "some", "most", and "a lot of" are quantifiers. There is no simple way of reformulating any one of these expressions as a conjunction or disjunction of sentences, each a simple predicate of an individual such as "That wine glass was chipped." These examples also suggest that the construction of quantified expressions in natural language can be syntactically very complicated. Fortunately, for mathematical assertions, the quantification process is syntactically more straightforward.
The study of quantification in natural languages is much more difficult than the corresponding problem for formal languages. This comes in part from the fact that the grammatical structure of natural language sentences may conceal the logical structure. Moreover, mathematical conventions strictly specify the range of validity for formal language quantifiers; for natural language, specifying the range of validity requires dealing with non-trivial semantic problems. For example, the sentence "Someone gets mugged in New York every 10 minutes" does not specify whether it is the same person getting mugged every 10 minutes. Montague grammar gives a novel formal semantics of natural languages. Its proponents argue that it provides a much more natural formal rendering of natural language than the traditional treatments of Frege, Russell and Quine.

Logic:

See also: generalized quantifier and Lindström quantifier

In language and logic, quantification is a construct that specifies the quantity of specimens in the domain of discourse that apply to (or satisfy) an open formula. For example, in arithmetic, it allows the expression of the statement that every natural number has a successor. A language element which generates a quantification is called a quantifier. The resulting expression is a quantified expression; it is said to be quantified over the predicate or function expression whose free variable is bound by the quantifier. Quantification is used in both natural languages and formal languages. Examples of quantifiers in English are "for all", "for some", "many", "few", "a lot", and "no". In formal languages, quantification is a formula constructor that produces new formulas from old ones. The semantics of the language specifies how the constructor is interpreted as an extent of validity. The two fundamental kinds of quantification in predicate logic are universal quantification and existential quantification.
The traditional symbol for the universal quantifier "all" is "∀", an inverted letter "A"; the symbol for the existential quantifier "exists" is "∃", a rotated letter "E". These quantifiers have been generalized beginning with the work of Mostowski and Lindström.

Mathematics:

Consider the following statement: 1·2 = 1 + 1, and 2·2 = 2 + 2, and 3·2 = 3 + 3, ..., and n·2 = n + n, etc. This has the appearance of an infinite conjunction of propositions. From the point of view of formal languages this is immediately a problem, since syntax rules are expected to generate finite objects. The example above is fortunate in that there is a procedure to generate all the conjuncts. However, if an assertion were to be made about every irrational number, there would be no way to enumerate all the conjuncts, since irrationals cannot be enumerated. A succinct formulation which avoids these problems uses universal quantification: "For any natural number n, n·2 = n + n." A similar analysis applies to the disjunction "1 is equal to 5 + 5, or 2 is equal to 5 + 5, or 3 is equal to 5 + 5, ..., or n is equal to 5 + 5, etc.", which can be rephrased using existential quantification: "For some natural number n, n is equal to 5 + 5."

It is possible to devise abstract algebras whose models include formal languages with quantification, but progress has been slow and interest in such algebras has been limited. Three approaches have been devised to date:

Relation algebra, invented by De Morgan and developed by Charles Sanders Peirce, Ernst Schröder, Tarski, and Tarski's students. Relation algebra cannot represent any formula with quantifiers nested more than three deep. Surprisingly, the models of relation algebra include the axiomatic set theory ZFC and Peano arithmetic.
Cylindric algebra, devised by Tarski, Henkin, and others.
The polyadic algebra of Paul Halmos.

Notation:

There are two standard quantifiers, the universal quantifier and the existential quantifier.
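The two quantified statements from the Mathematics example above cannot be checked over all natural numbers, but their restrictions to a finite initial segment can be evaluated directly. A minimal Python sketch (the bound of 100 is an arbitrary illustrative choice):

```python
# Universal quantification restricted to a finite domain:
# "for every natural number n < 100, n*2 == n + n".
domain = range(100)
universal = all(n * 2 == n + n for n in domain)

# Existential quantification over the same domain:
# "for some natural number n < 100, n == 5 + 5".
existential = any(n == 5 + 5 for n in domain)

print(universal, existential)  # True True
```

Python's built-in `all` and `any` short-circuit, mirroring how a universal claim is refuted by one counterexample and an existential claim is verified by one witness.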
The traditional symbol for the universal quantifier is "∀", an inverted letter "A", which stands for "for all" or "all". The corresponding symbol for the existential quantifier is "∃", a rotated letter "E", which stands for "there exists" or "exists".

An example of quantifying an English statement is as follows. Given the statement "All of Peter's friends either like to dance or like to go to the beach", we can identify its key aspects and rewrite it using symbols, including quantifiers. Let x be any one particular friend of Peter, X the set of all Peter's friends, P(x) the predicate "x likes to dance", and Q(x) the predicate "x likes to go to the beach". Then we have

∀x ∈ X, P(x) ∨ Q(x),

which is read "for all x that are members of X, P of x or Q of x."

Other quantified expressions are constructed similarly for a formula P. Variant notations exist; for instance, the quantifier and its variable may be enclosed in parentheses, as in (∃x) P(x), and some versions of the notation explicitly mention the range of quantification, as in ∃x ∈ X, P(x). All such variations apply to the universal quantifier as well. The range of quantification must always be specified; for a given mathematical theory, this can be done in several ways:

Assume a fixed domain of discourse for every quantification, as is done in Zermelo-Fraenkel set theory.
Fix several domains of discourse in advance and require that each variable have a declared domain, which is the type of that variable. This is analogous to the situation in statically typed computer programming languages, where variables have declared types.
Mention explicitly the range of quantification, perhaps using a symbol for the set of all objects in that domain or the type of the objects in that domain.

One can use any variable as a quantified variable in place of any other, under restrictions ensuring that variable capture does not occur. Even if the notation uses typed variables, any variable of that type may be used.
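The statement about Peter's friends can be evaluated over hypothetical finite data; the names and the two predicate sets below are invented purely for illustration:

```python
# Hypothetical data: X is the set of Peter's friends, and the two
# sets below play the roles of the predicates P(x) and Q(x).
friends = {"Ana", "Bo", "Chen"}
likes_to_dance = {"Ana", "Chen"}
likes_the_beach = {"Bo", "Chen"}

def P(x):  # "x likes to dance"
    return x in likes_to_dance

def Q(x):  # "x likes to go to the beach"
    return x in likes_the_beach

# The formula: for all x in X, P(x) or Q(x).
statement = all(P(x) or Q(x) for x in friends)
print(statement)  # True for this sample data
```

On this sample, every friend satisfies at least one of the two predicates, so the universally quantified formula holds.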
Informally, or in natural language, the "∀x" or "∃x" might appear after or in the middle of P(x). Formally, however, the phrase that introduces the dummy variable is placed in front. Mathematical formulas mix symbolic expressions for quantifiers with natural language quantifiers such as:

For any natural number x, ...
There exists an x such that ...
For at least one x, ...

Keywords for uniqueness quantification include:

For exactly one natural number x, ...
There is one and only one x such that ...

Further, x may be replaced by a pronoun. For example:

For any natural number, its product with 2 equals its sum with itself.
Some natural number is prime.

Equivalent expressions:

If X is a domain of x and P(x) is a predicate dependent on x, then the universal proposition is expressed as

∀x ∈ X, P(x),

which is equivalent to "for any x, if x is in X, then P(x) is true." (If x is not in X, the statement imposes no requirement on P(x).) The truth of the expression requires that P(x) hold for every x in X, independent of the choice of any particular x; this is the reason for calling x a "bound variable". The falsity of the expression, that is, the truth of its negation, requires only that some x in X be found for which P(x) evaluates to false. The negation can thus be read as "for some x in X, P(x) is false," or "there exists an x in X such that P(x) is false." So we have the equivalence for the existential proposition:

¬(∀x ∈ X, P(x)) ≡ ∃x ∈ X, ¬P(x).

Thus, together with negation, only one of the universal and existential quantifiers is needed to perform both tasks:

¬(∀x ∈ X, P(x)) ≡ ∃x ∈ X, ¬P(x)
¬(∃x ∈ X, P(x)) ≡ ∀x ∈ X, ¬P(x)

The first equivalence shows that to disprove a "for all x" proposition, one need only find an x for which the predicate is false. Similarly, by the second, to disprove a "there exists an x" proposition, one needs to show that the predicate is false for all x.

Nesting:

Consider the following statement: For any natural number n, there is a natural number s such that s = n². This is clearly true; it just asserts that every natural number has a square.
The meaning of the assertion in which the quantifiers are turned around is different: There is a natural number s such that for any natural number n, s = n². This is clearly false; it asserts that there is a single natural number s that is at once the square of every natural number. This is because the syntax directs that any newly introduced variable cannot be a function of subsequently introduced variables. The example illustrates that the order of quantifiers is critical to meaning.

A less trivial example is the important concept of uniform continuity from analysis, which differs from the more familiar concept of pointwise continuity only by an exchange in the positions of two quantifiers. To illustrate this, let f be a real-valued function on R.

A: Pointwise continuity of f on R:
∀x ∈ R, ∀ε > 0, ∃δ > 0, ∀h ∈ R, |h| < δ → |f(x + h) - f(x)| < ε.

Interchanging the two universal quantifiers, this is the same as

A': Pointwise continuity of f on R:
∀ε > 0, ∀x ∈ R, ∃δ > 0, ∀h ∈ R, |h| < δ → |f(x + h) - f(x)| < ε.

Here the particular value chosen for δ may be a function of both ε and x, the variables that precede it; whereas in

B: Uniform continuity of f on R:
∀ε > 0, ∃δ > 0, ∀x ∈ R, ∀h ∈ R, |h| < δ → |f(x + h) - f(x)| < ε,

obtained by interchanging the existential and universal quantifiers of A', δ is asserted to be independent of x.

Ambiguity is avoided by placing the quantifiers in front:

"there is an A such that B, C" is unambiguous;
"there is an A such that for all B, C" is unambiguous, provided that the separation between B and C is clear;
"there is an A such that C for all B" often means "there is an A such that (C for all B)", but it could be interpreted as "(there is an A such that C) for all B";
"there is an A such that C, for all B" suggests more strongly that the first reading is meant; this may be reinforced by the layout, for example by putting "for all B" on a new line.

The maximum depth of nesting of quantifiers inside a formula is called its quantifier rank.

Range of quantification:

Every quantification involves one specific variable and a domain of discourse or range of quantification of that variable.
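Two of the points above can be checked mechanically on finite domains: the negation duality from the Equivalent expressions discussion, and the effect of quantifier order from the Nesting discussion. A minimal Python sketch (the domains and the predicate are arbitrary illustrative choices):

```python
domain = range(10)
P = lambda x: x % 2 == 0  # arbitrary predicate: "x is even"

# Duality: not-forall is exists-not, and not-exists is forall-not.
assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)
assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

# Order of quantifiers: "for every n there is an s with s = n*n" holds,
# but "there is a single s equal to n*n for every n" fails.
candidates = range(100)  # finite stand-in for the natural numbers
forall_exists = all(any(s == n * n for s in candidates) for n in domain)
exists_forall = any(all(s == n * n for n in domain) for s in candidates)
print(forall_exists, exists_forall)  # True False
```

Swapping the nesting order of `all` and `any` changes the meaning exactly as swapping ∀ and ∃ does in the formulas above.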
The range of quantification specifies the set of values that the variable takes. In the examples above, the range of quantification is the set of natural numbers. Specification of the range of quantification allows us to express the difference between asserting that a predicate holds for some natural number and asserting that it holds for some real number. Expository conventions often reserve some variable names such as "n" for natural numbers and "x" for real numbers, although relying exclusively on naming conventions cannot work in general, since ranges of variables can change in the course of a mathematical argument.

A more natural way to restrict the domain of discourse uses guarded quantification. For example, the guarded quantification "For some natural number n, n is even and n is prime" means "For some even number n, n is prime."

In some mathematical theories a single domain of discourse fixed in advance is assumed. For example, in Zermelo-Fraenkel set theory, variables range over all sets. In this case, guarded quantifiers can be used to mimic a smaller range of quantification. Thus, in the example above, to express "For any natural number n, n·2 = n + n" in Zermelo-Fraenkel set theory, one can say "For any n, if n belongs to N, then n·2 = n + n", where N is the set of all natural numbers.

Formal semantics:

Mathematical semantics is the application of mathematics to study the meaning of expressions in a formal language. It has three elements: a mathematical specification of a class of objects via syntax, a mathematical specification of various semantic domains, and the relation between the two, which is usually expressed as a function from syntactic objects to semantic ones. This article only addresses the issue of how quantifier elements are interpreted. Given a model-theoretical logical framework, the syntax of a formula can be given by a syntax tree. Quantifiers have scope, and a variable x is free if it is not within the scope of a quantification for that variable.
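Guarded quantification has a direct analogue in Python, where the guard can be folded into the generator's `if` clause. A sketch over a finite domain (the domain bound and the primality test are illustrative assumptions):

```python
def is_prime(n):
    # Naive trial division, adequate for this small illustration.
    return n > 1 and all(n % d != 0 for d in range(2, n))

domain = range(50)  # finite stand-in for the natural numbers

# "For some natural number n, n is even and n is prime."
unguarded = any(n % 2 == 0 and is_prime(n) for n in domain)

# "For some even number n, n is prime": the guard restricts the range.
guarded = any(is_prime(n) for n in domain if n % 2 == 0)

print(unguarded, guarded)  # True True (n = 2 witnesses both)
```

The two formulations are logically equivalent: for an existential quantifier, guarding the range is the same as conjoining the guard with the predicate.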
Thus in a formula such as

(∀x B(x)) ∨ C(y, x),

the occurrence of both x and y in C(y, x) is free, while the occurrence of x in B(x) is bound. An interpretation for first-order predicate calculus assumes as given a domain of individuals X. A formula A whose free variables are x1, ..., xn is interpreted as a Boolean-valued function F(v1, ..., vn) of n arguments, where each argument ranges over the domain X. Boolean-valued means that the function assumes one of the values T (interpreted as truth) or F (interpreted as falsehood). The interpretation of the formula

∀xn A(x1, ..., xn)

is the function G of n-1 arguments such that G(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for every w in X. If F(v1, ..., vn-1, w) = F for at least one value of w, then G(v1, ..., vn-1) = F. Similarly, the interpretation of the formula

∃xn A(x1, ..., xn)

is the function H of n-1 arguments such that H(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for at least one w, and H(v1, ..., vn-1) = F otherwise.

The semantics for uniqueness quantification requires first-order predicate calculus with equality. This means there is given a distinguished two-place predicate "="; the semantics is modified accordingly so that "=" is always interpreted as the two-place equality relation on X. The interpretation of ∃!xn A then is the function of n-1 arguments which is the logical and of the interpretation of ∃xn A and of the interpretation of the formula asserting that any two values of xn satisfying A are equal.

Paucal, multal and other degree quantifiers:

See also: Fubini's theorem and measurable

None of the quantifiers previously discussed apply to a quantification such as "There are many integers n < 100 such that n is divisible by 2 or 3 or 5." One possible interpretation mechanism can be obtained as follows: suppose that, in addition to a semantic domain X, we are given a probability measure P defined on X and cutoff numbers 0 < a ≤ b ≤ 1. If A is a formula with free variables x1, ..., xn whose interpretation is the function F of variables v1, ..., vn, then the interpretation of the multal quantification of A over xn is the function of v1, ..., vn-1 which is T if and only if

P{w : F(v1, ..., vn-1, w) = T} ≥ b

and F otherwise.
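The interpretation scheme above, in which quantifying the last argument of an n-ary Boolean-valued function F yields an (n-1)-ary function, can be sketched over a finite domain. The counting version of the "many" quantifier below uses the uniform measure on the finite domain and an illustrative cutoff; these are assumptions of the sketch, not part of the general definition:

```python
def interpret_forall(F, X):
    """G(v1,...,vn-1) = T iff F(v1,...,vn-1, w) = T for every w in X."""
    return lambda *vs: all(F(*vs, w) for w in X)

def interpret_exists(F, X):
    """H(v1,...,vn-1) = T iff F(v1,...,vn-1, w) = T for at least one w in X."""
    return lambda *vs: any(F(*vs, w) for w in X)

def interpret_many(F, X, cutoff):
    """T iff the fraction of w in X with F(...,w) = T is at least `cutoff`
    (uniform probability measure on the finite domain X)."""
    return lambda *vs: sum(F(*vs, w) for w in X) / len(X) >= cutoff

X = range(1, 100)
divisible = lambda n, d: n % d == 0  # F(n, d): "d divides n"

# Quantifying the second argument of `divisible` leaves a unary function.
G = interpret_forall(divisible, [1])        # "n is divisible by every d in {1}"
H = interpret_exists(divisible, [2, 3, 5])  # "n has a factor in {2, 3, 5}"
print(G(12), H(7))  # True False

# "There are many n < 100 divisible by 2, 3 or 5", with cutoff b = 0.5.
many = interpret_many(lambda n: any(n % d == 0 for d in (2, 3, 5)), X, 0.5)
print(many())  # True: 73 of the 99 numbers qualify
```

Each `interpret_*` function consumes one free variable, just as each quantifier in the formal semantics reduces an n-argument interpretation to an (n-1)-argument one.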
Similarly, the interpretation of the paucal quantification of A over xn is the function of v1, ..., vn-1 which is F if and only if

P{w : F(v1, ..., vn-1, w) = T} ≥ a

and T otherwise.

Other quantifiers:

A few other quantifiers have been proposed over time. In particular, the solution quantifier, noted § (section sign) and read "those". For example,

(§n ∈ N) [n² ≤ 4] ∈ {0, 1, 2}

is read "those n in N such that n² ≤ 4 are in {0, 1, 2}." The same construct is expressible in set-builder notation:

{n ∈ N : n² ≤ 4} = {0, 1, 2}.

History:

Term logic treats quantification in a manner that is closer to natural language, and also less suited to formal analysis. Aristotelian logic treated "all", "some" and "no" in the 4th century BC, in an account also touching on the alethic modalities. Gottlob Frege, in his 1879 Begriffsschrift, was the first to employ a quantifier to bind a variable ranging over a domain of discourse and appearing in predicates. He would universally quantify a variable (or relation) by writing the variable over a dimple in an otherwise straight line appearing in his diagrammatic formulas. Frege did not devise an explicit notation for existential quantification, instead employing his equivalent of ~∀x~. Frege's treatment of quantification went largely unremarked until Bertrand Russell's 1903 Principles of Mathematics.

In work that culminated in Peirce (1885), Charles Sanders Peirce and his student Oscar Howard Mitchell independently invented universal and existential quantifiers, and bound variables. Peirce and Mitchell wrote Πx and Σx where we now write ∀x and ∃x. Peirce's notation can be found in the writings of Ernst Schröder, Leopold Löwenheim, Thoralf Skolem, and Polish logicians into the 1950s. Most notably, it is the notation of Kurt Gödel's landmark 1930 paper on the completeness of first-order logic and 1931 paper on the incompleteness of Peano arithmetic.
Peirce's approach to quantification also influenced William Ernest Johnson and Giuseppe Peano, who invented yet another notation, namely (x) for the universal quantification of x and (in 1897) ∃x for the existential quantification of x. Hence for decades the canonical notation in philosophy and mathematical logic was "(x)P" to express "all individuals in the domain of discourse have the property P" and "(∃x)P" for "there exists at least one individual in the domain of discourse having the property P." Peano, who was much better known than Peirce, in effect diffused the latter's thinking throughout Europe. Peano's notation was adopted by the Principia Mathematica of Whitehead and Russell, and by Quine and Alonzo Church. In 1935, Gentzen introduced the ∀ symbol, by analogy with Peano's ∃ symbol. ∀ did not become canonical until the 1960s.

Around 1895, Peirce began developing his existential graphs, whose variables can be seen as tacitly quantified. Whether the shallowest instance of a variable is even or odd determines whether that variable's quantification is universal or existential. (Shallowness is the contrary of depth, which is determined by the nesting of negations.) Peirce's graphical logic has attracted some attention in recent years from those researching heterogeneous reasoning and diagrammatic inference.

Natural science:

Some measure of the undisputed general importance of quantification in the natural sciences can be gleaned from the following comments: "these are mere facts, but they are quantitative facts and the basis of science." It seems to be held as universally true that "the foundation of quantification is measurement." There is little doubt that "quantification provided a basis for the objectivity of science." Note that in these quotes "quantification" refers to counting and measuring quantities, a sense categorically different from that of the logical quantifiers discussed earlier in this article.
In ancient times, "musicians and artists ... rejected quantification, but merchants, by definition, quantified their affairs, in order to survive, made them visible on parchment and paper." Any reasonable "comparison between Aristotle and Galileo shows clearly that there can be no unique lawfulness discovered without detailed quantification." Even today, "universities use imperfect instruments called 'exams' to indirectly quantify something they call knowledge." This meaning of quantification comes under the heading of pragmatics.

In some instances in the natural sciences a seemingly intangible concept may be quantified by creating a scale: for example, a pain scale in medical research, or a discomfort scale at the intersection of meteorology and human physiology, such as the heat index measuring the combined perceived effect of heat and humidity, or the wind chill factor measuring the combined perceived effects of cold and wind.

Social sciences:

See also: Society for Quantitative Analysis of Behavior and Quantitative psychological research

In the social sciences, quantification is an integral part of economics and psychology. Both disciplines gather data, economics by empirical observation and psychology by experimentation, and both use statistical techniques such as regression analysis to draw conclusions from the data. In some instances a seemingly intangible property may be quantified by asking subjects to rate something on a scale, for example a happiness scale or a quality-of-life scale, or by the construction of a scale by the researcher, as with the index of economic freedom. In other cases, an unobservable variable may be quantified by replacing it with a proxy variable with which it is highly correlated; for example, per capita gross domestic product is often used as a proxy for standard of living or quality of life.
Frequently in the use of regression, the presence or absence of a trait is quantified by employing a dummy variable, which takes on the value 1 in the presence of the trait and the value 0 in its absence. Quantitative linguistics is an area of linguistics that relies on quantification. For example, indices of grammaticalization of morphemes, such as phonological shortness, dependence on surroundings, and fusion with the verb, have been developed and found to be significantly correlated across languages with the stage of evolution of the function of the morpheme.

Hard versus soft science:

The ease of quantification is one of the features used to distinguish hard sciences from soft sciences. Hard sciences are often considered to be more scientific, rigorous, or accurate. In some social sciences, such as sociology, accurate data are difficult to obtain, either because laboratory conditions are not present or because the issues involved are conceptual but not directly quantifiable.
