For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as 'time is unreal', analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitutes what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements 'John is good' and 'John is tall' have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property 'goodness' as if it were a characteristic of John in the same way that the property 'tallness' is a characteristic of John. Such failure results in philosophical confusion.
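A standard illustration of this divergence between grammatical and logical form (added here as a sketch; the example is Russell's theory of definite descriptions rather than the good/tall pair above) is the sentence 'The king of France is bald'. Grammatically it is subject-predicate, but Russell analysed its logical form as an existential claim with no corresponding subject term:

\[
\exists x \,\bigl(K(x) \wedge \forall y\,(K(y) \rightarrow y = x) \wedge B(x)\bigr)
\]

where K(x) is read 'x is a king of France' and B(x) is read 'x is bald'. On this analysis the sentence is simply false if nothing satisfies K, rather than being a statement about a nonexistent king.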
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell's work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; translated 1922), in which he first presented his theory of language, Wittgenstein argued that 'all philosophy is a "critique of language"' and that 'philosophy aims at the logical clarification of thoughts'. The results of Wittgenstein's analysis resembled Russell's logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle opened one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition 'two plus two equals four'. The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually meaningless. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer's Language, Truth and Logic in 1936.
The positivists' verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein's influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate 'systematically misleading expressions' in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, are needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyse ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in clarifying philosophical problems.
Existentialism is a loose title for various philosophies that emphasize certain common themes: the individual, the experience of choice, and the absence of rational understanding of the universe, with a consequent sense of dread or, at the other extreme, a sense of the absurdity of human life. More broadly, existentialism is a philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries.
Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.
Most philosophers since Plato have held that the highest ethical good is the same for everyone: insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, 'I must find a truth that is true for me . . . the idea for which I can live or die'. Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.
One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated the German philosopher Friedrich Nietzsche's theory of the Übermensch, a term translated as "Superman" or "Overman." The Superman was an individual who overcame what Nietzsche termed the 'slave morality' of traditional values and lived according to his own morality. Nietzsche also advanced his idea that 'God is dead', that is, that traditional morality was no longer relevant in people's lives. In the work, the sage Zarathustra comes down from the mountain where he has spent the last ten years alone to preach to the people.
Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the "death of God" and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of being (Heidegger's term for that which underlies all existence).
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis - in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology as well as on language.
Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre's work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that 'man is condemned to be free', Sartre reminds us of the responsibility that accompanies human decisions.
Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one and thus human life is a 'futile passion'. Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially the conviction that a personal sense of authenticity and commitment is essential to religious faith.
Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels which probed the motivations and moral justifications for his characters' actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky's best work, interlaces religious exploration with the story of a family's violent quarrels over a woman and a disputed inheritance.
A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), "We must love life more than the meaning of it."
The opening lines of the Russian novelist Fyodor Dostoyevsky's Notes from Underground (1864) - 'I am a sick man . . . I am a spiteful man' - are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground marked Dostoyevsky's rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader's sense of morality as well as the foundations of rational thinking. In the opening of the novel, the narrator describes himself, derisively referring to himself as an 'overly conscious' intellectual.
In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; translated 1937) and The Castle (1926; translated 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer and John Barth.
The problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato's view in the Theaetetus that knowledge is true belief plus some logos. That problem is the starting point of epistemology, the branch of philosophy that addresses the philosophical problems surrounding the theory of knowledge. Epistemology is concerned with the definition of knowledge and related concepts, the sources and criteria of knowledge, the kinds of knowledge possible and the degree to which each is certain, and the exact relation between the one who knows and the object known.
Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as the Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. The philosopher Anthony Kenny has examined the complexities of Aquinas's concepts of substance and accident.
In the 5th century BC, the Greek Sophists questioned the possibility of reliable and objective knowledge. Thus, a leading Sophist, Gorgias, argued that nothing really exists, that if anything did exist it could not be known, and that if knowledge were possible, it could not be communicated. Another prominent Sophist, Protagoras, maintained that no person's opinions can be said to be more correct than another's, because each is the sole judge of his or her own experience. Plato, following his illustrious teacher Socrates, tried to answer the Sophists by postulating the existence of a world of unchanging and invisible forms, or ideas, about which it is possible to have exact and certain knowledge. The things one sees and touches, he maintained, are imperfect copies of the pure forms studied in mathematics and philosophy. Accordingly, only the abstract reasoning of these disciplines yields genuine knowledge, whereas reliance on sense perception produces vague and inconsistent opinions. He concluded that philosophical contemplation of the unseen world of forms is the highest goal of human life.
Aristotle followed Plato in regarding abstract knowledge as superior to any other, but disagreed with him as to the proper method of achieving it. Aristotle maintained that almost all knowledge is derived from experience. Knowledge is gained either directly, by abstracting the defining traits of a species, or indirectly, by deducing new facts from those already known, in accordance with the rules of logic. Careful observation and strict adherence to the rules of logic, which were first set down in systematic form by Aristotle, would help guard against the pitfalls the Sophists had exposed. The Stoic and Epicurean schools agreed with Aristotle that knowledge originates in sense perception, but against both Aristotle and Plato they maintained that philosophy is to be valued as a practical guide to life, rather than as an end in itself.
After many centuries of declining interest in rational and scientific knowledge, the Scholastic philosopher Saint Thomas Aquinas and other philosophers of the Middle Ages helped to restore confidence in reason and experience, blending rational methods with faith into a unified system of beliefs. Aquinas followed Aristotle in regarding perception as the starting point and logic as the intellectual procedure for arriving at reliable knowledge of nature, but he considered faith in scriptural authority as the main source of religious belief.
From the 17th to the late 19th century, the main issue in epistemology was reasoning versus sense perception in acquiring knowledge. For the rationalists, of whom the French philosopher René Descartes, the Dutch philosopher Baruch Spinoza, and the German philosopher Gottfried Wilhelm Leibniz were the leaders, the main source and final test of knowledge was deductive reasoning based on self-evident principles, or axioms. For the empiricists, beginning with the English philosophers Francis Bacon and John Locke, the main source and final test of knowledge was sense perception.
Bacon inaugurated the new era of modern science by criticizing the medieval reliance on tradition and authority and also by setting down new rules of scientific method, including the first set of rules of inductive logic ever formulated. Locke attacked the rationalist belief that the principles of knowledge are intuitively self-evident, arguing that all knowledge is derived from experience, either from experience of the external world, which stamps sensations on the mind, or from internal experience, in which the mind reflects on its own activities. Human knowledge of external physical objects, he claimed, is always subject to the errors of the senses, and he concluded that one cannot have absolutely certain knowledge of the physical world.
Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything a human being conceives of exists as an idea in a mind, a philosophical position known as idealism. Berkeley reasoned that because one cannot control one's thoughts, they must come directly from a larger mind: that of God. In his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is 'impossible . . . that there should be any such thing as an outward object'.
The Irish philosopher George Berkeley agreed with Locke that knowledge comes through ideas, but he denied Locke's belief that a distinction can be made between ideas and objects. The British philosopher David Hume continued the empiricist tradition, but he did not accept Berkeley's conclusion that knowledge was of ideas only. He divided all knowledge into two kinds: knowledge of relations of ideas - that is, the knowledge found in mathematics and logic, which is exact and certain but provides no information about the world - and knowledge of matters of fact - that is, the knowledge derived from sense perception. Hume argued that most knowledge of matters of fact depends upon cause and effect, and since no logical connection exists between any given cause and its effect, one cannot hope to know any future matter of fact with certainty. Thus, the most reliable laws of science might not remain true - a conclusion that had a revolutionary impact on philosophy.
The German philosopher Immanuel Kant tried to solve the crisis precipitated by Locke and brought to a climax by Hume; his proposed solution combined elements of rationalism with elements of empiricism. He agreed with the rationalists that one can have exact and certain knowledge, but he followed the empiricists in holding that such knowledge is more informative about the structure of thought than about the world outside of thought. He distinguished three kinds of knowledge: analytic a priori, which is exact and certain but uninformative, because it makes clear only what is contained in definitions; synthetic a posteriori, which conveys information about the world learned from experience, but is subject to the errors of the senses; and synthetic a priori, which is discovered by pure intuition and is both exact and certain, for it expresses the necessary conditions that the mind imposes on all objects of experience. Mathematics and philosophy, according to Kant, provide this last. Since the time of Kant, one of the most frequently argued questions in philosophy has been whether or not such a thing as synthetic a priori knowledge really exists.
During the 19th century, the German philosopher Georg Wilhelm Friedrich Hegel revived the rationalist claim that absolutely certain knowledge of reality can be obtained by equating the processes of thought, of nature, and of history. Hegel inspired an interest in history and a historical approach to knowledge that was further emphasized by Herbert Spencer in Britain and by the German school of historicism. Spencer and the French philosopher Auguste Comte brought attention to the importance of sociology as a branch of knowledge, and both extended the principles of empiricism to the study of society.
The American school of pragmatism, founded by the philosophers Charles Sanders Peirce, William James, and John Dewey at the turn of the 20th century, carried empiricism further by maintaining that knowledge is an instrument of action and that all beliefs should be judged by their usefulness as rules for predicting experiences.
In the early 20th century, epistemological problems were discussed thoroughly, and subtle shades of difference grew into rival schools of thought. Special attention was given to the relation between the act of perceiving something, the object directly perceived, and the thing that can be said to be known as a result of the perception. The phenomenalists contended that the objects of knowledge are the same as the objects perceived. The neorealists argued that one has direct perceptions of physical objects or parts of physical objects, rather than of one's own mental states. The critical realists took a middle position, holding that although one perceives only sensory data such as colours and sounds, these stand for physical objects and provide knowledge thereof.
A method for dealing with the problem of clarifying the relation between the act of knowing and the object known was developed by the German philosopher Edmund Husserl. He outlined an elaborate procedure that he called phenomenology, by which one is said to be able to distinguish the way things appear to be from the way one thinks they really are, thus gaining a more precise understanding of the conceptual foundations of knowledge.
During the second quarter of the 20th century, two schools of thought emerged, each indebted to the Austrian philosopher Ludwig Wittgenstein. The first of these schools, logical empiricism, or logical positivism, had its origins in Vienna, Austria, but it soon spread to England and the United States. The logical empiricists insisted that there is only one kind of knowledge: scientific knowledge; that any valid knowledge claim must be verifiable in experience; and hence that much that had passed for philosophy was neither true nor false but literally meaningless. Finally, following Hume and Kant, a clear distinction must be maintained between analytic and synthetic statements. The so-called verifiability criterion of meaning has undergone changes as a result of discussions among the logical empiricists themselves, as well as their critics, but has not been discarded. More recently, the sharp distinction between the analytic and the synthetic has been attacked by a number of philosophers, chiefly by American philosopher W.V.O. Quine, whose overall approach is in the pragmatic tradition.
The second of these schools of thought, generally referred to as linguistic analysis, or ordinary-language philosophy, seemed to break with traditional epistemology. The linguistic analysts undertake to examine the actual way key epistemological terms are used - terms such as knowledge, perception, and probability - and to formulate definite rules for their use in order to avoid verbal confusion. The British philosopher John Langshaw Austin argued, for example, that to say a statement is true adds nothing to the statement except a promise by the speaker or writer; Austin does not consider truth a quality or property attaching to statements or utterances. The ruling thought, however, is that it is only through a correct appreciation of the role and point of this language that we can come to a better understanding of what the language is about, and avoid the oversimplifications and distortions we are apt to bring to its subject matter.
Linguistics is the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner's first language and about the language being acquired.
Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.
Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyse Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyses it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
The linguist's next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence "She pushed the bush," the morpheme she, a pronoun, is the subject; pushed, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
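As a toy illustration of the first of these procedures, phonemics, the sketch below (in Python, with invented example words, and working on spellings rather than proper phonetic transcriptions, so it is only an approximation) tests whether two words form a minimal pair - forms differing in exactly one segment - which is the usual ground for treating the differing sounds as distinct phonemes.

```python
def minimal_pair(word_a, word_b):
    """Return the single differing segment pair if the two forms
    constitute a minimal pair, otherwise None."""
    if len(word_a) != len(word_b):
        return None
    differences = [(a, b) for a, b in zip(word_a, word_b) if a != b]
    return differences[0] if len(differences) == 1 else None

# 'push' vs 'bush' differ only in the initial segment, which is the
# kind of contrast that establishes /p/ and /b/ as distinct phonemes.
print(minimal_pair("push", "bush"))    # ('p', 'b')
print(minimal_pair("push", "posh"))    # ('u', 'o')
print(minimal_pair("push", "pushed"))  # None (different lengths)
```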
Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to each other and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for "brother" resembles the Latin word frater, the Greek word phrater (and the English word brother).
Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in "go store tomorrow"). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.
Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of people.
Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
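The passage does not state the formula; the one standardly used in glottochronology (due to Morris Swadesh) estimates the separation time t, in millennia, from the proportion c of cognates shared on the culture-free list and an assumed retention rate r per millennium (about 0.86 for the 100-word list):

\[
t = \frac{\ln c}{2\,\ln r}
\]

For example, if two related languages share 70 per cent of the list (c = 0.7), then t = ln 0.7 / (2 ln 0.86), roughly 1.2 millennia. The method, and in particular the assumption of a constant retention rate, remains controversial.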
By the 1960s comparativists were no longer satisfied with focussing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.
The field of linguistics both borrows from and lends its own theories and methods to other disciplines, and the many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.
Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as "fourth floor" can indicate the person's social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing /r/. Sometimes they even overcorrect their speech, pronouncing /r/ where those whom they wish to copy may not.
Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or another quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. Sociolinguists also seek an understanding of communicative competence - what people need to know to use the appropriate language for a given social setting.
Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children's language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to read written language).
Computational linguistics involves the use of computers to compile linguistic data, analyse languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyse the relatedness and the structure of languages and to look for patterns and similarities. Computers also assist in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and in machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
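As a small illustration of the kind of textual analysis and concordance-building mentioned here, the following sketch (a toy example with an invented sample sentence, not a description of any particular system) produces keyword-in-context lines, one of the simplest tools of corpus work.

```python
import re

def kwic(text, keyword, width=3):
    """Return keyword-in-context lines: each occurrence of the keyword
    with up to `width` words of context on either side."""
    words = re.findall(r"[a-z']+", text.lower())
    lines = []
    for i, word in enumerate(words):
        if word == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left:>25} | {word} | {right}")
    return lines

sample = "She pushed the bush. The bush by the wall was pushed again."
for line in kwic(sample, "bush"):
    print(line)
```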
Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.
Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyse culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to one another. For example, the present English usage of family and given names arose in the late 13th and early 14th centuries when the laws concerning registration, tenure, and inheritance of property were changed.
Once linguists began, following the Swiss linguist Ferdinand de Saussure, to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.
Saussure's ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes between the similar sounds of /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompeii and Bombay the same way.
As linguistics developed in the 20th century, the notion became prevalent that language is more than speech - specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.
The 1957 publication of Syntactic Structures by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language - the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that create (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky's theories.
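A toy illustration of the generative idea (a sketch only: it uses simple phrase-structure rules and an invented vocabulary, and omits the transformations that gave Chomsky's grammar its name) is a small grammar from which novel sentences can be derived.

```python
import random

# A tiny phrase-structure grammar: each symbol expands to one of the
# listed sequences; strings not in the table are terminal words.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N": [["linguist"], ["sentence"], ["child"]],
    "V": [["describes"], ["generates"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the child generates a sentence"
```

Even this tiny rule set generates sentences never explicitly listed, which is the point of treating grammar as a finite device with unbounded output.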
At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.
The scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance - the way people use language - to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?
From these initial concerns came some of the great themes of twentieth-century philosophy. How exactly does language relate to thought? Are there irredeemable problems about putative private thought? These issues are captured under the general label of the 'linguistic turn'. The subsequent development of those early twentieth-century positions has led to a bewildering heterogeneity in philosophy in the early twenty-first century. The very nature of philosophy is itself radically disputed: analytic, continental, postmodern, critical theory, feminist, and non-Western are all prefixes that give a different meaning when joined to 'philosophy'. The variety of thriving schools, the number of professional philosophers, the proliferation of publications, and the development of technology to aid research all manifest a radically different situation from that of one hundred years ago.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc., that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can properly be attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - and not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the content of our beliefs or thoughts 'from the inside', simply by reflection. If content depends on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist must insist that there are no justification relations of these sorts - that only internally accessible content can be justified or justify anything else - but such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Except for alleged cases of self-evident truths - things that are evident for one just by being true - it has often been thought that anything that is known must satisfy certain criteria or standards as well as being true. These criteria are general principles that will make a proposition evident or just make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have without criteria to other propositions like 'p'; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not be already evident or warranted, originally 'create' p's epistemic status. If that status in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.
Traditional suggestions for such criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, more simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident they already have for one without criteria to other propositions like 'p'. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.
The difficulty, however, is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of data, never make things evident or warrant their acceptance enough to count as knowledge.
Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.
An argument is a group of statements, some of which purportedly provide support for another. The statements which purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support for theirs. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
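For instance (a worked example added here, not drawn from the text), the deductive form modus ponens is valid: whenever both premisses are true, the conclusion cannot be false.

\[
\frac{p \rightarrow q \qquad p}{\therefore\; q}
\]

By contrast, 'every raven observed so far has been black; therefore the next raven will be black' is at best a strong inductive argument: its premiss makes the conclusion probable but does not guarantee it.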
Finally, a proof is a collection of considerations and reasons that instill and sustain conviction that some proposed theorem - the theorem proved - is not only true, but could not possibly be false. A perceptual observation may instill the conviction that the water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything but 5.
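A minimal sketch of how such a proof might run, assuming only that the numerals are defined by repeated addition of 1 (so that 3 = 2 + 1, 4 = 3 + 1, 5 = 4 + 1) and that addition is associative:

    \[
    2 + 3 \;=\; 2 + (2 + 1) \;=\; (2 + 2) + 1 \;=\; 4 + 1 \;=\; 5.
    \]

Because every step appeals only to definitions and a general law, no observation could count against the conclusion; this is the sense in which the result could not be anything but 5.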
Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized, i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as 'folk psychology') are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do. We have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)
Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the 'intentional stance' toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational, i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a 'moderate' realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic 'structural' or 'syntactic' properties. The semantic properties of a mental state, however, are determined by its extrinsic properties, e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.
It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ('what-it's-like') features ('Qualia'), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a Non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)
Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that Non-conceptual representations - percepts ('impressions'), images ('ideas') and the like - are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frége 1918/1997, Geach 1957) or mathematical (Frége 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only Non-conceptual representations construed in this way.
Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.
The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalism' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)
Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal - though not in the same way.)
The main argument for representationalism appeals to the transparency of experience. The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to 'see through it' to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.
In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of 'symbol-filled arrays.' (The account of mental images is given in Tye 1991.)
Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - Qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual 'scenario' (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is 'correct' (a semantic property) if in the corresponding 'scene' (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
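A loose sketch of Peacocke's correctness condition, on my own simplifying assumptions (finite, discrete coordinates, and strings standing in for phenomenal properties and their objective analogues); this is only an illustration of the structure of the proposal, not Peacocke's formalism:

    # Hypothetical coordinates in an egocentric frame, paired with properties.
    scenario = {(0, 0, 1): "red", (0, 1, 2): "smooth"}   # how things are presented
    scene    = {(0, 0, 1): "red", (0, 1, 2): "rough"}    # how things actually are

    def correct(scenario, scene):
        """The scenario's content is correct iff the scene distributes the
        objective analogues of its properties at the same coordinates."""
        return all(scene.get(coord) == prop for coord, prop in scenario.items())

    print(correct(scenario, scene))  # False: the experience misrepresents at (0, 1, 2)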
Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the 'phenomenal concept' -, a conceptual/phenomenal hybrid consisting of a phenomenological 'sample' (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, 'you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.' One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)
Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties - i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained exclusively in terms of discursive, or propositional, representations and the cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial' - though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not), would be digital.
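A minimal sketch of the contrast as just drawn, using hypothetical data types of my own (not anything proposed in the literature): the imagistic representation carries information in continuously variable properties, the conceptual one in a property it simply has or lacks.

    from dataclasses import dataclass

    @dataclass
    class ImagisticRepresentation:
        brightness: float   # may take any value between 0.0 and 1.0 (analog)
        vividness: float    # likewise continuously variable (analog)

    @dataclass
    class ConceptualRepresentation:
        about_elvis: bool   # either it is about Elvis or it is not (digital)

    image = ImagisticRepresentation(brightness=0.73, vividness=0.4)
    thought = ConceptualRepresentation(about_elvis=True)

    # brightness can take intermediate values; about_elvis cannot.
    print(image.brightness, thought.about_elvis)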
It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is 'quasi-pictorial' when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially - for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are '(labelled) interpreted symbol-filled arrays.' The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each 'cell' in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
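The following is a rough, assumption-laden sketch of what such a symbol-filled array might look like in code; the labels and grid size are hypothetical, and the point is only to show how cell position can carry representational significance while the symbols stored in the cells represent discursively.

    # A 3x3 array for an imagined surface; '.' marks empty locations.
    # Labels such as 'edge' and 'shadow' are hypothetical placeholders.
    image_array = [
        ["edge",    "edge",    "edge"],
        ["surface", "surface", "surface"],
        [".",       "shadow",  "."],
    ]

    # The row/column indices themselves represent viewer-centred 2-D locations,
    # so relative distances between represented parts are preserved in the array.
    def cell(row, col):
        return image_array[row][col]

    print(cell(0, 1))  # what is imagined at viewer-centred location (0, 1)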
The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.
Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.
The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories and Teleological Theories. The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.
According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
Functional theories hold that the content of a mental representation is grounded in its causal, computational, or inferential relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (Non-conceptual) content of experiential states. They thus tend to be Externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to Internalist individuation of the content (if not the reference) of such states.
Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists).
This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both 'narrow' content (determined by intrinsic factors) and 'wide' or 'broad' content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Frégean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. Both construals, however, characterize narrow content as a function from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role or its phenomenology.
Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that narrow content may be unnecessary in naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frége cases, are nomologically either impossible or dismissible as exceptions to non-strict psychological laws.
The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind, claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind thus shares the representational theory of mind's ambition of explaining all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information-processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some - so-called 'subpersonal' or 'sub-doxastic' representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of the prescientific representational theory of mind.
According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental. That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.
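By way of a hedged illustration of 'rule-governed sequences of semantically evaluable objects' - a toy sketch of my own, not a model any cognitive scientist has proposed - consider a single inference rule operating over sentence-like symbol structures:

    # Representations in a 'belief box', written as simple tuples.
    beliefs = [
        ("if", "it-is-raining", "the-streets-are-wet"),  # a conditional
        ("atom", "it-is-raining"),                        # its antecedent
    ]

    def modus_ponens(belief_box):
        """Add the consequent of any conditional whose antecedent is believed."""
        atoms = {b[1] for b in belief_box if b[0] == "atom"}
        derived = [("atom", b[2]) for b in belief_box if b[0] == "if" and b[1] in atoms]
        return belief_box + derived

    print(modus_ponens(beliefs))
    # The rule operates on the symbol structures, and the transition is
    # truth-preserving under the intended interpretation, so the symbols
    # remain semantically evaluable throughout the process.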
Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the 'mental models' of Johnson-Laird 1983, the 'retinal arrays,' 'primal sketches' and '2½-D sketches' of Marr 1982, the 'frames' of Minsky 1974, the 'sub-symbolic' structures of Smolensky 1989, the 'quasi-pictures' of Kosslyn 1980, and the 'interpreted symbol-filled arrays' of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief.
The classicists hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists hold that mental representations are realized by patterns of activation in a network of simple processors ('nodes') and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism - 'localist' versions - on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program.
Classicists are motivated (in part) by properties thought seems to share with language. Jerry Alan Fodor's (1935- ) Language of Thought Hypothesis (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
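A minimal sketch, under assumptions of my own (the primitives and the single formation rule are hypothetical), of how a finite stock of primitives plus a recursive formation rule yields a productive system with compositional content:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Prim:                  # a primitive representational state
        name: str

    @dataclass
    class And:                   # one recursive formation rule: conjunction
        left: "Expr"
        right: "Expr"

    Expr = Union[Prim, And]

    def content(e):
        """Compositional semantics: the content of a complex is fixed by the
        contents of its constituents and their structural configuration."""
        if isinstance(e, Prim):
            return e.name
        return f"({content(e.left)} and {content(e.right)})"

    # A few primitives already generate indefinitely many complexes.
    thought = And(Prim("grass is green"), And(Prim("snow is white"), Prim("it is raining")))
    print(content(thought))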
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the connectionist model it is a matter of an evolving distribution of 'weight' (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well.
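A toy sketch of the learning picture just described, on deliberately simplified assumptions (one output node, a handful of made-up training patterns, a delta-rule update): the network is 'trained up' by repeated exposure, and what it learns resides in the gradually adjusted weights rather than in any explicitly formulated hypothesis.

    weights = [0.0, 0.0, 0.0]            # connection strengths into one output node

    def activation(features, w):
        net = sum(f * wi for f, wi in zip(features, w))
        return 1.0 if net > 0.5 else 0.0

    # Hypothetical training pairs: feature patterns labelled 1.0 if 'horse-like'.
    examples = [([1, 1, 0], 1.0), ([0, 1, 1], 0.0), ([1, 0, 1], 1.0), ([0, 0, 1], 0.0)]

    learning_rate = 0.1
    for _ in range(50):                   # repeated exposure to the objects
        for features, target in examples:
            error = target - activation(features, weights)
            # delta rule: nudge each weight in proportion to the error
            weights = [wi + learning_rate * error * f for wi, f in zip(weights, features)]

    print(weights)                        # the acquired 'knowledge' resides in the weights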
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition - situations in which classical systems are relatively 'brittle' or 'fragile.'
Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (Several anthologies collect the central contemporary papers in the classicist/connectionist debate and provide useful introductory material as well.)
Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations; others accept the notion of mental representation, but deny that the computational theory of mind provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the system's components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
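A minimal sketch, under assumptions of my own (two made-up state variables and hand-picked coefficients), of the kind of picture van Gelder has in mind: the system's state evolves continuously, with each variable's change determined simultaneously by the others, and no rule-governed manipulation of discrete symbols occurs.

    def step(x, y, dt=0.01):
        # each variable's rate of change depends on the other: mutual determination
        dx = -0.5 * x + 1.2 * y
        dy = -0.8 * y + 0.3 * x
        return x + dx * dt, y + dy * dt

    x, y = 1.0, 0.0                       # an initial total state of the system
    for _ in range(1000):                 # the state evolves continuously (Euler steps)
        x, y = step(x, y)

    print(round(x, 4), round(y, 4))       # the evolved state: quantities, not symbol structures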
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. Computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that you took to sniffing snuff. I am thinking about you, and if what I think of you (that you take snuff) is true of you, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that you take snuff is to token in some way a mental representation whose content is that you take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that you take snuff. I am talking about you, and if what I say of you (that you take snuff) is true of you, then my utterance is true. Now, to say that you take snuff is (in part) to utter a sentence that means that you take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express. On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
It is also widely held that in addition to having such properties as reference, truth-conditions and truth - so-called extensional properties - expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frége 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
Theories of representational content may be classified according to whether they are atomistic or holistic and according to whether they are externalistic or internalistic. Holism emphasizes the priority of a whole over its parts. In the philosophy of language, this becomes the claim that the meaning of an individual word or sentence can only be understood in terms of its relation to an indefinitely larger body of language, such as a whole theory, or even a whole language or form of life. In the philosophy of mind, a mental state similarly may be identified only in terms of its relations with others. Moderate holism may allow that other things besides these relationships also count; extreme holism would hold that a network of relationships is all that we have. A holistic view of science holds that experience only confirms or disconfirms large bodies of doctrine, impinging at the edges, and leaving some leeway over the adjustment that it requires.
Externalism, in the philosophy of mind and language, is the view that what is thought, or said, or experienced, is essentially dependent on aspects of the world external to the mind of the subject. The view goes beyond holding that such mental states are typically caused by external factors, to insist that they could not have existed as they now do without the subject being embedded in an external world of a certain kind. It is these external relations that make up the essence or identity of the mental state. Externalism is thus opposed to the Cartesian separation of the mental from the physical, since that holds that the mental could in principle exist as it does even if there were no external world at all. Various external factors have been advanced as ones on which mental content depends, including the usage of experts, the linguistic norms of the community, and the general causal relationships of the subject. In the theory of knowledge, externalism is the view that a person might know something by being suitably situated with respect to it, without that relationship being in any sense within his purview. The person might, for example, be very reliable in some respect without believing that he is. The view allows that you can know without being justified in believing that you know.
However, atomistic theories take a representation's content to be something that can be specified independently of that representation's relations to other representations. What the American philosopher of mind Jerry Alan Fodor (1935- ) calls the crude causal theory, for example, takes a representation to be a COW - a mental representation with the same content as the word 'cow' - if its tokens are caused by instantiations of the property of being-a-cow, and this is a condition that places no explicit constraints on how COWs must or might relate to other representations. Holistic theories contrast with atomistic theories in taking the relations a representation bears to others to be essential to its content. According to functional role theories, a representation is a COW if it behaves as a COW should behave in inference.
Internalist theories take the content of a representation to be a matter determined by factors internal to the system that uses it. Thus, what Block (1986) calls 'short-armed' functional role theories are Internalist. Externalist theories take the content of a representation to be determined, in part at least, by factors external to the system that uses it. Covariance theories, as well as teleological theories that invoke an historical theory of functions, take content to be determined by 'external' factors. Crossing the atomist-holistic distinction with the Internalist-externalist distinction thus yields four possible types of theory of content.
Externalist theories (sometimes called non-individualistic theories) have the consequence that molecule-for-molecule identical cognitive systems might yet harbour representations with different contents. This has given rise to a controversy concerning 'narrow' content. If we assume some form of externalist theory is correct, then content is, in the first instance, 'wide' content, i.e., determined in part by factors external to the representing system. On the other hand, it seems clear that, on plausible assumptions about how to individuate psychological capacities, internally equivalent systems must have the same psychological capacities. Hence, it would appear that wide content cannot be relevant to characterizing psychological equivalence. Since cognitive science generally assumes that content is relevant to characterizing psychological equivalence, philosophers attracted to externalist theories of content have sometimes attempted to introduce 'narrow' content, i.e., an aspect or kind of content that is shared by internally equivalent systems. The simplest such theory is Fodor's idea (1987) that narrow content is a function from contexts (i.e., from whatever the external factors are) to wide contents.
All the same, what a person expresses by a sentence is often a function of the environment in which he or she is placed. For example, the disease I refer to by a term like 'arthritis', or the kind of tree I refer to as a 'maple', will be defined by criteria of which I know next to nothing. This raises the possibility of imagining two persons in rather different environments, but to whom everything appears the same. The wide content of their thoughts and sayings will be different if the situation surrounding them is appropriately different: 'situation' may include the actual objects they perceive, or the chemical or physical kinds of object in the world they inhabit, or the history of their words, or the decisions of authorities on what counts as an example of one of the terms they use. The narrow content is that part of their thought which remains identical, given the identity of the way things appear to them, regardless of these differences of surroundings. Partisans of wide content may doubt whether any content is in this sense narrow; partisans of narrow content believe that it is the fundamental notion, with wide content being explicable in terms of narrow content plus context.
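A minimal sketch of Fodor's proposal as stated above - narrow content as a function from contexts to wide contents - using the familiar Twin-Earth contrast; the particular contexts and contents are hypothetical placeholders, not an analysis anyone has offered in these terms.

    def narrow_content_of_water_thought(context):
        """Maps a context (the relevant external factors) to a wide content."""
        wide_contents = {
            "Earth": "a thought about H2O",
            "Twin Earth": "a thought about XYZ",
        }
        return wide_contents.get(context, "no determinate wide content in this context")

    # Molecule-for-molecule identical thinkers share the narrow content (the same
    # function); their wide contents differ because their surroundings differ.
    print(narrow_content_of_water_thought("Earth"))
    print(narrow_content_of_water_thought("Twin Earth"))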
Even so, the distinction between facts and values has outgrown its name: it applies not only to matters of fact vs. matters of value, but also to statements that something is, vs. statements that something ought to be. Roughly, factual statements - 'is statements' in the relevant sense - represent some state of affairs as obtaining, whereas normative statements - evaluative and deontic ones - attribute goodness to something, or ascribe, to an agent, an obligation to act. Neither distinction is merely linguistic. Specifying a book's monetary value is making a factual statement, though it attributes a kind of value. 'That is a good book' expresses a value judgement though the term 'value' is absent (nor would 'valuable' be synonymous with 'good'). Similarly, 'we are morally obligated to fight' superficially has the form of a factual statement, and 'By all indications it ought to rain' makes a kind of ought-claim; but the former is an ought-statement, the latter an (epistemic) is-statement.
Theoretical difficulties also beset the distinction. Some have absorbed values into facts, holding that all value is instrumental: roughly, to have value is to contribute - in a factually analysable way - to something further which is (say) deemed desirable. Others have suffused facts with values, arguing that facts (and observations) are 'theory-impregnated' and contending that values are inescapable in theoretical choice. But while some philosophers doubt that fact/value distinctions can be sustained, there persists a sense of a deep difference between evaluating or attributing an obligation, on the one hand, and saying how the world is, on the other.
Fact/value distinctions may be defended by appeal to the notion of intrinsic value, the value a thing has in itself and thus independently of its consequences. Roughly, a value statement (proper) is an ascription of intrinsic value, one to the effect that a thing is to some degree good in itself. This leaves open whether ought-statements are implicitly value statements, but even if they imply that something has intrinsic value - e.g., moral value - they can be independently characterized, say by appeal to rules that provide (justifying) reasons for action. One might also ground the fact/value distinction in the attitudinal (or even motivational) component apparently implied by the making of valuational or deontic judgements: thus, 'it is a good book, but that is no reason for a positive attitude towards it' and 'you ought to do it, but there is no reason to' seem inadmissible, whereas substituting 'an expensive book' and 'you will do it' yields permissible judgements. One might also argue that factual judgements are the kind which are in principle appraisable scientifically, and thereby anchor the distinction on the factual side. This line is plausible, but there is controversy over whether scientific procedures are 'value-free' in the required way.
Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalizable, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulations - a belief is justified, or constitutes knowledge. Its standards of justification, then, would be like standards of, e.g., resilience for bridges. It is not obvious, however, that appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.
Thus far, belief has been depicted as all-or-nothing. A related notion is acceptance: accepting a proposition is holding it as something for which one takes oneself to have grounds for thinking it true. Acceptance is governed by epistemic norms, is partially subject to voluntary control, and has functional affinities to belief. Still, the notion of acceptance, like that of degrees of belief, merely extends the standard picture, and does not replace it.
Traditionally, belief has been of epistemological interest in its propositional guise: 'S' believes that 'p', where 'p' is a proposition towards which an agent, 'S', exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mr. Radek, or in a free-market economy, or in God. It is sometimes supposed that all belief is 'reducible' to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that free-market economies are desirable or that God exists.
Some philosophers have followed St. Thomas Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold, while others argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.
The moral philosopher Richard Price (1723-91) defends the claim that there are different sorts of belief-in, some, but not all, reducible to beliefs-that. If you believe in God, you believe that God exists, that God is good, etc. But according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. Even so, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.
Belief-in may be, in general, less susceptible to alteration in the face of unfavourable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as that pro-attitude remains united with his belief that God exists - and reasonably so - the belief-in may survive in a way that an ordinary propositional belief would not.
A correlative way of elaborating the general objection to justificatory externalism challenges the sufficiency of the various externalist conditions by citing cases where those conditions are satisfied, but where the believers in question seem intuitively not to be justified. In this context, the most widely discussed examples have to do with possible occult cognitive capacities, like clairvoyance. Applying the point once again to reliabilism, the claim is that a believer who has no reason to think that he has such a cognitive power, and perhaps even has good reasons to the contrary, is not rational or responsible, and therefore not epistemically justified, in accepting the beliefs that result from his clairvoyance, despite the fact that the reliabilist condition is satisfied.
One sort of response to this latter sort of objection is to 'bite the bullet' and insist that such believers are in fact justified, dismissing the seeming intuitions to the contrary as latent Internalist prejudice. A more widely adopted response attempts to impose additional conditions, usually of a roughly Internalist sort, which will rule out the offending examples while stopping far short of a full internalism. But, while there is little doubt that such modified versions of externalism can handle particular cases well enough to avoid clear intuitive implausibility, it remains questionable whether there are not further problematic cases that they cannot handle, and also whether there is any clear motivation for the additional requirements other than the general Internalist view of justification that the externalist is committed to rejecting.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism, holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., a pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, that further fact need not be in any way grasped or cognitively accessible to the believer. In effect, of the premises needed to argue that a particular belief is likely to be true, only one must be accessible in a way that would satisfy at least weak internalism. The Internalist will respond that this hybrid view is of no help at all in meeting the objection: the belief is still not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premise, still has no reason at all for thinking that his belief is likely to be true.
An alternative to giving an externalist account of epistemic justification, one which may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief which satisfies the chosen externalist condition, e.g., is the result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an Internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, though not the weaker conviction (if such a conviction does exist) that such individuals are epistemically justified in their beliefs. It is, at least, less vulnerable to Internalist counter-examples of the sort discussed, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, does it have any serious bearing on traditional epistemological problems and on the deepest and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge?
A rather different use of the terms 'internalism' and 'externalism' has to do with the issue of how the content of beliefs and thoughts is determined. According to an Internalist view of content, the content of such intentional states depends only on the non-relational, internal properties of the individual's mind or brain, and not at all on his physical and social environment; while according to an externalist view, content is significantly affected by such external factors. (Views on which content depends on both internal and external elements are standardly classified as externalist.)
As with justification and knowledge, the traditional view of content has been strongly Internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural kind terms, indexicals, etc. that motivate the views that have come to be known as 'direct reference' theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about his environment - e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by experts in his social group, etc. - and not just on what is going on internally in his mind or brain.
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts 'from the inside', simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors - which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that belief, and the status of that belief as justifying further beliefs, will be similarly inaccessible, thus contravening the Internalist requirement for justification. An Internalist must insist that there are no justification relations of these sorts - that only internally accessible content can either be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Except for alleged cases of self-evident truths, it is often thought that anything that is known must satisfy certain criteria as well as being true. These criteria are general principles that make a proposition evident or at least make accepting it warranted to some degree. Common suggestions for this role include: if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, if 'p' coheres with the bulk of one's beliefs, then 'p' is warranted. These might be criteria whereby putative self-evident truths, e.g., that one clearly and distinctly conceives 'p', 'transmit' the status as evident that they already have without criteria to other propositions; or they might be criteria whereby purely non-epistemic considerations, e.g., facts about logical connections or about conception that need not themselves be evident or warranted, originally 'create' p's epistemic status. If that status in turn can be 'transmitted' to other propositions, e.g., by deduction or induction, there will be criteria specifying when it is.
Traditionally suggested criteria include: (1) if a proposition 'p', e.g., that 2 + 2 = 4, is clearly and distinctly conceived, then 'p' is evident; or, more simply, (2) if we cannot conceive 'p' to be false, then 'p' is evident; or (3) whatever we are immediately conscious of in thought or experience, e.g., that we seem to see red, is evident. These might be criteria whereby putative self-evident truths 'transmit' the status as evident that they already have for one without criteria to other propositions. Alternatively, they might be criteria whereby epistemic status, e.g., p's being evident, is originally created by purely non-epistemic considerations, e.g., facts about how 'p' is conceived, which are neither self-evident nor already criterially evident.
The result, however, is that traditional criteria do not seem to make evident propositions about anything beyond our own thoughts, experiences, and necessary truths, to which deductive or inductive criteria may then be applied. Moreover, arguably, inductive criteria, including criteria warranting the best explanation of the data, never make things evident or warrant their acceptance strongly enough to count as knowledge.
Contemporary epistemologists suggest that traditional criteria may need alteration in three ways. Additional evidence may subject even our most basic judgements to rational correction, though they count as evident on the basis of our criteria. Warrant may be transmitted other than through deductive and inductive relations between propositions. Transmission criteria might not simply ‘pass’ evidence on linearly from a foundation of highly evident ‘premisses’ to ‘conclusions’ that are never more evident.
An argument is a group of statements, some of which purportedly provide support for another. The statements that purportedly provide the support are the premisses, while the statement purportedly supported is the conclusion. Arguments are typically divided into two categories depending on the degree of support they purportedly provide. Deductive arguments purportedly provide conclusive support for their conclusions, while inductive arguments purportedly provide only probable support. Some, but not all, arguments succeed in providing support for their conclusions. Successful deductive arguments are valid, while successful inductive arguments are strong. An argument is valid just in case it is impossible for all its premisses to be true while its conclusion is false; an argument is strong just in case, if all its premisses are true, its conclusion is probably true. Deductive logic provides methods for ascertaining whether or not an argument is valid, whereas inductive logic provides methods for ascertaining the degree of support the premisses of an argument confer on its conclusion.
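To make the notion of deductive validity concrete, the following sketch (an illustration added here, not part of the original text) checks a simple propositional argument, modus ponens, by enumerating every truth-value assignment and looking for a counterexample in which all premisses are true and the conclusion false:

```python
# Minimal truth-table check of deductive validity for a propositional argument.
# The argument tested is modus ponens: premisses {P -> Q, P}, conclusion Q.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def is_valid(premisses, conclusion, n_vars: int) -> bool:
    """Valid iff no assignment makes every premiss true and the conclusion false."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premisses) and not conclusion(*values):
            return False  # counterexample found: premisses true, conclusion false
    return True

premisses = [lambda p, q: implies(p, q),  # P -> Q
             lambda p, q: p]              # P
conclusion = lambda p, q: q               # Q

print(is_valid(premisses, conclusion, n_vars=2))  # True: modus ponens is valid
```

By contrast, inductive strength admits of degrees and cannot be settled by such an enumeration; it concerns how probable the conclusion is given the premisses.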
Finally, a proof is a collection of considerations and reasonings that instill and sustain conviction that some proposed theorem, the theorem proved, is not only true but could not possibly be false. A perceptual observation may instill the conviction that this water is cold. But a proof that 2 + 3 = 5 must not only instill the conviction that it is true that 2 + 3 = 5, but also that 2 + 3 could not be anything other than 5.
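As a minimal illustration of what a fully formal derivation of such an arithmetic fact looks like (a sketch added here for contrast with the discussion that follows; it is not drawn from the original text), here is the statement checked in the Lean proof assistant:

```lean
-- 2 + 3 = 5 holds by computation on the natural numbers:
-- both sides reduce to the same numeral, so reflexivity (`rfl`) closes the goal.
example : 2 + 3 = 5 := rfl
```

Such a derivation is entirely mechanical, which is precisely the feature the next paragraph contrasts with proofs as mathematicians ordinarily give them.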
No one has succeeded in replacing this largely psychological characterization of proofs with a more objective one. Reconstructions of proofs as mechanical, symbolic derivations in formal-logical systems all but completely fail to capture 'proofs' as mathematicians are quite content to give them. For example, formal-logical derivations depend solely on the logical form of the propositions considered, whereas ordinary proofs depend in large measure on the content of the propositions, not just their logical form.
EVOLVING PRINCIPLES OF THOUGHT
THE HUMAN CONDITION
BOOK TWO
A period of geologic time is a unit of time that geologists use to divide the earth's history. On the geologic time scale, a period is longer than an epoch and shorter than an era. The earth is about 4.5 billion years old, and earth scientists divide its age into shorter blocks of time. The largest of these are eons, of which there are three in the earth's history. The last eon is formally divided into eras, which are made up of periods; many periods are in turn divided into epochs. Geological or biological events mark the beginnings and ends of some periods, but others are based on a convenient interval of time determined by radiometric dating.
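To make the nesting explicit, the following small sketch (added for illustration only, using divisions named later in this section) shows how an eon contains eras and an era contains periods:

```python
# Containment hierarchy of geologic time, illustrated with one branch.
# Names follow the divisions given in the surrounding text.
geologic_time = {
    "Phanerozoic Eon": {   # 570 million years before present to the present
        "Paleozoic Era": ["Cambrian Period", "Ordovician Period", "Silurian Period",
                          "Devonian Period", "Carboniferous Period", "Permian Period"],
        "Mesozoic Era":  ["Triassic Period", "Jurassic Period", "Cretaceous Period"],
        "Cenozoic Era":  ["Tertiary Period", "Quaternary Period"],
    },
}

# An eon contains eras; an era contains periods; many periods are further divided into epochs.
for eon, eras in geologic_time.items():
    for era, periods in eras.items():
        print(eon, ">", era, ">", ", ".join(periods))
```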
Geologists divide much of the earth's history into periods. Too little is known about pre-Archaean time, from the origin of the earth to 3.8 billion years ago, to divide it into units. The Archaean Eon (3.8 to 2.5 billion years before present) is not divided into periods. It marks a time in which the structure of the earth underwent many changes and the first life appeared on the earth. Rocks of the Archaean Eon contain very simple single-celled organisms called prokaryotes and early blue-green algae colonies called stromatolites. During the Proterozoic Eon (2.5 billion to 570 million years before present) the earth was partially covered alternately by shallow seas and ice sheets. Life advanced from the most basic single-celled organisms to plants and to animals that resembled some species still living today. Pre-Archaean time, the Archaean Eon, and the Proterozoic Eon make up what is called Precambrian time. The most recent eon of the earth is the Phanerozoic (570 million years before present to the present). During this eon, the earth and life on it gradually changed to their present state.
Some scientists divide the Proterozoic Eon into four eras and at least ten periods, though these divisions are not universally accepted. The eras defined for the Proterozoic are the Huronian Era (2.5 billion to 2.2 billion years before present), the Animikean Era (2.2 billion to 1.65 billion years before present), the Riphean Era (1.65 billion to 800 million years before present), and the Sinian Era (800 million to 570 million years before present). The four informally recognized periods of the Huronian Era are the Elliot Lake Period, the Hough Lake Period, the Quirke Lake Period, and the Cobalt Period, from oldest to youngest. These periods correspond only to deposits in a region of Canada around Lake Superior and have no definite time correlation. A comparison of rocks of the Elliot Lake and Cobalt periods shows that oxygen levels in the atmosphere rose during the Huronian Era. The Hough Lake, Quirke Lake, and Cobalt periods all begin with times of glaciation.
The Animikean Era has only one informally recognized period, the Gunflint Period, which lasted from about 2.2 billion to about 2 billion years before present. Rocks of the Gunflint Period contain many species of microbes and stromatolites.
The Riphean Era has three informal periods. The oldest period is the Burzian Period (1.65 billion to 1.35 billion years before present), followed by the Yurmatin Period (1.35 billion to 1.05 billion years before present) and then the Karatau Period (from 1.05 billion to 800 million years before present). All three are named from sedimentary rocks in a section of the southern Ural Mountains in Russia.
The Sinian Era is divided into two informal geologic periods: the Sturtian Period (800 million to 610 million years before present) and the Vendian Period (610 million to 570 million years before present). The Sturtian is named from rocks in southern Australia that show two distinct glacial episodes. The Vendian is named from rocks in the southern Ural Mountains. The Vendian Period is divided into two epochs, the Varanger Epoch (about 610 million to 590 million years before present) and the Ediacara Epoch (590 million to 570 million years before present). Rocks from the Ediacara Epoch show the first fossils of complex organisms.
The Phanerozoic Eon is the most recent eon of the earth and is divided into the Paleozoic Era (570 million to 240 million years before present), the Mesozoic Era (240 million to 65 million years before present), and the Cenozoic Era (65 million years before present to the present).
The periods of the Paleozoic Era are the Cambrian Period (570 million to 500 million years before present), the Ordovician Period (500 million to 435 million years before present), the Silurian Period (435 million to 410 million years before present), the Devonian Period (410 million to 360 million years before present), the Carboniferous Period (360 million to 290 million years before present), and the Permian Period (290 million to 240 million years before present). The rocks of the Paleozoic Era contain abundant and diverse fossils, so each period is marked by both geologic and biological events.
The rocks of the Cambrian Period contain many fossils of shelled animals such as trilobites, gastropods, and brachiopods that are not present in earlier rocks. The Ordovician Period is characterized by an abundance of extinct floating marine organisms called graptolites. One of the greatest mass extinctions of the Phanerozoic Eon occurred at the end of the Ordovician Period.
Rocks from the Silurian Period reveal the first evidence of plants and insects on land and the first fossils of fishes with jaws. In the Devonian Period, the first animals with backbones appeared on land. The Devonian was the first period to produce the substantial organic deposits that are used today as energy sources.
The rocks of the Carboniferous Period contain about one-half of the world’s coal supplies, created by the remains of the vast population of animals and plants of that period. Besides the abundance of terrestrial vegetation, the first winged insects appeared during the Carboniferous.
During the Permian Period all the continents on the earth came together to form one landmass, called Pangaea. The shallow inland seas of the Permian created an environment in which invertebrate marine life flourished. At the end of the period, one of the greatest extinctions in the earth’s history occurred, wiping out most species on the planet.
The Mesozoic Era is composed of the Triassic Period (240 million to 205 million years before present), the Jurassic Period (205 million to 138 million years before present), and the Cretaceous Period (138 million to 65 million years before present). During the Triassic Period, the super-continent Pangaea began to break apart. Dinosaurs first appeared during the Triassic, as did the earliest mammals.
The continents continued to break apart during the Jurassic Period. Reptiles, including the dinosaurs, flourished, taking over ecological niches on the land, in the sea, and in the air, while mammals remained small and rodent-like. The continents continued to drift toward their present locations during the Cretaceous Period. Another mass extinction, which killed off the large reptiles such as the dinosaurs, occurred near the end of the Cretaceous.
The Cenozoic Era is divided into the Tertiary Period (65 million to 1.6 million years before present) and the Quaternary Period (1.6 million years before present to the present). During the Tertiary Period the continents assumed their current positions. Mammals became the dominant life forms on the planet during this period, and the direct ancestors of humans appeared at the end of the Tertiary. The most recent ice age occurred during the Quaternary Period, and the first humans appeared during the Quaternary. The changing climate and melting of the glaciers, possibly combined with hunting by humans, drove many large mammals of the early Quaternary to extinction, making way for the animal life on the earth today.
The Precambrian is a time span that includes the Archaean and Proterozoic eons and reaches back roughly four billion years. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life, and it represents the oldest chapter in Earth's history that can still be studied. Notably little survives of Earth from the period of 4.6 billion to about four billion years ago, because of the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years.
The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this time frame, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet because of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapour. As Earth continued to cool, the water vapour condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapour originally came from comets containing frozen water that struck Earth during meteorite bombardment.
By studying 2-billion-year-old rocks found in northwestern Canada and 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth's surface as early as the middle Precambrian. About a billion years ago, the Earth's plates were centred around the South Pole and formed a super-continent called Rodinia. Slowly, pieces of this super-continent broke away from the central continent and travelled north, forming smaller continents.
Life originated during the Precambrian. The earliest fossil evidence of life consists of prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote's genetic heritage was passed on essentially unaltered. The first prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, and certainly by about 3.5 billion years ago, and that they were anaerobic, that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth.
Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as cyanobacteria, or blue-green algae. These cyanobacteria gradually introduced oxygen into the atmosphere through photosynthesis. In shallow tropical waters, cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old. Similarly, stromatolite-bearing rocks found in the Gunflint Chert region northwest of Lake Superior are about 2.1 billion years old.
For billions of years, life existed only in the simple form of prokaryotes. Prokaryotes were eventually followed by more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or sharing their hereditary makeup rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The latest part of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. These were first discovered in the northern Flinders Ranges region of Australia in the mid-1940s, and subsequent findings in many locations throughout the world suggest that these strange fossils might be the precursors of many fossil groups that were to explode in Earth's oceans in the Paleozoic Era.
At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion; nothing like it has happened since. Most of the major groups of animals known today made their first appearance during the Cambrian explosion. Most of the different 'body plans' found in animals today, that is, the ways an animal's body can be organized, with heads, legs, rear ends, claws, tentacles, or antennae, also originated during this period.
Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species could flourish, while others failed. In fact, mass extinctions of animal and plant species were commonplace.
Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the super-continent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built numerous mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe.
Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the super-continent Pangaea and produced some of the grandest mountains in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented more than 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouth-like opening that faced east. This opening was the Tethys Ocean, which would later close as India moved northward, creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today's Mediterranean Sea.
The Paleozoic Era came to an end with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out. The reason is not known for certain, but many scientists believe that huge volcanic outpourings of lava in central Siberia, perhaps coupled with an asteroid impact, were among the contributing factors.
The Mesozoic Era, which began approximately 248 million years ago, is often characterized as the Age of Reptiles because reptiles were the dominant life forms during this era. Reptiles dominated not only on land, as dinosaurs, but also in the sea, as the plesiosaurs and ichthyosaurs, and in the air, as pterosaurs, which were flying reptiles.
The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth's history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of vegetation; flowering plants soon replaced them as the dominant form during the Mesozoic Era.
The Mesozoic was a geologically eventful era, with many changes to Earth's surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era, but by the early Jurassic Period it began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America's western margin. This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. Much of this feature, called the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire.
During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.
Birds are members of a group of animals called vertebrates, which possess a spinal column or backbone. Other vertebrates are fish, amphibians, reptiles, and mammals. Many characteristics and behaviours of birds are distinct from those of all other animals, yet there are noticeable similarities. Like mammals, birds have four-chambered hearts and are warm-blooded, maintaining a relatively constant body temperature that enables them to live in a variety of environments. Like reptiles, birds develop from embryos in eggs outside the mother's body.
Birds are found worldwide in many habitats. They can fly over some of the highest mountains on earth and over both of the earth's poles, dive through water to depths of more than 250 m. (850 ft.), and occupy habitats with the most extreme climates on the planet, including arctic tundra and the Sahara Desert. Certain kinds of seabirds are commonly seen over the open ocean thousands of kilometres from the nearest land, but all birds must come ashore to raise their young.
Highly-developed animals, birds are sensitive and responsive, colourful and graceful, with habits that excite interest and inquiry. People have long been fascinated by birds, in part because birds are found in great abundance and variety in the same habitats in which humans thrive. Like people, most species of birds are active during daylight hours. Humans find inspiration in birds' capacity for flight and in their musical calls. Humans also find birds useful-their flesh and eggs for food, their feathers for warmth, and their companionship. Perhaps a key basis for our rapport with birds is the similarity of our sensory worlds: Both birds and humans rely more heavily on hearing and colour vision than on smell. Birds are useful indicators of the quality of the environment, because the health of bird populations mirrors the health of our environment. The rapid decline of bird populations and the accelerating extinction rates of birds in the world's forests, grasslands, wetlands, and islands are therefore reasons for great concern.
Birds vary in size from the tiny bee hummingbird, which measures about 57 mm. (about 2.25 in.) from beak tip to tail tip and weighs 1.6 g. (0.06 oz.), to the ostrich, which stands 2.7 m. (9 ft.) tall and weighs up to 156 kg. (345 lb.). The heaviest flying bird is the great bustard, which can weigh up to 18 kg. (40 lb.).
All birds are covered with feathers, collectively called plumage, which are specialized structures of the epidermis, or outer layer of skin. The main component of feathers is keratin, a flexible protein that also forms the hair and fingernails of mammals. Feathers provide the strong yet lightweight surface area needed for powered, aerodynamic flight. They also serve as insulation, trapping pockets of air to help birds conserve their body heat. The varied patterns, colours, textures, and shapes of feathers help birds to signal their age, sex, social status, and species identity to one another. Some birds have plumage that blends in with their surroundings to provide camouflage, helping these birds escape notice by their predators. Birds use their beaks to preen their feathers, often using oil from a gland at the base of their tails. Preening removes dirt and parasites and keeps feathers waterproof and supple. Because feathers are nonliving structures that cannot repair themselves when worn or broken, they must be renewed periodically. Most adult birds molt, losing and replacing their feathers, at least once a year.
Bird wings are highly modified forelimbs with a skeletal structure resembling that of arms. Wings may be long or short, round or pointed. The shape of a bird’s wings influences its style of flight, which may consist of gliding, soaring, or flapping. Wings are powered by flight muscles, which are the largest muscles in birds that fly. Flight muscles are found in the chest and are attached to the wings by large tendons. The breastbone, a large bone shaped like the keel of a boat, supports the flight muscles.
Nearly all birds have a tail, which helps them control the direction in which they fly and plays a role in landing. The paired flight feathers of the tail, called rectrices, extend from the margins of a bird's tail. Smaller feathers called coverts lie on top of the rectrices. Tails may be square, rounded, pointed, or forked, depending on the lengths of the rectrices and the way they end. The shapes of bird tails vary more than the shapes of wings, possibly because tail shape is less critical to flight than wing shape. Many male birds, such as pheasants, have ornamental tails that they use to attract mating partners.
Birds have two legs; the lower part of each leg is called the tarsus. Most birds have four toes on each foot, and in many birds, including all songbirds, the first toe, called the hallux, points backwards. Bird toes are adapted in various species for grasping perches, climbing, swimming, capturing prey, and carrying and manipulating food.
Instead of heavy jaws with teeth, modern birds have toothless, lightweight jaws, called beaks or bills. Unlike humans or other mammals, birds can move their upper jaws independently of the rest of their heads. This helps them to open their mouths extremely wide. Beaks occur in a wide range of shapes and sizes, depending on the type of food a bird eats.
The eyes of birds are large and provide excellent vision. They are protected by three eyelids: An upper lid resembling that of humans, a lower lid that closes when a bird sleeps, and a third lid, called a nictitating membrane, that sweeps across the eye sideways, starting from the side near the beak. This lid is a thin, translucent fold of skin that moistens and cleans the eye and protects it from wind and bright light.
The ears of birds are completely internal, with openings placed just behind and below the eyes. In most birds, textured feathers called auriculars form a protective screen that prevents objects from entering the ear. Birds rely on their ears for hearing and for balance, which is especially critical during flight. Two groups of birds, cave swiftlets and oilbirds, find their way in dark places by echolocation-making clicks or rattle calls and interpreting the returning echoes to obtain clues about their environment.
The throats of nearly all birds contain a syrinx (plural, syringes), an organ that is comparable to the voice box of mammals. The syrinx has two membranes that produce sound when they vibrate. Birds classified as songbirds have a particularly well-developed syrinx. Some songbirds, such as the wood thrush, can control each membrane independently; in this way they can sing two songs simultaneously.
Birds have well-developed brains, which provide acute sensory perception, keen balance and coordination, and instinctive behaviours, along with a surprising degree of intelligence. Parts of the bird brain that are especially developed are the optic lobes, where nerve impulses from the eyes are processed, and the cerebellum, which coordinates muscle actions. The cerebral cortex, the part of the brain responsible for thought in humans, is primitive in birds. However, birds have a hyperstriatum, a forebrain component that mammals lack. This part of the brain helps songbirds to learn their songs, and scientists believe that it may also be the source of bird intelligence.
The internal body parts of all birds, including flightless ones, reflect the evolution of birds as flying creatures. Birds have lightweight skeletons in which many major bones are hollow. A unique feature of birds is the furculum, or wishbone, which is comparable to the collarbones of humans, although in birds the left and right portions are fused. The furculum absorbs the shock of wing motion and acts as a spring to help birds breathe while they fly. Several anatomical adaptations help to reduce weight and concentrate it near the centre of gravity. For example, modern birds are toothless, which helps reduce the weight of their beaks, and food grinding is carried out in the muscular gizzard, a part of the stomach near the body's core. The egg-laying habit of birds enables their young to develop outside the body of the female, significantly lightening her load. For further weight reduction, the reproductive organs of birds atrophy, or become greatly reduced in size, except during the breeding season.
Flight, especially taking off and landing, requires a huge amount of energy, more than humans need even for running. Taking flight is less demanding for small birds than it is for large ones, but small birds need more energy to stay warm. In keeping with their enormous energy needs, birds have an extremely fast metabolism, which includes the chemical reactions involved in releasing stored energy from food. The high body temperature of birds, 40° to 42° C. (104° to about 108° F.), provides an environment that supports rapid chemical reactions.
To sustain this high-speed metabolism, birds need an abundant supply of oxygen, which combines with food molecules within cells to release energy. The respiratory, or breathing, system of birds is adapted to meet their special needs. Unlike humans, birds have lungs with an opening at each end: new air enters the lungs at one end, and used air goes out the other. The lungs are connected to a series of air sacs, which simplify the movement of air. Birds breathe faster than any other animal. For example, a flying pigeon breathes 450 times each minute, whereas a human, when running, might breathe only about 30 times each minute.
The circulatory system of birds also functions at high speed. Blood vessels pick up oxygen in the lungs and carry it, along with nutrients and other substances essential to life, to all of a bird’s body tissues. In contrast to the human heart, which beats about 160 times per minute when a person runs, a small bird’s heart beats between 400 and 1,000 times per minute. The hearts of birds are proportionately larger than the hearts of other animals. Birds that migrate and those that live at high altitudes have larger hearts, compared with their body size, than other birds.
The characteristic means of locomotion in birds is flight. However, birds are also variously adapted for movement on land, and some are excellent swimmers and divers.
Like aeroplanes, birds rely on lift-an upward force that counters gravity-to fly. Birds generate lift by pushing down on the air with their wings. This action causes the air, in return, to push the wings up. The shape of wings, which have an upper surface that is convex and a lower surface that is concave, contributes to this effect. To turn, birds often tilt so that one wing is higher than the other.
Different wing shapes adapt birds for different styles of flight. The short, rounded wings and strong breast muscles of quail are ideal for short bursts of powered flight. Conversely, the albatross’s long narrow wings enable these birds to soar effortlessly over windswept ocean surfaces. The long, broad wings of storks, vultures, and eagles provide excellent lift on rising air currents.
Feathers play a crucial role in flight. The wings and tails of birds bear flight feathers, the largest and strongest type of feathers, which contribute to lift. Because each of the flight feathers is connected to a muscle, birds can adjust the position of each feather individually. As a bird pushes down on the air with its wings, its flight feathers overlap to prevent air from passing through. The same feathers twist open on the upstroke, so that air flows between them and less effort is needed to lift the wings.
Feathers also help to reduce drag, a force of resistance that acts on solid bodies moving through air. Contour feathers, which are the most abundant type of feather, fill and cover angular parts of a bird’s body, giving birds a smooth, aerodynamic form.
Bird tails are also important to flight. Birds tip their tail feathers in different directions to achieve stability and to help change direction while flying. When soaring, birds spread their tail feathers to obtain more lift. When landing, birds turn their tails downward, so that their tails act like brakes.
Most birds can move their legs alternately to walk and run, and some birds are adept at climbing trees. Birds' agility on land varies widely among different species. The American robin both hops and walks, while the starling usually walks. The ostrich can run as fast as 64 km./h. (40 mph.). Swifts, however, can neither hop nor run; their weak feet are useful only for clinging to vertical surfaces, such as the walls of caves and houses.
Birds that walk in shallow water, such as herons and stilts, have long legs that simplify wading. Jacanas, which walk on lily pads and mud, have long toes and nails that disperse their weight to help prevent them from sinking. Penguins have stubby legs placed far back from their centre of gravity, so they can walk only with an upright posture and a short-stepping gait. When penguins need to move quickly, they 'toboggan' on their bellies, propelling themselves across ice with their wings and feet.
Many birds are excellent swimmers and divers, including such distantly related types of birds as grebes, loons, ducks, auks, cormorants, penguins, and diving petrels. Most of these birds have webbed or lobed toes that act as paddles, which they use to propel themselves underwater. Others, including auks and penguins, use their wings to propel themselves through the water. Swimming birds have broad, raft-like bodies that provide stability. They have dense feather coverings that hold pockets of air for warmth, but they can compress the air out of these pockets to reduce buoyancy when diving.
Many fish-catching birds can dive to great depths, either from the air or from the water’s surface. The emperor penguin can plunge into depths of more than 250 m. (850 ft.) and remain submerged for about 12 minutes. Some ducks, swans, and geese perform an action called dabbling, in which they tip their tails up and reach down with their beaks to forage on the mud beneath shallow water.
Like other animals, birds must eat, rest, and defend themselves against predators to survive. They must also reproduce and raise their young to contribute to the survival of their species. For many bird species, migration is an essential part of survival. Birds have acquired remarkably diverse and effective strategies for achieving these ends.
Birds spend much of their time feeding and searching for food. Most birds cannot store large reserves of food internally, because the extra weight would prevent them from flying. Small birds need to eat even more frequently than large ones, because they have a greater surface area in proportion to their weight and therefore lose their body heat more quickly. Some extremely small birds, such as hummingbirds, have so little food in reserve that they enter a state resembling hibernation during the night and rely on the warmth of the sun to energize them in the morning.
Depending on the species, birds eat insects, fish, meat, seeds, nectar, and fruit. Most birds are either carnivorous, meaning they eat other animals, or herbivorous, meaning they eat plant material. Many birds, including crows and gulls, are omnivorous, eating almost anything. Many herbivorous birds feed protein-rich animal material to their developing young. Some bird species have highly restricted diets, such as the Everglade kite, which feeds exclusively on snails.
Two unusual internal organs help birds to process food. The gizzard, which is part of a bird’s stomach, has thick muscular walls with hard inner ridges. It can crush large seeds and even shellfish. Some seed-eating birds swallow small stones so that the gizzard will grind food more efficiently. Birds that feed on nectar and soft fruits have poorly developed gizzards.
Most birds have a crop-a sac-like extension of the esophagus, the tubular organ through which food passes after leaving the mouth. Some birds store food in their crops and transport it to the place where they sleep. Others use the crop to carry food that they will later regurgitate to their offspring.
The bills of birds are modified in ways that help birds obtain and handle food. Nectar-feeders, such as hummingbirds, have long thin bills, which they insert into flowers, and specialized extensible or brushlike tongues, through which they draw up nectar. Meat-eating birds, including hawks, owls, and shrikes, have strong, hooked bills that can tear flesh. Many fish-eating birds, such as merganser ducks, have tooth-like ridges on their bills that help them to hold their slippery prey. The thick bills and strong jaw muscles of various finches and sparrows are ideal for crushing seeds. Woodpeckers use their bills as chisels, working into dead or living wood to find insect larvae and excavate nest cavities.
At least two species of birds use tools in obtaining food. One is the woodpecker finch, which uses twigs or leaf stalks to extract insects from narrow crevices in trees. The other is the Egyptian vulture, which picks up large stones in its bill and throws them at ostrich eggs to crack them open.
Birds need far less sleep than humans do. Birds probably sleep to relax their muscles and conserve energy, not to refresh their brains. Many seabirds, in particular, sleep very little. For example, the sooty tern, which rarely settles on the water, may fly for several years with only brief periods of sleep lasting a few seconds each. Flying is so efficient for the sooty tern and other seabirds that it demands relatively little energy.
Most birds are active during the day and sleep at night. Exceptions are birds that hunt at night, such as owls and nightjars. Birds use nests for sleeping only during the breeding season. The rest of the year, birds sleep in shrubs, on tree branches, in holes in trees, and on the bare ground. Most ducks sleep on the water. Many birds stand while they sleep, and some birds sleep while perched on a branch, sometimes on only one foot. These birds can avoid falling over because of a muscle arrangement that causes their claws to tighten when they bend their legs to relax.
To reproduce, birds must find a suitable mate, or mates, and the necessary resources-food, water, and nesting materials-for caring for their eggs and raising the hatched young to independence. Most birds mate during a specific season in a particular habitat, although some birds may reproduce in varied places and seasons, provided environmental conditions are suitable.
Most birds have monogamous mating patterns, meaning that one male and one female mate exclusively with each other for at least one season. However, some bird species are polygynous, meaning the males mate with more than one female, or polyandrous, meaning the females mate with more than one male. Among many types of birds, including some jays, several adults, rather than a single breeding pair, often help to raise the young within an individual nest.
Birds rely heavily on their two main senses, vision and hearing, in courtship and breeding. Among most songbirds, including the nightingale and the sky lark, males use song to establish breeding territories and attract mates. In many species, female songbirds may be attracted to males that sing the loudest, longest, or most varied songs. Many birds, including starlings, mimic the sounds of other birds. This may help males to achieve sufficiently varied songs to attract females.
Many birds rely on visual displays of their feathers to obtain a mating partner. For example, the blue bird of paradise hangs upside down from a tree branch to show off the dazzling feathers of its body and tail. A remarkable courtship strategy is exhibited by male bowerbirds of Australia and New Guinea. These birds attract females by building bowers for shelter, which they decorate with colourful objects such as flower petals, feathers, fruit, and even human-made items such as ribbons and tinfoil.
Among some grouse, cotingas, the small wading birds called shorebirds, hummingbirds, and other groups, males gather in areas called leks to attract mates through vocal and visual displays. Females visiting the leks select particularly impressive males, and often only one or a very few males effectively mate. Among western grebes, both males and females participate in a dramatic courtship ritual called rushing, in which mating partners lift their upper bodies far above the water and paddle rapidly to race side by side over the water’s surface. Although male birds usually court females, there are some types of birds, including the phalaropes, in which females court males.
Many birds establish breeding territories, which they defend from rivals of the same species. In areas where suitable nesting habitat is limited, birds may nest in large colonies. An example is the crab plover, which sometimes congregates by the thousands in areas of only about 0.6 hectares (about 1.5 acres).
For breeding, most birds build nests, which help them to incubate, or warm, the developing eggs. Nests sometimes offer camouflage from predators and physical protection from the elements. Nests may be elaborate constructions or mere scrapes on the ground. Some birds, including many shorebirds, incubate their eggs without any type of nest at all. The male emperor penguin of icy Antarctica incubates its single egg on top of its feet under a fold of skin.
Bird nests range in size from the tiny cups of hummingbirds to the huge stick nests of eagles, which may weigh a ton or more. Some birds, such as the mallee-fowl of southern Australia, use external heat sources, such as decaying plant material, to incubate their eggs. Many birds, including woodpeckers, use tree cavities for nests. Others, such as cowbirds and cuckoos, are brood parasites; they neither build nests nor care for their young. Instead, females of these species lay their eggs in the nests of birds of other species, so that the eggs are incubated, and the hatchlings raised, by the unsuspecting host birds.
Incubation by one or both parents works with the nest structure to provide an ideal environment for the eggs. The attending parent may warm the eggs with a part of its belly called the brood patch. Bird parents may also wet or shade the eggs to prevent them from overheating.
The size, shape, colour, and texture of a bird egg are specific to each species. Eggs provide an ideal environment for the developing embryo. The shells of eggs are made of calcium carbonate and contain thousands of pores through which water can evaporate and air can seep in, enabling the developing embryo to breathe. The number of eggs in a clutch (the egg or eggs laid by a female bird in one nesting effort) may be 15 or more for some birds, including pheasants. In contrast, some large birds, such as condors and albatrosses, may lay only a single egg every two years. The eggs of many songbirds hatch after developing for as few as ten days, whereas those of albatrosses and kiwis may require 80 days or more.
Among some birds, including songbirds and pelicans, newly hatched young are featherless, blind, and incapable of regulating their body temperature. Many other birds, such as ducks, are born covered with down and can feed themselves within hours after hatching. Depending on the species, young birds may remain in the nest for as little as part of a day or as long as several months. Fledglings (young that have left the nest) may still rely on parental care for many days or weeks. Only about 10 percent of birds survive their first year of life; the rest die of starvation, disease, predators, or inexperience with the behaviours necessary for survival. The age at which birds begin to breed varies from less than a year in many songbirds and some quail to ten years or more in some albatrosses. The life spans of birds in the wild are poorly known. Many small songbirds live only three to five years, whereas some albatrosses are known to have survived more than 60 years in the wild.
The keen eyesight and acute hearing of birds help them react quickly to predators, which may be other birds, such as falcons and hawks, or other types of animals, such as snakes and weasels. Many small birds feed in flocks, where they can benefit from the observing power of multiple pairs of eyes. The first bird in a flock to spot a predator usually warns the others with an alarm call.
Birds that feed alone commonly rely on camouflage and rapid flight as means of evading predators. Many birds have highly specific and unusual defence strategies. The burrowing owl in North America, which lives in the burrows of ground squirrels, frightens away predators by making a call that sounds much like a rattlesnake. The snipe, a wading bird, flees from its enemies with a zigzag flight pattern that is hard for other birds to follow.
Many bird species undergo annual migrations, travelling between seasonally productive habitats. Migration helps birds to have continuous sources of food and water and to avoid environments that are too hot or too cold. The most spectacular bird migrations are made by seabirds, which fly across oceans and along coastlines, sometimes travelling 32,000 km. (20,000 mi.) or more in a single year. Migrating birds use a variety of cues to find their way. These include the positions of the sun during the day and the stars at night; the earth's magnetic field; and visual, olfactory, and auditory landmarks. The strict formations in which many birds fly help them on the journey. For example, migrating geese travel in a V-shaped formation, which enables all of the geese except the leader to take advantage of the updrafts generated by the flapping wings of the goose in front. Young birds of many species undertake their first autumn migration with no guidance from experienced adults. These inexperienced birds do not necessarily reach their destinations; many stray in the wrong direction and are sometimes observed thousands of kilometres away from their normal route.
There are nearly 10,000 known species of modern or recently extinct birds. Traditionally, taxonomists (those who classify living things based on evolutionary relationships) have looked at bird characteristics such as skeletal structure, plumage, and bill shape to determine which birds have a shared evolutionary history. More recently, scientists have turned to deoxyribonucleic acid (DNA), the genetic information found in the cells of all living organisms, for clues about relationships among birds. DNA is useful to bird taxonomists because closely related birds have more similar DNA than do groups of birds that are distantly related. DNA comparisons have challenged some of scientists' previous ideas about relationships among birds. For example, these studies have revealed that the vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa.
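The intuition behind DNA-based classification can be pictured with a small sketch (added here for illustration; the sequences below are invented toy data, not real bird DNA): species that diverged more recently differ at fewer aligned positions.

```python
# Toy illustration of sequence similarity as a proxy for relatedness.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions at which two equal-length sequences match."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Invented aligned fragments, for illustration only.
new_world_vulture = "ATGGCCTTAACG"
stork             = "ATGGCCTTGACG"
old_world_vulture = "ATGACCCTAAGG"

print(percent_identity(new_world_vulture, stork))              # higher similarity
print(percent_identity(new_world_vulture, old_world_vulture))  # lower similarity
```

Real analyses work with far longer sequences and statistical models of change, but the underlying comparison is of this kind.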
Another method of categorizing birds focuses on adaptive types, or lifestyles. This system groups together birds that live in similar environments or have similar methods for obtaining food. Even within a given adaptive type, birds show tremendous diversity.
Aquatic birds obtain most or all of their food from the water. All aquatic birds that live in saltwater environments have salt glands, which enable them to drink seawater and excrete the excess salt. Albatross, shearwaters, storm petrels, and diving petrels are considered the most exclusively aquatic of all birds. These birds spend much of their time over the open ocean, well away from land.
Many other birds have aquatic lifestyles but live closer to land. Among these are penguins, which live in the southernmost oceans near the Antarctic. Some species of penguins spend most of their lives in the water, coming on land only to reproduce and molt. Grebes and divers, or loons, are found on or near lakes. Grebes are unusual among birds because they make their nests on the water, using floating plant materials that they hide among reeds. Pelicans, known for their long bills and huge throat pouches, often switch between saltwater and freshwater habitats during the year. Gulls are generalists among the aquatic birds, feeding largely by scavenging over open water, along shores, or even in inland areas. Waterfowl, a group that includes ducks, geese, and swans, often breed on freshwater lakes and marshes, although they sometimes make their homes in marine habitats.
Many long-legged, long-billed birds are adapted to live at the junction of land and water. Large wading birds, including herons, storks, ibises, spoonbills, and flamingoes, are found throughout the world, except near the poles. These birds wade in shallow water or across mudflats, wet fields, or similar environments to find food. Depending on the species, large wading birds may eat fish, frogs, shrimp, or microscopic marine life. Many large wading birds gather in enormous groups to feed, sleep, or nest. Shorebirds often inhabit puddles or other shallow bodies of water. The diversity of shorebirds is reflected in their varied bill shapes and leg lengths. The smallest North American shorebirds, called stints or peeps, have short, thin bills that enable them to pick at surface prey, whereas curlews probe with their long bills for burrowing shellfish and marine worms that are beyond the reach of most other shore feeders. Avocets and stilts have long legs and long bills, both of which help them to feed in deeper water.
Among the best-known birds are the birds of prey. Some, including hawks, eagles, and falcons, are active during the daytime. Others, notably owls, are nocturnal, or active at night. Birds of prey have hooked beaks, strong talons or claws on their feet, and keen eyesight and hearing. The larger hawks and eagles prey on small mammals, such as rodents, and on other vertebrates. Some birds of prey, such as the osprey and many eagles, eat fish. Falcons eat mainly insects, and owls, depending on the species, have diets ranging from insects to fish and mammals. Scavengers that feed on dead animals are also considered birds of prey. These include relatives of eagles called Old World vultures, which live in Eurasia and Africa, and the condors and vultures of North and South America.
Some birds, including the largest of all living birds, have lost the ability to fly. The ostriches and their relatives, the rheas, emus, cassowaries, and kiwis, are flightless birds found in Africa, South America, and the Australian region, including New Guinea and New Zealand. The tinamous of Central and South America are related to the ostrich group, but they have a limited ability to fly. Other birds that feed primarily on the ground and are excellent runners include the bustards (relatives of the cranes) and the megapodes, members of a group of chicken-like birds that includes quail, turkeys, pheasants, and grouse. Vegetation is an important part of the diets of these running birds.
More than half of all living species of birds are perching birds. Perching birds have been successful in all terrestrial habitats. Typically small, perching birds have a distinctive arrangement of toes and leg tendons that enables them to perch acrobatically on small twigs. They have the most highly developed and complex vocalizations of all birds. They are divided into two main groups: the sub-oscines, which are mainly tropical and include tyrant flycatchers, antbirds, and oven-birds, and the oscines, or songbirds, which make up about 80 percent of all perching bird species, among them the familiar sparrows, finches, warblers, crows, blackbirds, thrushes, and swallows. Some birds of this group catch and feed upon flying insects. An example is the swallow, which opens its mouth in a large trap-like gape to gather food. One exceptional group, the dippers, is aquatic; its members obtain their food during short dives in streams and rivers.
Many other groups of birds thrive in terrestrial habitats. Parrots, known for their brilliantly coloured plumage, form a distinctive group of tropical and southern temperate birds that inhabit woodlands and grasslands. Doves and pigeons, like parrots, are seed and fruit eaters but are more widespread and typically more subdued in colour. The cuckoos, including tree-dwelling species such as the European cuckoo, whose call is mimicked by the cuckoo clock, and ground-inhabiting species, such as roadrunners, are land birds. Hummingbirds are a group of nectar- and insect-feeding land birds whose range extends from Alaska to the tip of South America. Woodpeckers and their relatives thrive in forests. Kingfishers are considered land birds despite their habit of eating fish.
Although birds collectively occupy most of the earth's surface, most individual species are found only in particular regions and habitats. Some species are quite restricted, occurring only on a single oceanic island or an isolated mountaintop, whereas others are cosmopolitan, living in suitable habitats on most continents. The greatest species diversity occurs in the American tropics, extending from Mexico to South America. This part of the world is especially rich in tyrant flycatchers, oven-birds, antbirds, tanagers, and hummingbirds. The Australia and New Guinea region has possibly the most distinctive groups of birds, because its birds have long been isolated from those of the rest of the world. Emus, cassowaries, and several songbird groups, including birds-of-paradise, are found nowhere else. Africa is the sole home of many bird families, including turacos, secretary birds, and helmet-shrikes. Areas farther from the equator have less diverse bird faunas. For example, about 225 bird species breed in the British Isles, approximately half the number of breeding species that inhabit a single reserve in Ecuador or Peru. Despite the abundance of seabirds at its fringes, Antarctica is the poorest bird continent, with only about 20 species.
The habitats occupied by birds are also diverse. Tropical rain forests have high species diversity, as do savannas and wetlands. Fewer species generally occupy extremely arid habitats and very high elevations. A given species might be a habitat specialist, such as the marsh wren, which lives only in marshes of cattails or tules, or a generalist, such as the house sparrow, which can thrive in a variety of environments.
Many habitats are only seasonally productive for birds. The arctic tundra, for example, teems with birds during the short summer season, when food and water are plentiful. In the winter, however, this habitat is too cold and dry for all but a few species. Many bird species respond to such seasonal changes by undergoing annual migrations. Many bird species that breed in the United States and Canada move south to winter in Central America or northern South America. Similar migrations from temperate regions to tropical ones occur between Europe and Africa, between northeastern Asia and both Southeast Asia and India, and, to a lesser degree, from southern Africa and southern South America to the equatorial parts of those continents.
Scientists disagree about many aspects of the evolution of birds. Many paleontologists (scientists who study fossils to learn about prehistoric life) believe that birds evolved from small, predatory dinosaurs called theropods. These scientists say that many skeletal features of birds, such as light, hollow bones and a furcula (wishbone), were present in theropod dinosaurs before the evolution of birds. Others, however, think that birds evolved from an earlier type of reptile called thecodonts-a group that ultimately produced dinosaurs, crocodiles, and the flying reptiles known as pterosaurs. These scientists assert that similarities between birds and theropod dinosaurs are due to a phenomenon called convergent evolution-the evolution of similar traits among groups of organisms that are not necessarily related.
Scientists also disagree about how flight evolved. Some scientists believe that flight first occurred when the ancestors of birds climbed trees and glided down from branches. Others theorize that bird flight began from the ground up, when dinosaurs or reptiles ran along the ground and leaped into the air to catch insects or to avoid predators. Continued discovery and analysis of fossils will help clarify the origins of birds.
Despite uncertainties about bird evolution, scientists do know that many types of birds lived during the Cretaceous Period, which lasted from about 138 million to 65 million years ago. Among these birds were Ichthyornis victor, which resembled a gull and had vertebrae similar to those of a fish, and Hesperornis regalis, which was nearly wingless and had vertebrae like those of today’s birds. Most birds of the Cretaceous Period are thought to have died out in the mass extinctions (the die-off of many animal species) that took place at the end of the Cretaceous Period.
The remains of prehistoric plants and animals, buried and preserved in sedimentary rock or trapped in amber or other deposits of ancient organic matter, provide a record of the history of life on Earth. Scientists who study the fossil record are called paleontologists, and what that record shows is that extinction is an ongoing phenomenon. In fact, of the hundreds of millions of species that have lived on Earth over the past 3.8 billion years, more than 99 percent are already extinct. Some of this loss occurs as the natural result of competition between species and is explained by natural selection. According to natural selection, living things must compete for food and space. They must evade the ravages of predators and disease while dealing with unpredictable shifts in their environment. Those species incapable of adapting face extinction. This constant rate of extinction, sometimes called background extinction, is like a slowly ticking clock. First one species, then another becomes extinct, and new species appear almost at random as geological time goes by. Normal rates of background extinction are usually about five families of organisms lost per million years.
More recently, paleontologists have discovered that not all extinction is slow and gradual. At various times in the fossil record, many different, unrelated species became extinct at nearly the same time. The cause of these large-scale extinctions is always dramatic environmental change that produces conditions too severe for organisms to endure. Environmental changes of this caliber result from extreme climatic change, such as the global cooling observed during the ice ages, or from catastrophic events, such as meteorite impacts or widespread volcanic activity. Whatever their causes, these events dramatically alter the composition of life on Earth, as entire groups of organisms disappear and entirely new groups rise to take their place.
In its most general sense, the term mass extinction refers to any episode in which many species are lost. In practice, however, the term is generally reserved for truly global extinction events-events in which extensive species loss occurs in all ecosystems on land and in the sea, affecting every part of the Earth's surface. Scientists recognize five such mass extinctions in the past 500 million years. The first occurred around 438 million years ago in the Ordovician Period. At this time, more than 85 percent of the species on Earth became extinct. The second took place 367 million years ago, near the end of the Devonian Period, when 82 percent of all species were lost. The third and greatest mass extinction to date occurred 245 million years ago at the end of the Permian Period. In this mass extinction, as many as 96 percent of all species on Earth were lost. The devastation was so great that paleontologists use this event to mark the end of the ancient, or Paleozoic, Era and the beginning of the middle, or Mesozoic, Era, when many new groups of animals evolved.
About 208 million years ago, near the end of the Triassic Period, the fourth mass extinction claimed 76 percent of the species alive at the time, including many species of amphibians and reptiles. The fifth and most recent mass extinction occurred about 65 million years ago at the end of the Cretaceous Period and resulted in the loss of 76 percent of all species, most notably the dinosaurs.
Many geologists and paleontologists speculate that this fifth mass extinction occurred when one or more meteorites struck the Earth. They believe the impact created a dust cloud that blocked much of the sunlight-seriously altering global temperatures and disrupting photosynthesis, the process by which plants derive energy. As plants died, organisms that relied on them for food also disappeared. Supporting evidence for this theory comes from a buried impact crater in the Yucatán Peninsula of Mexico. Measured at 200 km. (124 mi.) in diameter, this huge crater is thought to be the result of a large meteorite striking the Earth. A layer of the element iridium in the geologic sediment from this time provides additional evidence. Unusual in such quantities on Earth, iridium is common in extraterrestrial bodies, and theory supporters suggest iridium travelled to Earth on a meteorite.
Other scientists suspect that widespread volcanic activity in what is now India and the Indian Ocean may have been the source of the atmospheric gases and dust that blocked sunlight. Ancient volcanoes could have been the source of the unusually high levels of iridium, and advocates of this theory point out that iridium is still being released today by at least one volcano in the Indian Ocean. No matter what the cause, the extinction at the end of the Cretaceous Period was so great that scientists use this point in time to divide the Mesozoic Era (also called the Age of Reptiles) from the Cenozoic Era (otherwise known as the Age of Mammals).
Historically, biologists-most famous among them British naturalist Charles Darwin-assumed that extinction is the natural outcome of competition between newly evolved, adaptively superior species and their older, more primitive ancestors. These scientists believed that newer, better-adapted species simply drove less well-adapted species to extinction. That is, historically, extinction was thought to result from evolution. It was also thought that this process happens in a slow and regular manner and occurs at different times in different groups of organisms.
In the case of background extinction, this holds true. An average of three species becomes extinct every million years, usually because of the forces of natural selection. When this happens, new species-typically differing only slightly from the organisms that disappeared-arise to take their places, creating evolutionary lineages of related species. The modern horse, for example, comes from a long evolutionary lineage of related, but now extinct, species. The earliest known horse had four toes on its front feet, three toes on its rear feet, and weighed just 36 kg. (80 lb.). About 45 million years ago, this horse became extinct. It was succeeded by other types of horses with different characteristics, such as teeth better shaped for eating different plants, which made them better suited to their environments. This pattern of extinction and the ensuing rise of related species continued for more than 55 million years, ultimately resulting in the modern horse and its relatives the zebras and asses.
In mass extinctions, entire groups of species-such as families, orders, and classes-die out, creating opportunities for the survivors to exploit new habitats. In their new niches, the survivors evolve new characteristics and habits and, consequently, develop into entirely new species. What this course of events means is that mass extinctions are not the result of the evolution of new species, but actually a cause of evolution. Fossils from periods of mass extinction suggest that most new species evolve after waves of extinction. Mass extinctions cause periodic spurts of evolutionary change that shake up the dynamics of life on Earth.
This is perhaps best shown in the development of our own ancestors, the early mammals. Before the fall of the dinosaurs, which had dominated Earth for more than 150 million years, mammals were small, nocturnal, and secretive. They devoted much of their time and energy to evading meat-eating dinosaurs. With the extinction of dinosaurs, the remaining mammals moved into habitats and ecological niches previously dominated by the dinosaurs. Over the next 65 million years, those early mammals evolved into a variety of species, assuming many ecological roles and rising to dominate the Earth as the dinosaurs had before them.
Most scientists agree that life on Earth is now faced with the most severe extinction episode since the event that drove the dinosaurs extinct. No one knows exactly how many species are being lost because no one knows exactly how many species exist on Earth. Estimates vary, but the most widely accepted figure lies between 10 and 13 million species. Of these, biologists estimate that as many as 27,000 species are becoming extinct each year. This translates into an astounding three species every hour.
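The per-hour figure quoted above follows directly from the annual estimate. The short Python sketch below is purely illustrative arithmetic, using only the numbers cited in the text:

```python
# Back-of-the-envelope check of the extinction rate cited above.
species_lost_per_year = 27_000          # biologists' estimate quoted in the text
hours_per_year = 365 * 24               # 8,760 hours in a year

rate_per_hour = species_lost_per_year / hours_per_year
print(round(rate_per_hour, 1))          # ~3.1 species lost every hour
```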
This latest mass extinction is being driven not by global climate change but by humans. With the invention of agriculture some 10,000 years ago, humans began destroying the world's terrestrial ecosystems to produce farmland. Today pollution destroys ecosystems even in remote deserts and in the world’s deepest oceans. In addition, we have cleared forests for lumber, pulp, and firewood. We have harvested the fish and shellfish of the world's largest lakes and oceans in volumes that make it impossible for populations to recover fast enough to meet our harvesting needs. Everywhere we go, whether on purpose or by accident, we have brought along species that disrupt local ecosystems and, in many cases, drive native species extinct. For instance, Nile perch were intentionally introduced to Lake Victoria for commercial fishing in 1959. This fish proved to be an efficient predator, driving 200 rare species of cichlid fishes to extinction.
This sixth extinction, as it has become known, poses a great threat to our continued existence on the planet. Biodiversity-the sum of all species living in the world's ecosystems-is being lost, and with it goes a great deal of the natural wealth on which we depend. Humans use at least 40,000 different species of plants, animals, fungi, bacteria, and viruses for food, clothing, shelter, and medicines. In addition, the fresh air we breathe; the water we drink, cook, and wash with; and many chemical cycles-including the nitrogen cycle and the carbon cycle, so vital to sustaining life-depend on the continued health of ecosystems and the species within them.
The list of victims of the sixth extinction grows by the year. Forever lost are the penguin-like great auk, the passenger pigeon, the zebra-like quagga, the thylacine, the Balinese tiger, the ostrich-like moa, and the tarpan, a small species of wild horse, to name but a few. More than 1,000 plants and animals are threatened with extinction. Each of these organisms has unique attributes-some of which may hold the secrets to increasing world food supplies, eradicating water pollution, or curing disease. A subspecies of the endangered chimpanzee, for example, has recently been identified as the probable origin of the human immunodeficiency virus, the virus that causes acquired immunodeficiency syndrome (AIDS). Yet these animals are widely hunted in their west African habitat, and just as researchers learn of their significance to the AIDS epidemic, the animals face extinction. If they become extinct, they will take with them many secrets surrounding this devastating disease.
In the United States, legislation to protect endangered species from impending extinction includes the Endangered Species Act of 1973. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), established in 1975, prohibits trade in threatened plants and animals between countries. The Convention on Biological Diversity, an international treaty developed in 1992 at the United Nations Conference on the Environment and Development, obligates more than 160 countries to take action to protect plant and animal species.
Scientists, meanwhile, are intensifying their efforts to describe the species of the world. So far biologists have identified and named around 1.75 million species-a mere fraction of the species believed to exist today. Of those identified, special attention is given to species at or near the brink of extinction. The World Conservation Union (IUCN) maintains an active list of endangered plants and animals called the Red List. In addition, captive breeding programs at zoos and private laboratories are dedicated to the preservation of endangered species. Participants in these programs breed members of different populations of endangered species to increase their genetic diversity, thus better enabling the species to cope with further threats to their numbers.
All these programs together have had some notable successes. The peregrine falcon, nearly extinct in the United States due to the widespread use of the pesticide DDT, rebounded strongly after DDT was banned in 1973. The brown pelican and the bald eagle offer similar success stories. The California condor, a victim of habitat destruction, was bred in captivity, and small numbers of them are now being released back into the wild.
Growing numbers of legislators and conservation biologists, scientists who specialize in preserving and nurturing biodiversity, are realizing that the primary cause of the current wave of extinction is habitat destruction. Efforts have accelerated to identify ecosystems at greatest risk, including those with high numbers of critically endangered species. Programs to set aside large tracts of habitat, often interconnected by narrow zones or corridors, offer the best hope yet of sustaining ecosystems, and with them most of the world's species.
The Tertiary Period, directly following the Cretaceous, nevertheless witnessed an explosive evolution of birds. One bird that lived during the Tertiary Period was Diatryma, which stood 1.8 to 2.4 m. (about 6 to 8 ft.) tall and had massive legs, a huge bill, and very small, underdeveloped wings. Most modern families of birds can be traced back in the fossil record to the early or mid-Eocene Epoch-a stage within the Tertiary Period that occurred about 50 million years ago. Perching birds, called passerines, experienced a tremendous growth in species diversity in the latter part of the Tertiary; today this group is the most diverse order of birds.
During the Pleistocene Epoch, from 1.6 million to 10,000 years ago, also known as the Ice Age, glacier ice spread over more than one-fourth of the land surface of the earth. These glaciers isolated many groups of birds from other groups with which they had previously interbred. Scientists have long assumed that the resulting isolated breeding groups evolved into the species of birds that exist today. This assumption has been modified by studies of bird DNA found within cellular components called mitochondria. Pairs of species that only recently diverged from a shared ancestry are expected to have more similar mitochondrial DNA than are pairs that diverged in the more distant past. Because mutations in mitochondrial DNA are thought to occur at a fixed rate, some scientists believe that this DNA can be interpreted as a molecular clock that reveals the approximate amount of time that has elapsed since two species diverged from one another. Studies of North American songbirds based on this approach suggest that only the earliest glaciers of the Pleistocene are likely to have played a role in shaping bird species.
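The molecular-clock reasoning described above reduces to simple arithmetic: if mitochondrial DNA accumulates mutations at a roughly constant rate, the measured difference between two species' sequences yields an estimate of how long ago they diverged. The Python sketch below is only an illustration; the 2-percent-per-million-years rate is a commonly cited ballpark for bird mitochondrial DNA, not a figure given in this text.

```python
# Illustrative molecular-clock estimate (assumed rate, not taken from the text):
# divergence time = observed sequence difference / rate of divergence.
def divergence_time_myr(percent_difference, rate_percent_per_myr=2.0):
    """Estimate millions of years since two lineages split."""
    return percent_difference / rate_percent_per_myr

# Two hypothetical songbird species whose mitochondrial DNA differs by 4 percent
print(divergence_time_myr(4.0))   # ~2.0 million years since divergence
```

Estimates of this kind underlie the conclusion that only the earliest Pleistocene glaciations are likely to have left a mark on today's songbird species.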
The evolution of birds has not ended with the birds that we know today. Some bird species are dying out. In addition, the process of speciation-the evolutionary change that produces new species-continues.
Birds have been of ecological and economic importance to humans for thousands of years. Archaeological sites reveal that prehistoric people used many kinds of birds for food, ornamentation, and other cultural purposes. The earliest domesticated bird was probably the domestic fowl or chicken, derived from jungle fowls of Southeast Asia. Domesticated chickens existed even before 3000 BC. Other long-domesticated birds are ducks, geese, turkeys, guinea fowl, and pigeons.
Today the adults, young, and eggs of both wild and domesticated birds provide humans with food. People in many parts of Asia even eat the nests that certain swiftlets of southeastern Asia construct out of saliva. Birds give us companionship as pets, assume religious significance in many cultures, and, in the case of hawks and falcons, work for us as hunters. People in maritime cultures have learned to monitor seabird flocks to find fish, sometimes even using cormorants to do the fishing.
Birds are good indicators of the quality of our environment. In the 19th century, coal miners brought caged canaries with them into the mines, knowing that if the birds stopped singing, dangerous mine gases had escaped into the air and poisoned them. Birds provided a comparable warning to humans in the early 1960s, when the numbers of peregrine falcons in the United Kingdom and raptors in the United States suddenly declined. This decline was caused by organochlorine pesticides, such as DDT, which were accumulating in the birds and causing them to produce eggs with overly fragile shells. This decline in the bird populations alerted humans to the possibility that pesticides can harm people as well. Today certain species of birds are considered indicators of the environmental health of their habitats. An example of an indicator bird is the northern spotted owl, which can only reproduce within old growth forests in the Pacific Northwest.
Many people enjoy bird-watching. Equipped with binoculars and field guides, they identify birds and their songs, often keeping lists of the various species they have sighted. Scientists who study birds are known as ornithologists. These experts investigate the behaviour, evolutionary history, ecology, classification, and distribution of both domesticated and wild birds.
Overall, birds pose little direct danger to humans. A few birds, such as the cassowaries of New Guinea and northeastern Australia, can kill humans with their strong legs and bladelike claws, but actual attacks are extremely rare. Many birds become quite aggressive when defending a nest site; humans are routinely attacked, and occasionally injured, by hawks engaging in such defence. Birds pose a greater threat to human health as carriers of diseases. Diseases carried by birds that can affect humans include influenza and psittacosis.
Negative impacts by birds on humans are primarily economic. Blackbirds, starlings, sparrows, weavers, crows, parrots, and other birds may seriously deplete crops of fruit and grain. Similarly, fish-eating birds, such as cormorants and herons, may adversely influence aquacultural production. However, the economic benefits of wild birds to humans are well documented. Many birds help humans, especially farmers, by eating insects, weeds, slugs, and rodents.
Although birds, with some exceptions, are tremendously beneficial to humans, humans have a long history of causing harm to birds. Studies of bone deposits on some Pacific islands, including New Zealand and Polynesia, suggest that early humans hunted many hundreds of bird species to extinction. Island birds have always been particularly susceptible to predation by humans. Because these birds have largely evolved without land-based predators, they are tame and in many cases are flightless. They are therefore easy prey for humans and the animals that accompany them, such as rats. The dodo, a flightless pigeon-like bird on the island of Mauritius in the Indian Ocean, was hunted to extinction by humans in the 1600s.
With colonial expansion and the technological advances of the 18th and 19th centuries, humans hunted birds on an unprecedented scale. This time period witnessed the extinction of the great auk, a large flightless seabird of the North Atlantic Ocean that was easily killed by sailors for food and oil. The Carolina parakeet also became extinct during this period, although the last of these birds survived in the Cincinnati Zoo until 1918.
In the 20th century, a time of explosive growth in human populations, the major threats to birds have been the destruction and modification of their habitats. The relentless clearing of hardwood forests outweighed even relentless hunting as the cause of the extinction of the famous passenger pigeon, whose eastern North American populations may have once numbered in the billions. The fragmentation of habitats into small parcels is also harmful to birds, because it increases their vulnerability to predators and parasites.
Habitat fragmentation and reduction particularly affect songbirds that breed in North America in the summer and migrate to Mexico, the Caribbean, Central America, and Colombia for the winter. In North America, these birds suffer from forest fragmentation caused by the construction of roads, housing developments, and shopping malls. In the southern part of their range, songbirds are losing traditional nesting sites as tropical forests are destroyed and shade trees are removed from coffee plantations.
Pesticides, pollution, and other poisons also threaten today’s birds. These substances may kill birds outright, limit their ability to reproduce, or diminish their food supplies. Oil spills have killed thousands of aquatic birds, because birds with oil-drenched feathers cannot fly, float, or stay warm. Acid rain, caused by chemical reactions between airborne pollutants, water, and oxygen in the atmosphere, has decreased the food supply of many birds that feed on fish or other aquatic life in polluted lakes. Many birds are thought to be harmed by selenium, mercury, and other toxic elements present in agricultural runoff and in drainage from mines and power plants. For example, loons in the state of Maine may be in danger due to mercury that drifts into the state from unregulated coal-fired power plants in the Midwest and other sources. Global warming, an increase in the earth’s temperature due to a buildup of greenhouse gases, is another potential threat to birds.
Sanctuaries for birds exist all over the world-two examples are the Bharatpur Bird Sanctuary in India’s Keoladeo National Park, which protects painted storks, gray herons, and many other bird species, and the National Wildlife Refuge system of the United States. In North America, some endangered birds are bred in settings such as zoos and specialized animal clinics and later released into the wild. Such breeding programs have added significantly to the numbers of whooping cranes, peregrine falcons, and California condors. Many countries, including Costa Rica, are finding they can reap economic benefits, including the promotion of tourism, by protecting the habitats of birds and other wildlife.
The protection of the earth’s birds will require more than a single strategy. Many endangered birds need a combination of legal protections, habitat management, and control of predators and competitors. Ultimately, humans must decide that the birds’ world is worth preserving along with our own.
Most people did not understand the true nature of fossils until the beginning of the 19th century, when the basic principles of modern geology were established. From about AD 1500, scholars had engaged in a bitter controversy over the origin of fossils. One group held that fossils are the remains of prehistoric plants and animals. This group was opposed by another, which declared that fossils were either freaks of nature or creations of the devil. During the 18th century, many people believed that all fossils were relics of the great flood recorded in the Bible.
Paleontologists gain most of their information by studying deposits of sedimentary rocks that formed in strata over millions of years. Most fossils are found in sedimentary rock. Paleontologists use fossils and other characteristics of the rock to compare strata around the world. By comparing them, they can determine whether strata developed during the same time or in the same type of environment. This helps them assemble a general picture of how the earth evolved. The study and comparison of different strata are called stratigraphy.
Fossils supply most of the data by which strata are compared. Some fossils, called index fossils, are especially useful because they have a broad geographic range but a narrow temporal one-that is, they represent a species that was widespread but existed for a brief period of time. The best index fossils tend to be marine creatures. These animals evolved rapidly and spread over large areas of the world. Paleontologists divide the last 570 million years of the earth's history into eras, periods, and epochs. The part of the earth's history before about 570 million years ago is called Precambrian time, which began with the earth's birth, probably more than four billion years ago.
The earliest evidence of life consists of microscopic fossils of bacteria that lived as early as 3.6 billion years ago. Most Precambrian fossils are very tiny. Most species of larger animals that lived in later Precambrian time had soft bodies, without shells or other hard body parts that would leave lasting fossils; the first abundant fossils of larger animals date from around 600 million years ago. The Paleozoic era lasted about 330 million years. It includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian periods. Index fossils of the first half of the Paleozoic era are those of invertebrates, such as trilobites, graptolites, and crinoids. Remains of plants and such vertebrates as fish and reptiles make up the index fossils of the second half of this era.
At the beginning of the Cambrian period (570 million to 500 million years ago) animal life was entirely confined to the seas. By the end of the period, all the phyla of the animal kingdom existed, except vertebrates. The characteristic animals of the Cambrian period were the trilobites, a primitive form of arthropod, which reached their fullest development in this period and became extinct by the end of the Paleozoic era. The earliest snails appeared in this period, as did the cephalopod mollusks. Other groups represented in the Cambrian period were brachiopods, bryozoans, and foraminifers. Plants of the Cambrian period included seaweeds in the oceans and lichens on land.
The most characteristic animals of the Ordovician period (500 million to 435 million years ago) were the graptolites, which were small, colonial hemichordates (animals possessing an anatomical structure suggesting part of a spinal cord). The first vertebrates-primitive fish-and the earliest corals emerged during the Ordovician period. The largest animal of this period was a cephalopod mollusk that had a shell about 3 m. (about 10 ft.) in length. Plants of this period resembled those of the Cambrian period.
The most important evolutionary development of the Silurian period (435 million to 410 million years ago) was that of the first air-breathing animal, a scorpion. Fossils of this creature have been found in Scandinavia and Great Britain. The first fossil records of vascular plants-that is, land plants with tissue that carries food-appeared in the Silurian period. They were simple plants that had not developed separate stems and leaves.
The dominant forms of animal life in the Devonian period (410 million to 360 million years ago) were fish of various types, including sharks, lungfish, armoured fish, and primitive forms of ganoid (hard-scaled) fish that were probably the evolutionary ancestors of amphibians. Fossil remains found in Pennsylvania and Greenland suggest that early forms of amphibia may already have existed during the Devonian period. Other animal forms of the period included corals, starfish, sponges, and trilobites. The earliest known insect was found in Devonian rock.
The Devonian is the first period from which any considerable number of fossilized plants have been preserved. During this period, the first woody plants developed, and by the end of the period, land-growing forms included seed ferns, ferns, scouring rushes, and scale trees, ancient relatives of the modern club mosses. Although the present-day equivalents of these groups are mostly small plants, they developed into treelike forms in the Devonian period. Fossil evidence shows that forests existed in Devonian times, and petrified stumps of some larger plants from the period measure about 60 cm. (about 24 in.) in diameter.
The Carboniferous period lasted from 360 million to 290 million years ago. During the first part of this period, sometimes called the Mississippian period (360 million to 330 million years ago), the seas contained a variety of echinoderms and foraminifers, and most forms of animal life that appeared in the Devonian. A group of sharks, the Cestraciontes, or shell-crushers, were dominant among the larger marine animals. The predominant group of land animals was the Stegocephalia, an order of primitive, lizard-like amphibians that developed from the lungfish. The various forms of land plants became diversified and grew larger, particularly those that grew in low-lying swampy areas.
The second part of the Carboniferous, sometimes called the Pennsylvanian period (330 million to 290 million years ago), saw the evolution of the first reptiles, a group that developed from the amphibians and lived entirely on land. Other land animals included spiders, snails, scorpions, more than 800 species of cockroaches, and the largest insect ever evolved, a species resembling the dragonfly, with a wingspread of about 74 cm. (about 29 in.). The largest plants were the scale trees, which had tapered trunks that measured as much as 1.8 m. (6 ft.) in diameter at the base and 30 m. (100 ft.) in height. Primitive gymnosperms known as cordaites, which had pithy stems surrounded by a woody shell, were more slender but even taller. The first true conifers, forms of advanced gymnosperms, also developed during the Pennsylvanian period.
The chief events of the Permian period (290 million to 240 million years ago) were the disappearance of many forms of marine animals and the rapid spread and evolution of the reptiles. Permian reptiles were generally of two types: lizard-like reptiles that lived entirely on land, and sluggish, semiaquatic types. A comparatively small group of reptiles that evolved in this period, the Theriodontia, were the ancestors of mammals. Most vegetation of the Permian period was composed of ferns and conifers.
The Mesozoic era is often called the Age of Reptiles, because the reptile class was dominant on land throughout the era. The Mesozoic era lasted about 175 million years and includes the Triassic, Jurassic, and Cretaceous periods. Index fossils from this era include a group of extinct cephalopods called ammonites, and extinct forms of sand dollars and sea urchins.
The most notable of the Mesozoic reptiles, the dinosaur, first evolved in the Triassic period (240 million to 205 million years ago). The Triassic dinosaurs were not as large as their descendants in later Mesozoic times. They were comparatively slender animals that ran on their hind feet, balancing their bodies with heavy, fleshy tails, and seldom exceeded 4.5 m. (15 ft.) in length. Other reptiles of the Triassic period included such aquatic creatures as the ichthyosaurs, and a group of flying reptiles, the pterosaurs.
The first mammals also appeared during this period. The fossil remains of these animals are fragmentary, but the animals were apparently small and reptilian in appearance. In the sea, Teleostei, the first ancestors of the modern bony fishes, made their appearance. The plant life of the Triassic seas included a large variety of marine algae. On land, the dominant vegetation included various evergreens, such as ginkgos, conifers, and palms. Small scouring rushes and ferns still existed, but the larger members of these groups had become extinct.
During the Jurassic period (205 million to 138 million years ago), dinosaurs continued to evolve in a wide range of size and diversity. Types included heavy four-footed sauropods, such as Apatosaurus (formerly Brontosaurus); two-footed carnivorous dinosaurs, such as Allosaurus; two-footed vegetarian dinosaurs, such as Camptosaurus; and four-footed armoured dinosaurs, such as Stegosaurus. Winged reptiles included the pterodactyls, which, during this period, ranged in size from extremely small species to those with wingspreads of 1.2 m. (4 ft.). Marine reptiles included the plesiosaurs, a group with broad, flat bodies like those of turtles, long necks, and large flippers for swimming, and the Ichthyosauria, which resembled dolphins or, at times, primitive crocodiles.
The mammals of the Jurassic period consisted of four orders, all of which were smaller than small modern dogs. Many insects of the modern orders, including moths, flies, beetles, grasshoppers, and termites appeared during the Jurassic period. Shellfish included lobsters, shrimp, and ammonites, and the extinct group of belemnites, which resembled squid and had cigar-shaped internal shells. Plant life of the Jurassic period was dominated by the cycads, which resembled thick-stemmed palms. Fossils of most species of Jurassic plants are widely distributed in temperate zones and polar regions, suggesting that the climate was uniformly mild.
The reptiles were still the dominant form of animal life in the Cretaceous period (138 million to 65 million years ago). The four types of dinosaurs found in the Jurassic also lived during this period, and a fifth type, the horned dinosaurs, also appeared. By the end of the Cretaceous, about 65 million years ago, all these creatures had become extinct. The largest of the pterodactyls lived during this period. Pterodactyl fossils discovered in Texas have wingspreads of up to 15.5 m. (50 ft.). Other reptiles of the period include the first snakes and lizards. Several types of Cretaceous birds have been discovered, including Hesperornis, a diving bird about 1.8 m. (about 6 ft.) in length, which had only vestigial wings and was unable to fly. Mammals of the period included the first marsupials, which strongly resembled the modern opossum, and the first placental mammals, which belonged to the group of insectivores. The first crabs developed during this period, and several modern varieties of fish also evolved.
The most important evolutionary advance in the plant kingdom during the Cretaceous period was the development of deciduous plants, the earliest fossils of which appear in early Cretaceous rock formations. By the end of the period, many modern varieties of trees and shrubs had made their appearance. They represented more than 90 percent of the known plants of the period. Mid-Cretaceous fossils include remains of beech, holly, laurel, maple, oak, plane tree, and walnut. Some paleontologists believe that these deciduous woody plants first evolved in Jurassic times but grew only in upland areas, where conditions were unfavourable for fossil preservation.
The Cenozoic era (65 million years ago to the present time) is divided into the Tertiary period (65 million to 1.6 million years ago) and the Quaternary period (1.6 million years ago to the present). However, because scientists have so much more information about this era, they tend to focus on the epochs that make up each period. During the first part of the Cenozoic era, an abrupt transition from the Age of Reptiles to the Age of Mammals occurred, when the large dinosaurs and other reptiles that had dominated the life of the Mesozoic era disappeared.
The Paleocene epoch (65 million to 55 million years ago) marks the beginning of the Cenozoic era. Seven groups of Paleocene mammals are known. All of them appear to have developed in northern Asia and to have migrated to other parts of the world. These primitive mammals had many features in common. They were small, with no species exceeding the size of a small modern bear. They were four-footed, with five toes on each foot, and they walked on the soles of their feet. Most of them had slim heads with narrow muzzles and small brain cavities. The predominant mammals of the period were members of three groups that are now extinct. They were the creodonts, which were the ancestors of modern carnivores; the amblypods, which were small, heavy-bodied animals; and the condylarths, which were light-bodied herbivorous animals with small brains. The Paleocene groups that have survived are the marsupials, the insectivores, the primates, and the rodents.
During the Eocene epoch (55 million to 38 million years ago), the direct evolutionary ancestors of several modern animals appeared. Among these animals-all of which were small in stature-were the horse, rhinoceros, camel, rodent, and monkey. The creodonts and amblypods continued to develop during the epoch, but the condylarths became extinct before it ended. The first aquatic mammals, ancestors of modern whales, also appeared in Eocene times, as did such modern birds as eagles, pelicans, quail, and vultures. Changes in vegetation during the Eocene epoch were limited chiefly to the migration of types of plants in response to climate changes.
During the Oligocene epoch (38 million to 24 million years ago), most of the archaic mammals from earlier epochs of the Cenozoic era disappeared. In their place appeared representatives of several modern mammalian groups. The creodonts became extinct, and the first true carnivores, resembling dogs and cats, evolved. The first anthropoid apes also lived during this time, but they became extinct in North America by the end of the epoch. Two groups of animals that are now extinct flourished during the Oligocene epoch: the titanotheres, which are related to the rhinoceros and the horse; and the oreodonts, which were small, dog-like, grazing animals.
The development of mammals during the Miocene epoch (24 million to five million years ago) was influenced by an important evolutionary development in the plant kingdom: the first appearance of grasses. These plants, which were ideally suited for forage, encouraged the growth and development of grazing animals such as horses, camels, and rhinoceroses, which were abundant during the epoch. During the Miocene epoch, the mastodon evolved, and in Europe and Asia a gorilla-like ape, Dryopithecus, was common. Various types of carnivores, including cats and wolflike dogs, ranged over many parts of the world.
The paleontology of the Pliocene epoch (five million to 1.6 million years ago) does not differ much from that of the Miocene, although the period is regarded by many zoologists as the climax of the Age of Mammals. The Pleistocene Epoch (1.6 million to 10,000 years ago) in both Europe and North America was marked by an abundance of large mammals, most of which were essentially modern in type. Among them were buffalo, elephants, mammoths, and mastodons. Mammoths and mastodons became extinct before the end of the epoch. In Europe, antelope, lions, and hippopotamuses also appeared. Carnivores included badgers, foxes, lynx, otters, pumas, and skunks, and now-extinct species such as the giant saber-toothed tigers. In North America, the first bears made their appearance as migrants from Asia. The armadillo and ground sloth migrated from South America to North America, and the musk-ox ranged southward from the Arctic regions. Modern human beings also emerged during this epoch.
Earth is one of nine planets in the solar system, the only planet known to harbor life, and the ‘home’ of human beings. From space Earth resembles a big blue marble with swirling white clouds floating above blue oceans. About 71 percent of Earth’s surface is covered by water, which is essential to life. The rest is land, mostly as continents that rise above the oceans.
Earth’s surface is surrounded by a layer of gases known as the atmosphere, which extends upward from the surface, slowly thinning out into space. Below the surface is a hot interior of rocky material and two core layers composed of the metals nickel and iron in solid and liquid form.
Unlike the other planets, Earth has a unique set of characteristics ideally suited to supporting life as we know it. It is neither too hot, like Mercury, the closest planet to the Sun, nor too cold, like distant Mars and the even more distant outer planets-Jupiter, Saturn, Uranus, Neptune, and tiny Pluto. Earth’s atmosphere includes just the right amounts of gases that trap heat from the Sun, resulting in a moderate climate suitable for water to exist in liquid form. The atmosphere also helps block radiation from the Sun that would be harmful to life. Earth’s atmosphere distinguishes it from the planet Venus, which is otherwise much like Earth. Venus is about the same size and mass as Earth and is only somewhat closer to the Sun. Nevertheless, because Venus has too much heat-trapping carbon dioxide in its atmosphere, its surface is extremely hot-462°C (864°F)-hot enough to melt lead and too hot for life to exist.
Although Earth is the only planet known to have life, scientists do not rule out the possibility that life may once have existed on other planets or their moons, or may exist today in primitive form. Mars, for example, has many features that resemble river channels, suggesting that liquid water once flowed on its surface. If so, life may also have evolved there, and evidence for it may one day be found in fossil form. Water still exists on Mars, but it is frozen in polar ice caps, in permafrost, and possibly in rocks below the surface.
For thousands of years, human beings could only wonder about Earth and the other observable planets in the solar system. Many early ideas-for example, that the Earth was a sphere and that it travelled around the Sun-were based on brilliant reasoning. However, it was only with the development of the scientific method and scientific instruments, especially in the 18th and 19th centuries, that humans began to gather data that could be used to verify theories about Earth and the rest of the solar system. By studying fossils found in rock layers, for example, scientists realized that the Earth was much older than previously believed. With the use of telescopes, new planets such as Uranus, Neptune, and Pluto were discovered.
In the second half of the 20th century, more advances in the study of Earth and the solar system occurred due to the development of rockets that could send spacecraft beyond Earth. Human beings can study and observe Earth from space with satellites equipped with scientific instruments. Astronauts landed on the Moon and gathered ancient rocks that revealed much about the early solar system. During this remarkable advancement in human history, humans also sent unmanned spacecraft to the other planets and their moons. Spacecraft have now visited all of the planets except Pluto. The study of other planets and moons has provided new insights about Earth, just as the study of the Sun and other stars like it has helped shape new theories about how Earth and the rest of the solar system formed.
From this recent space exploration, we now know that Earth is one of the most geologically active of all the planets and moons in the solar system. Earth is constantly changing. Over long periods land is built up and worn away, and oceans are formed and re-formed. Continents move around, break up, and merge.
Life itself contributes to changes on Earth, especially in the way living things can alter Earth’s atmosphere. For example, Earth once had the same amount of carbon dioxide in its atmosphere as Venus now has, but early forms of life helped remove this carbon dioxide over millions of years. These life forms also added oxygen to Earth’s atmosphere and made it possible for animal life to evolve on land.
A variety of scientific fields have broadened our knowledge about Earth, including biogeography, climatology, geology, geophysics, hydrology, meteorology, oceanography, and zoogeography. Collectively, these fields are known as Earth science. By studying Earth’s atmosphere, its surface, and its interior and by studying the Sun and the rest of the solar system, scientists have learned much about how Earth came into existence, how it changed, and why it continues to change.
Earth is the third planet from the Sun, after Mercury and Venus. The average distance between Earth and the Sun is 150 million km. (93 million mi). Earth and all the other planets in the solar system revolve, or orbit, around the Sun due to the force of gravitation. The Earth travels at a velocity of about 107,000 km./h. (about 67,000 mph.) as it orbits the Sun. All but one planet orbit the Sun in the same plane-that is, if an imaginary line were extended from the centre of the Sun to the outer regions of the solar system, the orbital paths of the planets would intersect that line. The exception is Pluto, which has an eccentric (unusual) orbit.
Earth’s orbital path is not quite a perfect circle but instead is elliptical (oval-shaped). For example, at maximum distance Earth is about 152 million km. (about 95 million mi.) from the Sun; at minimum distance Earth is about 147 million km (about 91 million mi.) from the Sun. If Earth orbited the Sun in a perfect circle, it would always be the same distance from the Sun.
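Treating the orbit as roughly circular, the orbital speed quoted earlier can be recovered from the average Earth-Sun distance and the length of the year. The Python check below is illustrative only and uses the round figures given in the text:

```python
import math

# Rough check of Earth's orbital speed from the figures quoted above.
average_distance_km = 150_000_000        # mean Earth-Sun distance
hours_per_year = 365.2422 * 24           # length of one year in hours

orbit_length_km = 2 * math.pi * average_distance_km   # ~942 million km per orbit
speed_km_per_h = orbit_length_km / hours_per_year

print(round(speed_km_per_h))             # ~107,500 km/h, close to the ~107,000 km/h cited
```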
The solar system, in turn, is part of the Milky Way Galaxy, a collection of billions of stars bound together by gravity. The Milky Way is a disc of stars with arms that spiral out from its centre. The solar system is found in one of these spiral arms, known as the Orion arm, which is about two-thirds of the way from the centre of the Galaxy. In most parts of the Northern Hemisphere, this disc of stars is visible on a summer night as a dense band of light known as the Milky Way.
Earth is the fifth largest planet in the solar system. Its diameter, measured around the equator, is 12,756 km (7,926 mi). Earth is not a perfect sphere but is slightly flattened at the poles. Its polar diameter, measured from the North Pole to the South Pole, is somewhat less than the equatorial diameter because of this flattening. Although Earth is the largest of the four planets-Mercury, Venus, Earth, and Mars-that make up the inner solar system (the planets closest to the Sun), it is small compared with the giant planets of the outer solar system-Jupiter, Saturn, Uranus, and Neptune. For example, the largest planet, Jupiter, has a diameter at its equator of 143,000 km (89,000 mi), 11 times greater than that of Earth. A famous atmospheric feature on Jupiter, the Great Red Spot, is so large that three Earths would fit inside it.
Earth has one natural satellite, the Moon. The Moon orbits the Earth, completing one revolution in an elliptical path in 27 days 7 hr 43 min 11.5 sec. The Moon orbits the Earth because of the force of Earth’s gravity. However, the Moon also exerts a gravitational force on the Earth. Evidence for the Moon’s gravitational influence can be seen in the ocean tides. A popular theory suggests that the Moon split off from Earth more than four billion years ago when a large meteorite or small planet struck the Earth.
As Earth revolves around the Sun, it rotates, or spins, on its axis, an imaginary line that runs between the North and South poles. The period of one complete rotation is defined as a day and takes 23 hr 56 min 4.1 sec. The period of one revolution around the Sun is defined as a year, or 365.2422 solar days, or 365 days 5 hr 48 min 46 sec. Earth also moves along with the Milky Way Galaxy as the Galaxy rotates and moves through space. It takes more than 200 million years for the stars in the Milky Way to complete one revolution around the Galaxy’s centre.
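The two ways of stating the length of the year, 365.2422 solar days and 365 days 5 hr 48 min 46 sec, are the same figure in different units, as a short conversion shows (illustrative Python):

```python
# Convert the fractional part of 365.2422 days into hours, minutes, and seconds.
fractional_day = 0.2422

hours = fractional_day * 24                      # 5.8128 hours
minutes = (hours - int(hours)) * 60              # ~48.8 minutes
seconds = (minutes - int(minutes)) * 60          # ~46 seconds

print(int(hours), int(minutes), round(seconds))  # 5 48 46
```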
Earth’s axis of rotation is inclined (tilted) 23.5° relative to its plane of revolution around the Sun. This inclination of the axis creates the seasons and causes the height of the Sun in the sky at noon to increase and decrease as the seasons change. The Northern Hemisphere receives the most energy from the Sun when it is tilted toward the Sun. This orientation corresponds to summer in the Northern Hemisphere and winter in the Southern Hemisphere. The Southern Hemisphere receives maximum energy when it is tilted toward the Sun, corresponding to summer in the Southern Hemisphere and winter in the Northern Hemisphere. Fall and spring occur between these orientations.
The atmosphere is a layer of different gases that extends from Earth’s surface to the exosphere, the outer limit of the atmosphere, about 9,600 km. (6,000 mi.) above the surface. Near Earth’s surface, the atmosphere consists almost entirely of nitrogen (78 percent) and oxygen (21 percent). The remaining 1 percent of atmospheric gases consist of argon (0.9 percent); carbon dioxide (0.03 percent); varying amounts of water vapour; and trace amounts of hydrogen, nitrous oxide, ozone, methane, carbon monoxide, helium, neon, krypton, and xenon.
The layers of the atmosphere are the troposphere, the stratosphere, the mesosphere, the thermosphere, and the exosphere. The troposphere is the layer in which weather occurs and extends from the surface to about 16 km (about 10 mi.) above sea level at the equator. Above the troposphere is the stratosphere, which has an upper boundary of about 50 km (about 30 mi) above sea level. The layer from 50 to 90 km (30 to 60 mi.) is called the mesosphere. At an altitude of about 90 km, temperatures begin to rise. The layer that begins at this altitude is called the thermosphere because of the high temperatures that can be reached in this layer (about 1200°C, or about 2200°F). The region beyond the thermosphere is called the exosphere. The thermosphere and the exosphere overlap with another region of the atmosphere known as the ionosphere, a layer or layers of ionized air extending from almost 60 km (about 50 mi) above Earth’s surface to altitudes of 1,000 km (600 mi) and more.
Earth’s atmosphere and the way it interacts with the oceans and radiation from the Sun are responsible for the planet’s climate and weather. The atmosphere plays a key role in supporting life. Most life on Earth uses atmospheric oxygen for energy in a process known as cellular respiration, which is essential to life. The atmosphere also helps moderate Earth’s climate by trapping radiation from the Sun that is reflected from Earth’s surface. Water vapour, carbon dioxide, methane, and nitrous oxide in the atmosphere act as ‘greenhouse gases’. Like the glass in a greenhouse, they trap infrared, or heat, radiation from the Sun in the lower atmosphere and thereby help warm Earth’s surface. Without this greenhouse effect, heat radiation would escape into space, and Earth would be too cold to support most forms of life.
Other gases in the atmosphere are also essential to life. The trace amount of ozone in Earth’s stratosphere blocks harmful ultraviolet radiation from the Sun. Without the ozone layer, life as we know it could not survive on land. Earth’s atmosphere is also an important part of a phenomenon known as the water cycle or the hydrologic cycle.
The water cycle simply means that Earth’s water is continually recycled between the oceans, the atmosphere, and the land. All of the water that exists on Earth today has been used and reused for billions of years. Very little water has been created or lost during this period of time. Water is always shifting on the Earth’s surface and changing back and forth between ice, liquid water, and water vapour.
The water cycle begins when the Sun heats the water in the oceans and causes it to evaporate and enter the atmosphere as water vapour. Some of this water vapour falls as precipitation directly back into the oceans, completing a short cycle. Some water vapour, however, reaches land, where it may fall as snow or rain. Melted snow or rain enters rivers or lakes on the land. Due to the force of gravity, the water in the rivers eventually empties back into the oceans. Melted snow or rain also may enter the ground. Groundwater may be stored for hundreds or thousands of years, but it will eventually reach the surface as springs or small pools known as seeps. Even snow that forms glacial ice or becomes part of the polar caps and is kept out of the cycle for thousands of years eventually melts or is warmed by the Sun and turned into water vapour, entering the atmosphere and falling again as precipitation. All water that falls on land eventually returns to the ocean, completing the water cycle.
The hydrosphere consists of the bodies of water that cover 71 percent of Earth’s surface. The largest of these are the oceans, which hold more than 97 percent of all water on Earth. Glaciers and the polar ice caps hold just over 2 percent of Earth’s water as solid ice. Only about 0.6 percent is under the surface as groundwater. Nevertheless, groundwater is 36 times more plentiful than water found in lakes, inland seas, rivers, and in the atmosphere as water vapour. Only 0.017 percent of all the water on Earth is found in lakes and rivers. A mere 0.001 percent is found in the atmosphere as water vapour. Most of the water in glaciers, lakes, inland seas, rivers, and groundwater is fresh and can be used for drinking and agriculture. Dissolved salts compose about 3.5 percent of the water in the oceans, however, making it unsuitable for drinking or agriculture unless it is treated to remove the salts.
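Tallying the percentages quoted above shows how completely the oceans and polar ice dominate Earth's water budget. The Python sketch below simply restates the figures from the text; the groundwater ratio it computes comes out near the roughly 36-fold figure cited, with the difference due to rounding in the quoted percentages.

```python
# Earth's water inventory, using the approximate percentages quoted above.
water_inventory = {
    "oceans": 97.0,
    "glaciers and polar ice caps": 2.0,
    "groundwater": 0.6,
    "lakes, inland seas, and rivers": 0.017,
    "atmosphere (water vapour)": 0.001,
}

for reservoir, percent in water_inventory.items():
    print(f"{reservoir}: {percent}%")

# Groundwater compared with surface fresh water plus atmospheric vapour:
ratio = water_inventory["groundwater"] / (
    water_inventory["lakes, inland seas, and rivers"]
    + water_inventory["atmosphere (water vapour)"]
)
print(round(ratio))   # ~33, in line with the roughly 36-fold figure cited
```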
The crust consists of the continents, other land areas, and the basins, or floors, of the oceans. The dry land of Earth’s surface is called the continental crust. It is about 15 to 75 km (nine to 47 mi) thick. The oceanic crust is thinner than the continental crust. Its average thickness is five to 10 km (three to 6 mi). The crust has a definite lower boundary called the Mohorovicic discontinuity, or simply the Moho. This boundary separates the crust from the underlying mantle, which is much thicker and is part of Earth’s interior.
Oceanic crust and continental crust differ in the type of rocks they contain. There are three main types of rocks: igneous, sedimentary, and metamorphic. Igneous rocks form when molten rock, called magma, cools and solidifies. Sedimentary rocks are usually created by the breakdown of igneous rocks. They have a tendency to form in layers as small particles of other rocks or as the mineralized remains of dead animals and plants that have fused over time. The remains of dead animals and plants occasionally become mineralized in sedimentary rock and are recognizable as fossils. Metamorphic rocks form when sedimentary or igneous rocks are altered by heat and pressure deep underground.
Oceanic crust consists of dark, dense igneous rocks, such as basalt and gabbro. Continental crust consists of lighter coloured, less dense igneous rock, such as granite and diorite. Continental crust also includes metamorphic rocks and sedimentary rocks.
The biosphere comprises the parts of Earth that can support life. The biosphere ranges from about 10 km (about 6 mi) into the atmosphere to the deepest ocean floor. For a long time, scientists believed that all life depended on energy from the Sun and consequently could only exist where sunlight penetrated. In the 1970s, however, scientists discovered various forms of life around hydrothermal vents on the floor of the Pacific Ocean where no sunlight penetrated. They learned that primitive bacteria formed the basis of this living community and that the bacteria derived their energy from a process called chemosynthesis that did not depend on sunlight. Some scientists believe that the biosphere may extend deeply into the Earth’s crust. They have recovered what they believe are primitive bacteria from deeply drilled holes below the surface.
Earth’s surface has been constantly changing ever since the planet formed. Most of these changes have been gradual, taking place over millions of years. Nevertheless, these gradual changes have resulted in radical modifications, involving the formation, erosion, and re-formation of mountain ranges, the movement of continents, the creation of huge super-continents, and the breakup of super-continents into smaller continents.
The weathering and erosion that result from the water cycle are among the principal factors responsible for changes to Earth’s surface. Another principal factor is the movement of Earth’s continents and sea-floors and the buildup of mountain ranges due to a phenomenon known as plate tectonics. Heat is the basis for all these changes. Heat in Earth’s interior is believed to be responsible for continental movement, mountain building, and the creation of new sea-floor in ocean basins. Heat from the Sun is responsible for the evaporation of ocean water and the resulting precipitation that causes weathering and erosion. In effect, heat in Earth’s interior helps build up Earth’s surface while heat from the Sun helps wear down the surface.
Weathering is the breakdown of rock at and near the surface of Earth. Most rocks originally formed in a hot, high-pressure environment below the surface where there was little exposure to water. Once the rocks reached Earth’s surface, however, they were subjected to temperature changes and exposed to water. When rocks are subjected to these kinds of surface conditions, the minerals they contain tend to change. These changes make up the process of weathering. There are two types of weathering: physical weathering and chemical weathering.
Physical weathering involves a decrease in the size of rock material. Freezing and thawing of water in rock cavities, for example, splits rock into small pieces because water expands when it freezes.
Chemical weathering involves a chemical change in the composition of rock. For example, feldspar, a common mineral in granite and other rocks, reacts with water to form clay minerals, resulting in a new substance with totally different properties than the parent feldspar. Chemical weathering is of significance to humans because it creates the clay minerals that are important components of soil, the basis of agriculture. Chemical weathering also releases dissolved forms of sodium, calcium, potassium, magnesium, and other chemical elements into surface water and groundwater. These elements are carried by surface water and groundwater to the sea and are the sources of dissolved salts in the sea.
Erosion is the process that removes loose and weathered rock and carries it to a new site. Water, wind, and glacial ice combined with the force of gravity can cause erosion.
Erosion by running water is by far the most common process of erosion. It takes place over a longer period of time than other forms of erosion. When water from rain or melted snow moves downhill, it can carry loose rock or soil with it. Erosion by running water forms the familiar gullies and V-shaped valleys that cut into most landscapes. The force of the running water removes loose particles formed by weathering. In the process, gullies and valleys are lengthened, widened, and deepened. Often, water overflows the banks of the gullies or river channels, resulting in floods. Each new flood carries more material away to increase the size of the valley. Meanwhile, weathering loosens ever more material so the process continues.
Erosion by glacial ice is less common, but it can cause the greatest landscape changes in the shortest amount of time. Glacial ice forms in a region where snow fails to melt in the spring and summer and instead builds up, year after year, into a thick mass of ice. For major glaciers to form, this lack of snowmelt has to occur for many years in areas with high precipitation. As ice accumulates and thickens, it flows as a solid mass. As it flows, it has a tremendous capacity to erode soil and even solid rock. Ice is a major factor in shaping some landscapes, especially mountainous regions. Glacial ice provides much of the spectacular scenery in these regions. Features such as horns (sharp mountain peaks), arêtes (sharp ridges), glacially formed lakes, and U-shaped valleys are all the results of glacial erosion.
Wind is an important cause of erosion only in arid (dry) regions. Wind carries sand and dust, which can scour even solid rock.
Many factors determine the rate and kind of erosion that occurs in a given area. The climate of an area determines the distribution, amount, and kind of precipitation that the area receives and thus the type and rate of weathering. An area with an arid climate erodes differently than an area with a humid climate. The elevation of an area also plays a role by determining the potential energy of running water. The higher the elevation, the more energetically water will flow due to the force of gravity. The type of bedrock in an area (sandstone, granite, or shale) can determine the shapes of valleys and slopes, and the depth of streams.
A landscape’s geologic age-that is, how long current conditions of weathering and erosion have affected the area-determines its overall appearance. Younger landscapes tend to be more rugged and angular in appearance. Older landscapes tend to have more rounded slopes and hills. The oldest landscapes tend to be low-lying with broad, open river valleys and low, rounded hills. The overall effect of the wearing down of an area is to level the land; the tendency is toward the reduction of all land surfaces to sea level.
Opposing this tendency toward levelling is a force responsible for raising mountains and plateaus and for creating new landmasses. These changes to Earth’s surface occur in the outermost solid portion of Earth, known as the lithosphere. The lithosphere consists of the crust and another region known as the upper mantle and is approximately 65 to 100 km (40 to 60 mi) thick. Compared with the interior of the Earth, however, this region is relatively thin. The lithosphere is thinner in proportion to the whole Earth than the skin of an apple is to the whole apple.
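As a rough check on the apple-skin comparison, the lithosphere’s share of Earth’s radius can be worked out directly; the sketch below uses the thickness range quoted above and an assumed mean Earth radius of about 6,371 km, a figure not given in the text.

```python
# Lithosphere thickness as a fraction of Earth's radius (rough check).
EARTH_RADIUS_KM = 6371            # mean radius; assumed value, not stated in the text

for thickness_km in (65, 100):    # range quoted above
    fraction = thickness_km / EARTH_RADIUS_KM
    print(f"{thickness_km} km thick -> about {fraction:.1%} of Earth's radius")
# Roughly 1.0% and 1.6% - a proportionally thin shell, comparable to an apple's skin.
```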
Scientists believe that the lithosphere is broken into a series of plates, or segments. According to the theory of plate tectonics, these plates move around on Earth’s surface over long periods. Tectonics comes from the Greek word tektonikos, which means ‘builder’.
According to the theory, the lithosphere is divided into large and small plates. The largest plates include the Pacific plate, the North American plate, the Eurasian plate, the Antarctic plate, the Indo-Australian plate, and the African plate. Smaller plates include the Cocos plate, the Nazca plate, the Philippine plate, and the Caribbean plate. Plate sizes vary a great deal. The Cocos plate is 2,000 km (1,000 mi) wide, while the Pacific plate is nearly 14,000 km (nearly 9,000 mi) wide.
These plates move in three different ways in relation to each other. They pull apart or move away from each other, they collide or move against each other, or they slide past each other as they move sideways. The movement of these plates helps explain many geological events, such as earthquakes and volcanic eruptions and mountain building and the formation of the oceans and continents.
When the plates pull apart, two types of phenomena come about, depending on whether the movement takes place in the oceans or on land. When plates pull apart on land, deep valleys known as rift valleys form. An example of a rift valley is the Great Rift Valley that extends from Syria in the Middle East to Mozambique in Africa. When plates pull apart in the oceans, long, sinuous chains of volcanic mountains called mid-ocean ridges form, and new sea-floor is created at the site of these ridges. Rift valleys are also present along the crests of the mid-ocean ridges.
Most scientists believe that gravity and heat from the interior of the Earth cause the plates to move apart and to create new sea-floor. According to this explanation, molten rock known as magma rises from Earth’s interior to form hot spots beneath the ocean floor. As two oceanic plates pull apart from each other in the middle of the oceans, a crack, or rupture, appears and forms the mid-ocean ridges. These ridges exist in all the world’s ocean basins and resemble the seams of a baseball. The molten rock rises through these cracks and creates new sea-floor.
When plates collide or push against each other, regions called convergent plate margins form. Along these margins, one plate is usually forced to dive below the other. As that plate dives, it triggers melting in the surrounding lithosphere and in the region just below it, known as the asthenosphere. These pockets of molten rock rise behind the margin through the overlying plate, creating curved chains of volcanoes known as arcs. This process is called subduction.
If one plate consists of oceanic crust and the other consists of continental crust, the denser oceanic crust will dive below the continental crust. If both plates are oceanic crust, then either may be subducted. If both are continental crust, subduction can continue for a while but eventually ends because continental crust is not dense enough to be forced very far into the upper mantle.
The results of this subduction process are readily visible on a map showing that 80 percent of the world’s volcanoes rim the Pacific Ocean, where plates are colliding against each other. The subduction zone created by the collision of two oceanic plates-the Pacific plate and the Philippine plate-can also create a trench. Such a trench resulted in the formation of the deepest point on Earth, the Mariana Trench, which is estimated to be 11,033 m (36,198 ft) below sea level.
On the other hand, when two continental plates collide, mountain building occurs. The collision of the Indo-Australian plate with the Eurasian plate has produced the Himalayan Mountains. This collision resulted in the highest point on Earth, Mount Everest, which is 8,850 m (29,035 ft) above sea level.
Finally, some of Earth’s plates neither collide nor pull apart but instead slide past each other. These regions are known as transform margins. Few volcanoes occur in these areas because neither plate is forced down into Earth’s interior and little melting occurs. Earthquakes, however, are abundant as the two rigid plates slide past each other. The San Andreas Fault in California is a well-known example of a transform margin.
The movement of plates occurs at a slow pace, at an average rate of only 2.5 cm (about 1 in) per year. Still, over millions of years this gradual movement results in radical changes. Current plate movement is making the Pacific Ocean and Mediterranean Sea smaller, the Atlantic Ocean larger, and the Himalayan Mountains higher.
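The arithmetic behind that claim is straightforward; the short sketch below shows how an average rate of 2.5 cm per year accumulates into thousands of kilometres over geologic spans of time.

```python
# Cumulative plate movement at the average rate of 2.5 cm per year.
RATE_CM_PER_YEAR = 2.5

for years in (1_000_000, 10_000_000, 100_000_000):
    distance_km = RATE_CM_PER_YEAR * years / 100 / 1000   # cm -> m -> km
    print(f"{years:>11,} years: about {distance_km:,.0f} km")
# 1 million years ~ 25 km; 100 million years ~ 2,500 km - enough to open an ocean basin.
```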
The interior of Earth plays an important role in plate tectonics. Scientists believe it is also responsible for Earth’s magnetic field. This field is vital to life because it shields the planet’s surface from harmful cosmic rays and from a steady stream of energetic particles from the Sun known as the solar wind.
Earth’s interior consists of the mantle and the core. The mantle and core make up by far the largest part of Earth’s mass. The distance from the base of the crust to the centre of the core is about 6,400 km (about 4,000 mi).
Scientists have learned about Earth’s interior by studying rocks that formed in the interior and rose to the surface. The study of meteorites, which are believed to be made of the same material that formed the Earth and its interior, has also offered clues about Earth’s interior. Finally, seismic waves generated by earthquakes provide geophysicists with information about the composition of the interior. The sudden movement of rocks during an earthquake causes vibrations that transmit energy through the Earth as waves. The way these waves travel through the interior of Earth reveals the nature of materials inside the planet.
The mantle consists of three parts: the lower part of the lithosphere, the region below it known as the asthenosphere, and the region below the asthenosphere called the lower mantle. The entire mantle extends from the base of the crust to a depth of about 2,900 km (about 1,800 mi). Scientists believe the asthenosphere is made up of mushy plastic-like rock with pockets of molten rock. The term asthenosphere is derived from Greek and means ‘a weak layer’. The asthenosphere’s soft, plastic quality allows plates in the lithosphere above it to shift and slide on top of the asthenosphere. This shifting of the lithosphere’s plates is the source of most tectonic activity. The asthenosphere is also the source of the basaltic magma that makes up much of the oceanic crust and rises through volcanic vents on the ocean floor.
The mantle consists of mostly solid iron-magnesium silicate rock mixed with many other minor components including radioactive elements. However, even this solid rock can flow like a ‘sticky’ liquid when it is subjected to enough heat and pressure.
The core is divided into two parts, the outer core and the inner core. The outer core is about 2,260 km (about 1,404 mi) thick. The outer core is a liquid region composed mostly of iron, with smaller amounts of nickel and sulfur in liquid form. The inner core is about 1,220 km (about 758 mi) thick. The inner core is solid and is composed of iron, nickel, and sulfur in solid form. The inner core and the outer core also contain a small percentage of radioactive material. The existence of radioactive material is one source of heat in Earth’s interior because as radioactive material decays, it gives off heat. Temperatures in the inner core may be as high as 6,650°C (12,000°F).
Scientists believe that Earth’s liquid iron core helps generate the magnetic field that surrounds Earth and shields the planet from harmful cosmic rays and the Sun’s solar wind. The idea that Earth is like a giant magnet was first proposed in 1600 by English physician and natural philosopher William Gilbert. Gilbert proposed the idea to explain why the magnetized needle in a compass points north. According to Gilbert, Earth’s magnetic field creates a magnetic north pole and a magnetic south pole. The magnetic poles do not correspond to the geographic North and South poles, however. Moreover, the magnetic poles wander and are not always in the same place. The north magnetic pole is currently close to Ellef Ringnes Island in the Queen Elizabeth Islands near the boundary of Canada’s Northwest Territories with Nunavut. The magnetic south pole lies just off the coast of Wilkes Land, Antarctica.
Not only do the magnetic poles wander, but they also reverse their polarity-that is, the north magnetic pole becomes the south magnetic pole and vice versa. Magnetic reversals have occurred at least 170 times over the past 100 million years. The reversals occur on average about every 200,000 years and take place gradually over a period of several thousand years. Scientists still do not understand why these magnetic reversals occur but think they may be related to Earth’s rotation and changes in the flow of liquid iron in the outer core.
Some scientists theorize that the flow of liquid iron in the outer core sets up electrical currents that produce Earth’s magnetic field. Known as the dynamo theory, this theory may be the best explanation yet for the origin of the magnetic field. Earth’s magnetic field operates in a region above Earth’s surface known as the magnetosphere. The magnetosphere is shaped in some respects like a teardrop with a long tail that trails away from the Earth due to the force of the solar wind.
Inside the magnetosphere are the Van Allen radiation belts, named for the American physicist James A. Van Allen, who discovered them in 1958. The Van Allen belts are regions where charged particles from the Sun and from cosmic rays are trapped and sent into spiral paths along Earth’s magnetic field lines. The radiation belts thereby shield Earth’s surface from these highly energetic particles. Occasionally, however, due to extremely strong magnetic fields on the Sun’s surface, which are visible as sunspots, a brief burst of highly energetic particles streams along with the solar wind. Because Earth’s magnetic field lines converge and are closest to the surface at the poles, some of these energetic particles sneak through and interact with Earth’s atmosphere, creating the phenomenon known as an aurora.
Most scientists believe that the Earth, the Sun, and all of the other planets and moons in the solar system formed about 4.6 billion years ago from a giant cloud of gas and dust known as the solar nebula. The gas and dust in this solar nebula originated in a star that ended its life in an explosion known as a supernova. The solar nebula consisted principally of hydrogen, the lightest element, but the nebula was also seeded with a smaller percentage of heavier elements, such as carbon and oxygen. All of the chemical elements we know were originally made in the star that became a supernova. Our bodies are made of these same chemical elements. Therefore, all of the elements in our solar system, including all of the elements in our bodies, originally came from this star-seeded solar nebula.
Due to the force of gravity, tiny clumps of gas and dust began to form in the early solar nebula. As these clumps came together and grew larger, they caused the solar nebula to contract in on itself. The contraction caused the cloud of gas and dust to flatten in the shape of a disc. As the clumps continued to contract, they became very dense and hot. Eventually the atoms of hydrogen became so dense that they began to fuse in the innermost part of the cloud, and these nuclear reactions gave birth to the Sun. The fusion of hydrogen atoms in the Sun is the source of its energy.
Many scientists favour the planetesimal theory for how the Earth and other planets formed out of this solar nebula. This theory helps explain why the inner planets became rocky while the outer planets, except Pluto, are made up mostly of gases. The theory also explains why all of the planets orbit the Sun in the same plane.
According to this theory, temperatures decreased with increasing distance from the centre of the solar nebula. In the inner region, where Mercury, Venus, Earth, and Mars formed, temperatures were low enough that certain heavier elements, such as iron and the other heavy compounds that make up rock, could condense out-that is, could change from a gas to a solid or liquid. Due to the force of gravity, small clumps of this rocky material eventually combined with the dust in the original solar nebula to form protoplanets, or planetesimals (small rocky bodies). These planetesimals collided, broke apart, and re-formed until they became the four inner rocky planets. The inner region, however, was still too hot for other light elements, such as hydrogen and helium, to be retained. These elements could only exist in the outermost part of the disc, where temperatures were lower. As a result, two of the outer planets-Jupiter and Saturn-are largely made of hydrogen and helium, which are also the dominant elements in the atmospheres of Uranus and Neptune.
Within the planetesimal Earth, heavier matter sank to the centre and lighter matter rose toward the surface. Most scientists believe that Earth was never truly molten and that this transfer of matter took place in the solid state. Much of the matter that went toward the centre contained radioactive material, an important source of Earth’s internal heat. As heavier material moved inward, lighter material moved outward, the planet became layered, and the layers of the core and mantle were formed. This process is called differentiation.
Not long after they formed, more than four billion years ago, the Earth and the Moon underwent a period when they were bombarded by meteorites, the rocky debris left over from the formation of the solar system. The impact craters created during this period of heavy bombardment are still visible on the Moon’s surface, which is unchanged. Earth’s craters, however, were long ago erased by weathering, erosion, and mountain building. Because the Moon has no atmosphere, its surface has not been subjected to weathering or erosion. Thus, the evidence of meteorite bombardment remains.
Energy released from the meteorite impacts created extremely high temperatures on Earth that melted the outer part of the planet and created the crust. By four billion years ago, both the oceanic and continental crust had formed, and the oldest rocks were created. These rocks are known as the Acasta Gneiss and are found in the Canadian territory of Nunavut. Due to the meteorite bombardment, the early Earth was too hot for liquid water to exist, and so life was impossible.
Geologists divide the history of the Earth into three eons: the Archaean Eon, which lasted from around four billion to 2.5 billion years ago; the Proterozoic Eon, which lasted from 2.5 billion to 543 million years ago; and the Phanerozoic Eon, which lasted from 543 million years ago to the present. Each eon is subdivided into different eras. For example, the Phanerozoic Eon includes the Paleozoic Era, the Mesozoic Era, and the Cenozoic Era. In turn, eras are further divided into periods. For example, the Paleozoic Era includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian Periods.
The Archaean Eon is subdivided into four eras, the Eoarchean, the Paleoarchean, the Mesoarchean, and the Neoarchean. The beginning of the Archaean is generally dated as the age of the oldest terrestrial rocks, which are about four billion years old. The Archaean Eon came to an end 2.5 billion years ago when the Proterozoic Eon began. The Proterozoic Eon is subdivided into three eras: the Paleoproterozoic Era, the Mesoproterozoic Era, and the Neoproterozoic Era. The Proterozoic Eon lasted from 2.5 billion years ago to 543 million years ago when the Phanerozoic Eon began. The Phanerozoic Eon is subdivided into three eras: the Paleozoic Era from 543 million to 248 million years ago, the Mesozoic Era from 248 million to 65 million years ago, and the Cenozoic Era from 65 million years ago to the present.
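One way to keep these nested divisions straight is to lay them out as a small data structure; the sketch below records the eons and their eras using only the boundary dates given in the last two paragraphs.

```python
# The geologic time scale described above, laid out as a nested data structure.
# Boundaries are in millions of years ago (Mya), as given in the text.
GEOLOGIC_TIME = {
    "Archaean Eon": {
        "span_mya": (4000, 2500),
        "eras": ["Eoarchean", "Paleoarchean", "Mesoarchean", "Neoarchean"],
    },
    "Proterozoic Eon": {
        "span_mya": (2500, 543),
        "eras": ["Paleoproterozoic", "Mesoproterozoic", "Neoproterozoic"],
    },
    "Phanerozoic Eon": {
        "span_mya": (543, 0),
        "eras": ["Paleozoic", "Mesozoic", "Cenozoic"],
    },
}

for eon, details in GEOLOGIC_TIME.items():
    start, end = details["span_mya"]
    end_label = "present" if end == 0 else f"{end} Mya"
    print(f"{eon}: {start} Mya to {end_label}; eras: {', '.join(details['eras'])}")
```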
Geologists base these divisions on the study and dating of rock layers or strata, including the fossilized remains of plants and animals found in those layers. Until the late 1800s, scientists could only determine the relative ages of rock strata. They knew that overall the top layers of rock were the youngest and formed most recently, while deeper layers of rock were older. The field of stratigraphy shed much light on the relative ages of rock layers.
The study of fossils also enabled geologists to establish the relative ages of different rock layers. The fossil record helped scientists determine how organisms evolved or when they became extinct. By studying rock layers around the world, geologists and paleontologists saw that the remains of certain animal and plant species occurred in the same layers, but were absent or altered in other layers. They soon developed a fossil index that also helped determine the relative ages of rock layers.
Beginning in the 1890s, scientists learned that radioactive elements in rock decay at a known rate. By studying this radioactive decay, they could determine an absolute age for rock layers. This type of dating, known as radiometric dating, confirmed the relative ages determined through stratigraphy and the fossil index and assigned absolute ages to the various strata. As a result, scientists were able to assemble Earth’s geologic time scale from the Archaean Eon to the present.
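The principle behind radiometric dating can be stated as a single formula: if a parent isotope decays with half-life t_half and every decayed parent atom is preserved as a daughter atom, the age of a mineral is t = (t_half / ln 2) × ln(1 + D/P), where D/P is the measured ratio of daughter to parent atoms. The sketch below illustrates the calculation for a hypothetical isotope with a one-billion-year half-life; it is a simplified illustration of the idea, not a description of any particular dating method.

```python
# Simplified radiometric age: t = (t_half / ln 2) * ln(1 + D/P), where P is the
# number of parent atoms remaining and D the number of daughter atoms produced
# by decay.  Assumes no daughter atoms at formation and no loss since - an
# idealization used for illustration only.
import math

def radiometric_age(daughter_to_parent_ratio: float, half_life_years: float) -> float:
    """Return the age in years implied by a measured daughter/parent ratio."""
    return (half_life_years / math.log(2)) * math.log(1 + daughter_to_parent_ratio)

HALF_LIFE = 1.0e9   # hypothetical isotope with a one-billion-year half-life

print(f"{radiometric_age(1.0, HALF_LIFE):.2e} years")  # D = P  -> one half-life, ~1.0e9
print(f"{radiometric_age(3.0, HALF_LIFE):.2e} years")  # D = 3P -> two half-lives, ~2.0e9
```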
The Precambrian is a time span that includes the Archaean and Proterozoic eons; it began roughly four billion years ago. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life. The Precambrian represents the oldest chapter in Earth’s history that can still be studied. Very little remains of Earth from the period of 4.6 billion to about four billion years ago due to the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years.
The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this time frame, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet because of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapour. As Earth continued to cool, the water vapour condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapour originally came from comets containing frozen water that struck Earth during meteorite bombardment.
By studying 2-billion-year-old rocks found in northwestern Canada, as well as 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth’s surface as early as the middle Precambrian. About a billion years ago, the Earth’s plates were centred around the South Pole and formed a super-continent called Rodinia. Slowly, pieces of this super-continent broke away from the central continent and travelled north, forming smaller continents.
Life originated during the Precambrian. The earliest fossil evidence of life consists of prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote’s hereditary material was copied unchanged. The first prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, and certainly by 3.5 billion years ago, and were anaerobic—that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth.
Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as cyanobacteria, or blue-green algae. These cyanobacteria gradually introduced oxygen into the atmosphere through photosynthesis. In shallow tropical waters, cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old and in rocks of the Gunflint Chert region of northwest Lake Superior that are about 2.1 billion years old.
For billions of years, life existed only in the simple form of prokaryotes. Prokaryotes were followed by the relatively more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or exchanging their hereditary material rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The final stretch of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. First discovered in the northern Flinders Ranges region of Australia in the mid-1940s and subsequently found in many locations throughout the world, these strange fossils may be the precursors of many fossil groups that were to explode in Earth's oceans in the Paleozoic Era.
At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion. Nothing like it has happened since. Most of the major groups of animals we know today made their first appearance during the Cambrian explosion. Most of the different ‘body plans’ found in animals today-that is, the way an animal’s body is designed, with heads, legs, rear ends, claws, tentacles, or antennae-also originated during this period.
Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species flourished while others failed. In fact, mass extinctions of animal and plant species were commonplace.
Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the super-continent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built several mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe.
Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the super-continent Pangaea and resulted in some of the grandest mountains in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented more than 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouth-like opening that faced east. This opening was the Tethys Ocean, which later closed as India moved northward, creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today’s Mediterranean Sea.
The Paleozoic ended with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out. The reason is not known for sure, but many scientists believe that huge volcanic outpourings of lavas in central Siberia, coupled with an asteroid impact, were joint contributing factors.
The Mesozoic Era, beginning 248 million years ago, is often characterized as the Age of Reptiles because reptiles were the dominant life forms during this era. Reptiles dominated not only on land, as dinosaurs, but also in the sea, as the plesiosaurs and ichthyosaurs, and in the air, as pterosaurs, which were flying reptiles.
The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth’s history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first birds and mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of plants. Flowering plants soon replaced conifers as the dominant form of vegetation during the Mesozoic Era.
The Mesozoic was an eventful era geologically, with many changes to Earth’s surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era. By the early Jurassic Period, Pangaea began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America’s western margin. This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. Much of this feature, known as the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire.
During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.
The Cenozoic Era, beginning about 65 million years ago, is the period when mammals became the dominant form of life on land. Human beings first appeared in the later stages of the Cenozoic Era. In short, the modern world as we know it, with its characteristic geographical features and its animals and plants, came into being. All of the continents that we know today took shape during this era.
A single catastrophic event may have been responsible for this relatively abrupt change from the Age of Reptiles to the Age of Mammals. Most scientists now believe that a huge asteroid or comet struck the Earth at the end of the Mesozoic and the beginning of the Cenozoic eras, causing the extinction of many forms of life, including the dinosaurs. Evidence of this collision came with the discovery of a large impact crater off the coast of Mexico’s Yucatán Peninsula and the worldwide finding of iridium, a metallic element rare on Earth but abundant in meteorites, in rock layers dated from the end of the Cretaceous Period. The extinction of the dinosaurs opened the way for mammals to become the dominant land animals.
The Cenozoic Era is divided into the Tertiary and the Quaternary periods. The Tertiary Period lasted from about 65 million to about 1.8 million years ago. The Quaternary Period began about 1.8 million years ago and continued to the present day. These periods are further subdivided into epochs, such as the Pleistocene, from 1.8 million to 10,000 years ago, and the Holocene, from 10,000 years ago to the present.
Early in the Tertiary Period, Pangaea was completely disassembled, and the modern continents were all clearly outlined. India and other continental masses began colliding with southern Asia to form the Himalayas. Africa and a series of smaller micro-continents began colliding with southern Europe to form the Alps. The Tethys Ocean was nearly closed and began to resemble today’s Mediterranean Sea. As the Tethys continued to narrow, the Atlantic continued to open, becoming an ever-wider ocean. Iceland appeared as a new island in later Tertiary time, and its active volcanism today shows that sea-floor spreading is still causing the country to grow.
Late in the Tertiary Period, about six million years ago, humans began to evolve in Africa. These early humans began to migrate to other parts of the world between two million and 1.7 million years ago.
The Quaternary Period marks the onset of the great ice ages. Many times, perhaps at least once every 100,000 years on average, vast glaciers 3 km (2 mi) thick invaded much of North America, Europe, and parts of Asia. The glaciers eroded considerable amounts of material that stood in their paths, gouging out U-shaped valleys. Anatomically modern human beings, known as Homo sapiens, became the dominant form of life in the Quaternary Period. Most anthropologists (scientists who study human life and culture) believe that anatomically modern humans originated only recently in Earth’s 4.6-billion-year history, within the past 200,000 years.
With the rise of human civilization about 8,000 years ago and especially since the Industrial Revolution in the mid-1700s, human beings began to alter the surface, water, and atmosphere of Earth. In doing so, they have become active geological agents, not unlike other forces of change that influence the planet. As a result, Earth’s immediate future depends largely on the behaviour of humans. For example, the widespread use of fossil fuels is releasing carbon dioxide and other greenhouse gases into the atmosphere and threatens to warm the planet’s surface. This global warming could melt glaciers and the polar ice caps, which could flood coastlines around the world and many island nations. In effect, the carbon dioxide removed from Earth’s early atmosphere by the oceans and by primitive plant and animal life, and subsequently buried as fossilized remains in sedimentary rock, is being released back into the atmosphere and is threatening the existence of living things.
Even without human intervention, Earth will continue to change because it is geologically active. Many scientists believe that some of these changes can be predicted. For example, based on studies of the rate that the sea-floor is spreading in the Red Sea, some geologists predict that in 200 million years the Red Sea will be the same size as the Atlantic Ocean is today. Other scientists predict that the continent of Asia will break apart millions of years from now, and as it does, Lake Baikal in Siberia will become a vast ocean, separating two landmasses that once made up the Asian continent.
In the far, far distant future, however, scientists believe that Earth will become an uninhabitable planet, scorched by the Sun. Knowing the rate at which nuclear fusion occurs in the Sun and knowing the Sun’s mass, astrophysicists (scientists who study stars) have calculated that the Sun will become brighter and hotter about three billion years from now, when it will be hot enough to boil Earth’s oceans away. Based on studies of how other Sun-like stars have evolved, scientists predict that the Sun will become a red giant, a star with a very large, hot atmosphere, about seven billion years from now. As a red giant the Sun’s outer atmosphere will expand until it engulfs the planet Mercury. The Sun will then be 2,000 times brighter than it is now and so hot it will melt Earth’s rocks. Earth will end its existence as a burnt cinder.
Three billion years is the life span of millions of human generations, however. Perhaps by then, humans will have learned how to journey through and beyond the solar system, begin to colonize other planets in our galaxy, and find yet another place to call ‘home’.
The Cenozoic era (65 million years ago to the present time) is divided into the Tertiary period (65 million to 1.6 million years ago) and the Quaternary period (1.6 million years ago to the present). However, because scientists have so much more information about this era, they tend to focus on the epochs that make up each period. During the first part of the Cenozoic era, an abrupt transition from the Age of Reptiles to the Age of Mammals occurred, when the large dinosaurs and other reptiles that had dominated the life of the Mesozoic era disappeared.
Index fossils of the Cenozoic tend to be microscopic, such as the tiny shells of foraminifera. They are commonly used, along with varieties of pollen fossils, to date the different rock strata of the Cenozoic era.
The Paleocene epoch (65 million to 55 million years ago) marks the beginning of the Cenozoic era. Seven groups of Paleocene mammals are known. All of them appear to have developed in northern Asia and to have migrated to other parts of the world. These primitive mammals had many features in common. They were small, with no species exceeding the size of a small modern bear. They were four-footed, with five toes on each foot, and they walked on the soles of their feet. Most of them had slim heads with narrow muzzles and small brain cavities. The predominant mammals of the period were members of three groups that are now extinct. They were the creodonts, which were the ancestors of modern carnivores; the amblypods, which were small, heavy-bodied animals; and the condylarths, which were light-bodied herbivorous animals with small brains. The Paleocene groups that have survived are the marsupials, the insectivores, the primates, and the rodents.
During the Eocene epoch (55 million to 38 million years ago), most direct evolutionary ancestors of modern animals appeared. Among these animals-all of which were small in stature-were the horse, rhinoceros, camel, rodent, and monkey. The creodonts and amblypods continued to develop during the epoch, but the condylarths became extinct before it ended. The first aquatic mammals, ancestors of modern whales, also appeared in Eocene times, as did such modern birds as eagles, pelicans, quail, and vultures. Changes in vegetation during the Eocene epoch were limited chiefly to the migration of types of plants in response to climate changes.
During the Oligocene epoch (38 million to 24 million years ago), most of the archaic mammals from earlier epochs of the Cenozoic era disappeared. In their place appeared representatives of many of the modern mammalian groups. The creodonts became extinct, and the first true carnivores, resembling dogs and cats, evolved. The first anthropoid apes also lived during this time, but they became extinct in North America by the end of the epoch. Two groups of animals that are now extinct flourished during the Oligocene epoch: the titanotheres, which were related to the rhinoceros and the horse; and the oreodonts, which were small, dog-like, grazing animals.
The development of mammals during the Miocene epoch (24 million to five million years ago) was influenced by an important evolutionary development in the plant kingdom: the first appearance of grasses. These plants, which were ideally suited for forage, encouraged the growth and development of grazing animals such as horses, camels, and rhinoceroses, which were abundant during the epoch. During the Miocene epoch, the mastodon evolved, and in Europe and Asia a gorilla-like ape, Dryopithecus, was common. Various types of carnivores, including cats and wolflike dogs, ranged over many parts of the world.
The paleontology of the Pliocene epoch (five million to 1.6 million years ago) does not differ much from that of the Miocene, although the period is regarded by many zoologists as the climax of the Age of Mammals. The Pleistocene Epoch (1.6 million to 10,000 years ago) in both Europe and North America was marked by an abundance of large mammals, most of which were basically modern in type. Among them were buffalo, elephants, mammoths, and mastodons. Mammoths and mastodons became extinct before the end of the epoch. In Europe, antelope, lions, and hippopotamuses also appeared. Carnivores included badgers, foxes, lynx, otters, pumas, and skunks, as well as now-extinct species such as the giant saber-toothed tiger. In North America, the first bears made their appearance as migrants from Asia. The armadillo and ground sloth migrated from South America to North America, and the musk-ox ranged southward from the Arctic regions. Modern human beings also emerged during this epoch.
Most biologists agree that animals evolved from simpler single-celled organisms. Exactly how this happened is unclear, because few fossils have been left to record the sequence of events. Faced with this lack of fossil evidence, researchers have attempted to piece together animal origins by examining the single-celled organisms alive today.
Modern single-celled organisms are classified into two kingdoms: the prokaryotes and the protists. Prokaryotes, which include bacteria, are very simple organisms and lack many features seen in animal cells. Protists, on the other hand, are more complex, and their cells contain all the specialized structures, or organelles, found in the cells of animals. One protist group, the choanoflagellates or collar flagellates, contains organisms that bear a striking resemblance to cells that are found in sponges. Most choanoflagellates live on their own, but significantly, some form permanent groups or colonies.
This tendency to form colonies is widely believed to have been an important stepping stone on the path to animal life. The next step in evolution would have involved a transition from colonies of independent cells to colonies containing specialized cells that were dependent on each other for survival. Once this development had occurred, such colonies would have effectively become single organisms. Increasing specialization among groups of cells could then have created tissues, triggering the long and complex evolution of animal bodies.
This conjectural sequence of events probably occurred along several parallel paths. One path led to the sponges, which retain a collection of primitive features that set them apart from all other animals. Another path led to two major subdivisions of the animal kingdom: the protostomes, which include arthropods, annelid worms, mollusks, and cnidarians; and the deuterostomes, which include echinoderms and chordates. Protostomes and deuterostomes differ fundamentally in the way they develop as embryos, strongly suggesting that they split from each other a long time ago.
Animal life first appeared perhaps a billion years ago, but for a long time after this, the fossil record remains almost blank. Fossils exist that seem to show burrows and other indirect evidence for animal life, but the first direct evidence of animals themselves appears about 650 million years ago, toward the end of the Precambrian period. At this time, the animal kingdom stood on the threshold of a great explosion in diversity. By the end of the Cambrian Period, 150 million years later, all of the main types of animal life existing today had become established.
When the first animals evolved, dry land was probably without any kind of life, except possibly bacteria. Without terrestrial plants, land-based animals would have had nothing to eat. When plants took up life on land more than 400 million years ago, however, that situation changed, and animals evolved that could use this new source of food. The first land animals included primitive wingless insects and probably a range of soft-bodied invertebrates that have not left fossil remains. The first vertebrates to move onto land were the amphibians, which appeared about 370 million years ago.
For all animals, life on land involved meeting some major challenges. Foremost among these were the need to conserve water and the need to extract oxygen from the air. Another problem concerned the effects of gravity. Water buoys up living things, but air, which is 750 times less dense than water, generates almost no buoyancy at all. To function effectively on land, animals needed support.
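The ‘750 times less dense’ figure can be turned into a concrete comparison. By Archimedes' principle, the fraction of a body's weight that buoyancy supports is the density of the surrounding medium divided by the density of the body; the sketch below assumes a body density close to that of water (about 1,000 kg per cubic metre), a value not stated in the text.

```python
# Fraction of body weight supported by buoyancy = density of medium / density of body
# (Archimedes' principle, for a fully immersed body).
BODY_DENSITY = 1000.0                  # kg/m^3, close to water; assumed value
WATER_DENSITY = 1000.0                 # kg/m^3
AIR_DENSITY = WATER_DENSITY / 750.0    # "750 times less dense than water", as above

for medium, density in (("water", WATER_DENSITY), ("air", AIR_DENSITY)):
    supported = density / BODY_DENSITY
    print(f"In {medium}: about {supported:.2%} of body weight supported by buoyancy")
# Water supports essentially all of the weight; air supports only about 0.13%.
```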
In soft-bodied land animals such as earthworms, this support is provided by a hydrostatic skeleton, which works by internal pressure. The animal's body fluids press out against its skin, giving the animal its shape. In insects and other arthropods, support is provided by an exoskeleton (external skeleton), while in vertebrates it is provided by bones. Exoskeletons can play a double role by helping animals to conserve water, but they have one important disadvantage: unlike an internal bony skeleton, their weight increases very rapidly as they get bigger, eventually making them too heavy to move. This explains why insects have remained relatively small, while some vertebrates have reached very large sizes.
Like other living things, animals evolve by adapting to and exploiting their surroundings. In the billion-year history of animal life, this process has generated a vast number of species, each able to use resources in a different way. Some of these species survive today, but they are a minority; a far greater number are extinct, having lost the struggle for survival.
Speciation, the birth of new species, usually occurs when a group of living things becomes isolated from others of its kind. Once this has occurred, the members of the group follow their own evolutionary path and adapt in ways that make them increasingly distinct. After a long period-typically thousands of years-the group's unique features mean that its members can no longer breed with their former relatives. At this point, a new species comes into being.
In animals, this isolation can come about in several different ways. The simplest form, geographical isolation, occurs when members of an original species become separated by a physical barrier. One example of such a barrier is the open sea, which isolates animals that have been accidentally stranded on remote islands. As the new arrivals adapt to their adopted home, they become ever more distinct from their mainland relatives. Sometimes the result is a burst of adaptive radiation, which produces several different species. In the Hawaiian Islands, for example, 22 species of honey-creepers have evolved from a single pioneering species of a finch-like bird.
Another type of isolation is thought to occur where there is no physical separation. Here, differences in behaviour, such as mate selection, may sometimes help to split a single species into distinct groups. If the differences persist for long enough, new species are created.
The fate of a new species depends very much on the environment in which it evolved. If the environment is stable and no new competitors appear on the scene, an animal species may change very little in hundreds of thousands of years. Nevertheless, if the environment changes rapidly and competitors arrive from outside, the struggle for survival is much more intense. In these conditions, either a species changes, or it eventually becomes extinct.
During the history of animal life, on at least five occasions, sudden environmental change has triggered simultaneous extinction on a massive scale. One of these mass extinctions occurred at the end of the Cretaceous Period, about 65 million years ago, killing all dinosaurs and perhaps two-thirds of marine species. An even greater mass extinction took place at the end of the Permian Period, about 248 million years ago. Many biologists believe that we are at present living in a sixth period of mass extinction, this time triggered by human beings.
Compared with plants, animals make up only a small part of the total mass of living matter on Earth. Despite this, they play an important part in shaping and maintaining natural environments.
Many habitats are directly influenced by the way animals live. Grasslands, for example, exist partly because grasses and grazing animals have evolved a close partnership, which prevents other plants from taking hold. Tropical forests also owe their existence to animals, because most of their trees rely on animals to distribute their pollen and seeds. Soil is partly the result of animal activity, because earthworms and other invertebrates help to break down dead remains and recycle the nutrients that they contain. Without its animal life, the soil would soon become compacted and infertile.
By preying on each other, animals also help to keep their own numbers in check. This prevents abrupt population peaks and crashes and helps to give living systems a built-in stability. On a global scale, animals also influence some of the nutrient cycles on which almost all life depends. They distribute essential mineral elements in their waste, and they help to replenish the atmosphere's carbon dioxide when they breathe. This carbon dioxide is then used by plants as they grow.
Until relatively recently in human history, people existed as nomadic hunter-gatherers. They used animals primarily as a source of food and for raw materials that could be used for making tools and clothes. By today's standards, hunter-gatherers were equipped with rudimentary weapons, but they still had a major impact on the numbers of some species. Many scientists believe, for example, that humans were involved in a cluster of extinctions that occurred about 12,000 years ago in North America. In less than a millennium, two-thirds of the continent's large mammal species disappeared.
This simple relationship between people and animals changed with domestication, which also began about 12,000 years ago. Instead of being actively hunted, domesticated animals were slowly brought under human control. Some were kept for food or for clothing, others for muscle power, and some simply for companionship.
The first animal to be domesticated was almost certainly the dog, which was bred from wolves. It was followed by species such as the cat, horse, camel, llama, and aurochs (a species of wild cattle), and by the Asian jungle fowl, which is the ancestor of today's chickens. Through selective breeding, each of these animals has been turned into forms that are particularly suitable for human use. Today, many domesticated animals, including chickens, vastly outnumber their wild counterparts. Sometimes, as with the horse, the original wild species has died out altogether.
Over the centuries, many domesticated animals have been introduced into different parts of the world, only to escape and establish themselves in the wild. Together with stowaway pests such as rats, these ‘feral’ animals have often affected native wildlife. Cats, for example, have inflicted great damage on Australia's smaller marsupials, and feral pigs and goats continue to be serious problems for the native wildlife of the Galápagos Islands.
Despite the growth of domestication, humans continue to hunt some wild animals. Some forms of hunting are carried out mainly for sport, but others provide food or animal products. Until recently, one of the most significant of these forms of hunting was whaling, which reduced many whale stocks to the brink of extinction. Today, highly efficient sea fishing threatens some species of fish with the same fate.
Since the beginning of agriculture, the human population has increased by more than two thousand times. To provide the land needed for growing food and housing people, large areas of the earth's landscapes have been completely transformed. Forests have been cut down, wetlands drained, and deserts irrigated, reducing these natural habitats to a fraction of their former extent.
Some species of animals have managed to adapt to these changes. A few, such as the brown rat, raccoon, and house sparrow, have benefited by exploiting the new opportunities that have opened up, and have successfully taken up life on farms or in towns and cities. Nonetheless, most animals have specialized ways of life that make them dependent on a particular kind of habitat. With the destruction of their habitats, their numbers inevitably decline.
In the 20th century, animals have also had to face additional threats from human activities. Foremost among these are environmental pollution and the increasing demand for resources such as timber and fresh water. For some animals, the combination of these changes has proved so damaging that their numbers are now below the level needed to guarantee survival.
Across the world, efforts are currently underway to address this urgent problem. In the most extreme cases, gravely threatened animals can be helped by taking them into captivity and then releasing them once breeding programs have increased their number. One species saved in this way is the Hawaiian mountain goose, or nēnē. In 1951, its population had been reduced to just 33. Captive breeding has since increased the population to more than 2,500, removing the immediate threat of extinction.
While captive breeding is a useful emergency measure, it cannot assure the long-term survival of a species. Today animal protection focuses primarily on the preservation of entire habitats, an approach that maintains the necessary links between the different species the habitats support. With the continued growth in the world's human population, habitat preservation will require a sustained reduction in our use of the world's resources to minimize our impact on the natural world.
Paleontologists gain most of their information by studying deposits of sedimentary rocks that formed in strata over millions of years. Most fossils are found in sedimentary rock. Paleontologists use fossils and other characteristics of the rock to compare strata around the world. By making such comparisons, they can determine whether strata developed during the same time or in the same type of environment. This helps them assemble a general picture of how the earth evolved. The study and comparison of different strata is called stratigraphy.
Fossils provide most of the data by which strata are compared. Some fossils, called index fossils, are especially useful because they have a broad geographic range but a narrow temporal one; that is, they represent a species that was widespread but existed for only a brief period of time. The best index fossils tend to be marine creatures. These animals evolved rapidly and spread over large areas of the world. Paleontologists divide the last 570 million years of the earth's history into eras, periods, and epochs. The part of the earth's history before about 570 million years ago is called Precambrian time, which began with the earth's birth, probably more than four billion years ago.
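As a rough illustration of the reasoning behind index fossils described above, the toy sketch below (in Python) estimates the age of a stratum by intersecting the known time ranges of the fossils it contains, and checks whether two strata could be contemporaneous. The species names and date ranges are invented for illustration, not real data.

```python
# Toy model of biostratigraphic correlation. Species names and the
# time ranges (in millions of years ago, Ma) are hypothetical.
FOSSIL_RANGES = {
    "ammonite_A": (155.0, 150.0),   # (oldest, youngest) in Ma
    "trilobite_B": (520.0, 500.0),
    "foram_C": (158.0, 152.0),
}

def stratum_age(fossils):
    """Intersect the time ranges of the fossils found in one stratum."""
    oldest = min(FOSSIL_RANGES[f][0] for f in fossils)
    youngest = max(FOSSIL_RANGES[f][1] for f in fossils)
    if youngest > oldest:
        raise ValueError("fossil ranges do not overlap")
    return oldest, youngest

def contemporaneous(stratum1, stratum2):
    """Two strata could be the same age if their estimated ranges overlap."""
    o1, y1 = stratum_age(stratum1)
    o2, y2 = stratum_age(stratum2)
    return min(o1, o2) >= max(y1, y2)

# A stratum containing both ammonite_A and foram_C must have formed
# between 155 and 152 Ma - the overlap of the two ranges.
print(stratum_age(["ammonite_A", "foram_C"]))        # (155.0, 152.0)
print(contemporaneous(["ammonite_A"], ["foram_C"]))  # True
```

The narrower a fossil's time range, the tighter the estimate it gives, which is why short-lived but widespread species make the best index fossils.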
The earliest evidence of life consists of microscopic fossils of bacteria that lived as early as 3.6 billion years ago. Most Precambrian fossils are very tiny. Most species of larger animals that lived in later Precambrian time had soft bodies, without shells or other hard body parts that would create lasting fossils. The first abundant fossils of larger animals date from about 600 million years ago.
At first glance, the sudden jump from 8000 BC to 10,000 years ago looks peculiar. On reflection, however, the time-line has clearly not lost 2,000 years. Rather, the time-line has merely shifted from one convention of measuring time to another. To understand the reasons for this shift, it will help to understand some of the different conventions used to measure time.
All human societies have faced the need to measure time. Today, for most practical purposes, we keep track of time with the aid of calendars, which are widely and readily available in printed and computerized forms throughout the world. However, long before humans developed any formal calendar, they measured time based on natural cycles: the seasons of the year, the waxing and waning of the moon, the rising and setting of the sun. Understanding these rhythms of nature was necessary for humans so they could be successful in hunting animals, catching fish, and collecting edible nuts, berries, roots, and vegetable matter. The availability of these animals and plants varied with the seasons, so early humans needed at least a practical working knowledge of the seasons to eat. When humans eventually developed agricultural societies, it became crucial for farmers to know when to plant their seeds and harvest their crops. To ensure that farmers had access to reliable knowledge of the seasons, early agricultural societies in Mesopotamia, Egypt, China, and other lands supported specialists who kept track of the seasons and created the world’s first calendars. The earliest surviving calendars date from around 2400 BC.
As societies became more complex, they required increasingly precise ways to measure and record increments of time. For example, some of the earliest written documents recorded tax payments and sales transactions, and indicating when they took place was important. Otherwise, anyone reviewing the documents later would find it impossible to determine the status of an individual account. Without any general convention for measuring time, scribes (persons who wrote documents) often dated events by the reigns of local rulers. In other words, a scribe might indicate that an individual’s tax payment arrived in the third year of the reign (or third regnal year) of the Assyrian ruler Tiglath-Pileser. By consulting and comparing such records, authorities could determine whether the individual was up to date in tax payments.
These days, scholars and the public alike refer to time on many different levels, and they consider events and processes that took place at any time from the big bang to the present. Meaningful discussion of the past depends on some generally observed frames of reference that organize time coherently and allow us to understand the chronological relationships between historical events and processes.
For contemporary events, the most common frame of reference is the Gregorian calendar, which organizes time around the supposed birth date of Jesus of Nazareth. This calendar refers to dates before Jesus’ birth as BC (‘before Christ’) and those afterwards as AD (anno Domini, Latin for ‘in the year of the Lord’). Scholars now believe that Jesus was born four to six years before the year recognized as AD 1 in the Gregorian calendar, so this division of time is probably off its intended mark by a few years. Nonetheless, even overlooking this point, the Gregorian calendar is not meaningful or useful for references to events in the so-called deep past, a period so long ago that to be very precise about dates is impossible. Saying that the big bang took place in the year 15,000,000,000 BC would be misleading, for example. No one knows exactly when the big bang took place, and even if someone did, there would be little point in dating that moment and everything that followed from it according to an event that took place some 14,999,998,000 years later. For purposes of dating events and processes in the deep past and remote prehistory, then, scientists and historians have adopted different principles of measuring time.
In conventional usage, prehistory refers to the period before humans developed systems for writing, while the historical era refers to the period after written documents became available. This usage became common in the 19th century, when professional historians began to base their studies of the past largely on written documentation. Historians regarded written source materials as more reliable than the artistic and artifactual evidence studied by archaeologists working on prehistoric times. Recently, however, the distinction between prehistory and the historical era has become much more blurred than it was in the 19th century. Archaeologists have unearthed rich collections of artifacts that throw considerable light on so-called prehistoric societies. Moreover, contemporary historians realize much better than did their predecessors that written documentary evidence raises as many questions as it answers. In any case, written documents illuminate only selected dimensions of experience. Despite these nuances of historical scholarship, for purposes of dating events and processes in times past, the distinction between the terms prehistory and the historical era remains useful. For the deep past and prehistory, establishing precise dates is rarely possible: only in the case of a few natural and celestial phenomena, such as eclipses and appearances of comets, are scientists able to infer relatively precise dates. For the historical era, on the other hand, precise dates can be established for many events and processes, although certainly not for all.
Since the Gregorian calendar is not especially useful for dating events in the distant period long before the historical era, many scientists who study the deep past refer not to years ‘BC’ or ‘AD’ but to years ‘before the present’. Astronomers and physicists, for example, believe the big bang took place between 10 billion and 20 billion years ago, and that planet Earth came into being about 4.65 billion years ago. When dealing with Earth’s physical history and life forms, geologists often dispense with year references altogether and divide time into named spans. These time spans are conventionally called eons (the longest span), eras, periods, and epochs (the shortest span). Since obtaining precise dates for distant times is impossible, they simply refer to the Proterozoic Eon (2.5 billion to 570 million years ago), the Mesozoic Era (240 million to 65 million years ago), the Jurassic Period (205 million to 138 million years ago), or the Pleistocene Epoch (1.6 million to 10,000 years ago).
Because the Pleistocene Epoch is a comparatively recent time span, archaeologists and prehistorians are frequently able to assign at least approximate year dates to artifacts from that period. As with all dates in the distant past, however, it would be misleading to follow the principles of the Gregorian calendar and refer to dates BC. As a result, archaeologists and prehistorians often designate these dates BP (‘before the present’), with the understanding that all dates BP are approximate. Thus, scholars date the evolution of Homo sapiens to about 130,000 BP and the famous cave paintings at Lascaux in southern France to about 15,000 BP.
The Dynamic Timeline designates all dates before 8000 BC as dates before the present, and categorizes all dates since 8000 BC according to the Gregorian calendar. Thus, a backward scroll in the time-line will take users from 7700 BC to 7800 BC, 7900 BC, and 8000 BC, and then to 10,000 years ago. Note that the time-line has not lost 2,000 years! Since roughly 2,000 years have passed since the start of the AD era, 8000 BC corresponds to about 10,000 years ago. To date events this far back in time, the Dynamic Timeline has simply switched to a different convention of designating the dates of historical events.
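The arithmetic behind the switch can be made explicit. The minimal sketch below (Python) assumes, as the passage does, that ‘the present’ is anchored around AD 2000, so that 8000 BC comes out as roughly 10,000 years ago; the anchor year is an assumption, not something fixed by the calendar itself.

```python
PRESENT_YEAR = 2000  # assumed anchor for "the present", as implied by the
                     # passage's equation of 8000 BC with 10,000 years ago

def years_before_present(year, era):
    """Convert a Gregorian-style date to approximate years before the present."""
    if era.upper() == "BC":
        # There is no year 0: 1 BC is immediately followed by AD 1.
        return PRESENT_YEAR + year - 1
    elif era.upper() == "AD":
        return PRESENT_YEAR - year
    raise ValueError("era must be 'BC' or 'AD'")

print(years_before_present(8000, "BC"))  # 9999 -> roughly 10,000 years ago
print(years_before_present(622, "AD"))   # 1378 years before AD 2000
```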
Written documentation enables historians to establish relatively precise dates of events in the historical era. However, placing these events in chronological order requires some agreed upon starting points for a frame of reference. For purposes of maintaining proper tax accounts in a Mesopotamian city-state, dating an event in relation to the first year of a king’s reign might be sufficient. For purposes of understanding the development of entire peoples or societies or regions, however, a collection of dates according to the regnal years of many different local rulers would quickly become confusing. Within a given region there might be many different local rulers, so efforts to establish the chronological relationship between events may entail an extremely tedious collation of all the rulers’ regnal years. Thus, to facilitate the understanding of chronological relationships between events in different jurisdictions, some larger frame of reference is necessary. Most commonly these larger frames of reference take the form of calendars, which not only make it possible to predict changes in the seasons but also enable users to organize their understanding of time and appreciate the relationships between datable events.
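To see why collating regnal years is tedious without a shared frame of reference, consider the toy sketch below: each locally dated event must first be converted to a common timeline before any two events can be ordered. The accession years and the counting convention (regnal year 1 = the year the ruler came to the throne) are hypothetical assumptions made for illustration, not figures from the passage.

```python
# Hypothetical accession years, stored as negative numbers so that
# later dates are numerically larger (negative = BC).
ACCESSION_YEAR = {
    "Tiglath-Pileser": -745,
    "RulerOfCityB": -741,
}

def to_common_year(ruler, regnal_year):
    """Convert 'regnal year N of ruler X' to a year on a shared timeline."""
    return ACCESSION_YEAR[ruler] + regnal_year - 1

events = [
    ("tax payment", "Tiglath-Pileser", 3),   # 3rd regnal year
    ("sale of land", "RulerOfCityB", 2),     # 2nd regnal year
]

# Only after conversion can events from different jurisdictions be ordered.
for name, ruler, ry in sorted(events, key=lambda e: to_common_year(e[1], e[2])):
    print(f"{name}: {-to_common_year(ruler, ry)} BC")
```

A shared calendar does this conversion once and for all, which is exactly the service the larger frames of reference described above provide.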
Different civilizations have devised thousands of different calendars. Of the 40 or so calendars employed in the world today, the most widely used is the Gregorian calendar, introduced in 1582 by Pope Gregory XIII. The Gregorian calendar revised the Julian calendar, instituted by Julius Caesar in 45 BC, to bring it closer into line with the seasons. Most Roman Catholic lands accepted the Gregorian calendar upon its promulgation by Gregory in 1582, but other lands adopted it much later: Britain in 1752, Russia in 1918, and Greece in 1923. During the 20th century it became the dominant calendar throughout the world, especially for purposes of international business and diplomacy.
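The main technical difference between the two calendars lies in their leap-year rules, which is what brought the Gregorian calendar closer into line with the seasons. A brief sketch:

```python
def is_leap_julian(year):
    """Julian rule: every fourth year is a leap year."""
    return year % 4 == 0

def is_leap_gregorian(year):
    """Gregorian rule: every fourth year, except century years
    that are not divisible by 400 (so 1900 is not a leap year, 2000 is)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The Julian year averages 365.25 days; the Gregorian averages 365.2425,
# much closer to the solar year of about 365.2422 days.
print(is_leap_julian(1900), is_leap_gregorian(1900))   # True False
print(is_leap_julian(2000), is_leap_gregorian(2000))   # True True
```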
Despite the prominence of the Gregorian calendar in the modern world, millions of people use other calendars as well. The oldest calendar still in use is the Jewish calendar, which dates time from the creation of the world in the (Gregorian) year 3761 BC, according to the Hebrew scriptures. The year AD 2000 in the Gregorian calendar thus corresponds to the year AM 5761 in the Jewish calendar (AM stands for anno mundi, Latin for ‘the year of the world’). The Jewish calendar is the official calendar of Israel, and it also serves as a religious calendar for Jews worldwide.
The Chinese use another calendar which, as tradition holds, takes its point of departure in the year 2697 BC, in honour of a beneficent ruler’s work. The year AD 2000 of the Gregorian calendar thus corresponds to the year 4697 in the Chinese calendar. The Maya calendar began even earlier than the Chinese, on August 11, 3114 BC; Maya scribes calculated that this is when the cycle of time began. The Maya actually used two interlocking calendars: one a 365-day calendar based on the cycles of the sun, the other a sacred almanac used to calculate auspicious or unlucky days. Despite the importance of these calendars to the Maya civilization, the calendars passed out of general use after the Spanish conquest of Mexico in the 16th century AD.
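For the year correspondences just quoted, the conversions reduce to simple offsets. The sketch below reproduces the figures in the passage; note that it is only approximate, since the Jewish and Chinese new years do not fall on 1 January, so the true correspondence can shift by one for part of the Gregorian year.

```python
def gregorian_to_jewish(ad_year):
    """Approximate anno mundi year: the Jewish era begins in 3761 BC."""
    return ad_year + 3761

def gregorian_to_chinese(ad_year):
    """Approximate Chinese calendar year: the era begins in 2697 BC."""
    return ad_year + 2697

print(gregorian_to_jewish(2000))   # 5761, as in the passage
print(gregorian_to_chinese(2000))  # 4697, as in the passage
```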
The youngest calendar in widespread use today is the Islamic lunar calendar, which begins the day after the Hegira, Muhammad’s migration from Mecca to Medina in AD 622. The Islamic calendar is the official calendar in many Muslim lands, and it governs religious observances for Muslims worldwide. Since it reckons time according to lunar rather than solar cycles, the Islamic calendar does not neatly correspond to the Gregorian and other solar calendars. For example, although there were 1,378 solar years between Muhammad’s Hegira and AD 2000, that year corresponds to the year 1420 in the Islamic calendar. Like the Gregorian calendar, and despite their many differences, the Jewish, Chinese, and Islamic calendars all make it possible to place individual datable events in proper chronological order.
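The gap between the 1,378 solar years and the roughly 1,420 Islamic years follows from the length of the lunar year. A rough check (the year lengths are standard astronomical averages, and the calculation ignores the exact start dates of the two eras):

```python
SOLAR_YEAR_DAYS = 365.2425   # average Gregorian year
LUNAR_YEAR_DAYS = 354.367    # twelve lunar months of about 29.53 days each

solar_years = 2000 - 622                  # from the Hegira to AD 2000
days = solar_years * SOLAR_YEAR_DAYS
lunar_years = days / LUNAR_YEAR_DAYS

print(solar_years)          # 1378
print(round(lunar_years))   # about 1420, matching the Islamic year count
```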
Recently, controversies have arisen concerning the Gregorian calendar’s designation of BC and AD to indicate years before and after the birth of Jesus Christ. This practice originated in the 6th century AD with a Christian monk named Dionysius Exiguus. Like other devout Christians, Dionysius regarded the birth of Jesus as the singular turning point of history. Accordingly, he introduced a system that referred to events in time based on the number of years they occurred before or after Jesus’ birth. The system caught on very slowly. Saint Bede the Venerable, a prominent English monk and historian, employed the system in his own works in the 8th century AD, but the system came into general use only about AD 1400. (Until then, Christians generally calculated time according to regnal years of prominent rulers.) When Pope Gregory XIII ordered the preparation of a new calendar in the 16th century, he intended it to serve as a religious calendar as well as a tool for predicting seasonal changes. As leader of the Roman Catholic Church, Pope Gregory considered it proper to continue recognizing Jesus’ birth as the turning point of history.
As lands throughout the world adopted the Gregorian calendar, however, the specifically Christian implications of the terms BC and AD did not seem appropriate for use by non-Christians. Indeed, they did not even seem appropriate to many Christians when dates referred to events in non-Christian societies. Why should Buddhists, Hindus, Muslims, or others date time according to the birth of Jesus? To preserve the Gregorian calendar as a widely observed international standard for reckoning time, while also avoiding the specifically Christian implications of the qualifications BC and AD, scholars replaced the birth of Jesus with the notion of ‘the common era’ and began to qualify dates as BCE (‘before the common era’) or CE (‘in the common era’). For the practical purpose of organizing time, BCE is the exact equivalent of BC, and CE is the exact equivalent of AD, but the terms BCE and CE have very different connotations than do BC and AD.
The qualifications BCE and CE first came into general use after World War II (1939-1945) among biblical scholars, particularly those who studied Judaism and early Christianity in the period from the 1st century BC (or BCE) to the 1st century AD (or CE). From their viewpoint, this “common era” was an age when proponents of Jewish, Christian, and other religious faiths intensively interacted and debated with one another. Using the designations BCE and CE enabled them to continue employing a calendar familiar to them all while avoiding the suggestion that all historical time revolved around the birth of Jesus Christ. As the Gregorian calendar became prominent throughout the world in the 20th century, many peoples were eager to find terms more appealing to them than BC and AD, and accordingly, the BCE and CE usage became increasingly popular. This usage represents only the most recent of many efforts by the world’s peoples to devise meaningful frameworks of time.
Most scientists believe that the Earth, Sun, and all of the other planets and moons in the solar system formed about 4.6 billion years ago from a giant cloud of gas and dust known as the solar nebula. The gas and dust in this solar nebula originated in a star that ended its life in an explosion known as a supernova. The solar nebula consisted principally of hydrogen, the lightest element, but the nebula was also seeded with a smaller percentage of heavier elements, such as carbon and oxygen. All of the chemical elements we know were originally made in the star that became a supernova. Our bodies are made of these same chemical elements. Therefore, all of the elements in our solar system, including all of the elements in our bodies, originally came from this star-seeded solar nebula.
Due to the force of gravity, tiny clumps of gas and dust began to form in the early solar nebula. As these clumps came together and grew larger, they caused the solar nebula to contract in on itself. The contraction caused the cloud of gas and dust to flatten into the shape of a disc. As the clumps continued to contract, they became very dense and hot. Eventually the atoms of hydrogen became so dense that they began to fuse in the innermost part of the cloud, and these nuclear reactions gave birth to the Sun. The fusion of hydrogen atoms in the Sun is the source of its energy.
Many scientists favour the planetesimal theory for how the Earth and other planets formed out of this solar nebula. This theory helps explain why the inner planets became rocky while the outer planets, except Pluto, are made up mostly of gases. The theory also explains why all of the planets orbit the Sun in the same plane.
According to this theory, temperatures decreased with increasing distance from the centre of the solar nebula. In the inner region, where Mercury, Venus, Earth, and Mars formed, temperatures were low enough that certain heavier elements, such as iron and the other heavy compounds that make up rock, could condense out; that is, they could change from a gas to a solid or liquid. Due to the force of gravity, small clumps of this rocky material eventually came together with the dust in the original solar nebula to form protoplanets, or planetesimals (small rocky bodies). These planetesimals collided, broke apart, and re-formed until they became the four inner rocky planets. The inner region, however, was still too hot for other light elements, such as hydrogen and helium, to be retained. These elements could only exist in the outermost part of the disc, where temperatures were lower. As a result, two of the outer planets, Jupiter and Saturn, are mostly made of hydrogen and helium, which are also the dominant elements in the atmospheres of Uranus and Neptune.