Other phenomena, which occur in the interaction between radiation and matter, can also be explained only by the quantum theory. Thus, modern physicists were forced to recognize that electromagnetic radiation can sometimes behave like a particle, and sometimes behave like a wave. The parallel concept, that matter also exhibits the same duality of particle-like and wavelike characteristics, was developed in 1923 by the French physicist Louis Victor, Prince de Broglie.
Planck’s constant is a fundamental physical constant, symbol h. It was first discovered (1900) by the German physicist Max Planck. Until that year, light in all forms had been thought to consist of waves. Planck noticed certain deviations from the wave theory of light on the part of radiations emitted by so-called ‘black bodies’, or perfect absorbers and emitters of radiation. He came to the conclusion that these radiations were emitted in discrete units of energy, called quanta. This conclusion was the first enunciation of the quantum theory. According to Planck, the energy of a quantum of light is equal to the frequency of the light multiplied by a constant. His original theory has since had abundant experimental verification, and the growth of the quantum theory has brought about a fundamental change in the physicist's concept of light and matter, both of which are now thought to combine the properties of waves and particles. Thus, Planck's constant has become as important to the investigation of particles of matter as to quanta of light, now called photons. The first successful measurement (1916) of Planck's constant was made by the American physicist Robert Millikan. The present accepted value of the constant is
h = 6.626 × 10⁻³⁴ joule-second in the metre-kilogram-second system.
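Planck's relation can be made concrete with a short calculation. The following Python sketch uses the standard value of h; the green-light frequency is an assumed illustrative figure, not taken from the text:

```python
# Planck's relation: the energy of one quantum (photon) is E = h * f.
h = 6.626e-34          # Planck's constant, joule-seconds

def photon_energy(frequency_hz):
    """Energy in joules of a photon with the given frequency."""
    return h * frequency_hz

# Green light has a frequency of roughly 5.5e14 Hz (assumed value).
e_green = photon_energy(5.5e14)
print(e_green)   # about 3.6e-19 J
```

Multiplying a typical visible-light frequency by h gives an energy of only a few tenths of a billionth of a billionth of a joule, which is one reason quantum effects went unnoticed for so long.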
A photon is a particle of light energy, or energy that is generated by moving electric charges. Energy generated by moving charges is called electromagnetic radiation. Visible light is one kind of electromagnetic radiation. Other kinds of radiation include radio waves, infrared waves, and X-rays. All such radiation sometimes behaves like a wave and sometimes behaves like a particle. Scientists use the concept of a photon to describe the effects of radiation when it behaves like a particle.
Most photons are invisible to humans. Humans only see photons with energy levels that fall within a certain range. We describe these visible photons as visible light. Invisible photons include radio and television signals, photons that heat food in microwave ovens, the ultraviolet light that causes sunburn, and the X-rays doctors use to view a person’s bones.
The photon is an elementary particle, or a particle that cannot be split into anything smaller. It carries the electromagnetic force, one of the four fundamental forces of nature, between particles. The electromagnetic force occurs between charged particles or between magnetic materials and charged particles. Electrically charged particles attract or repel each other by exchanging photons back and forth.
Photons are particles with no electrical charge and no mass, but they do have energy and momentum, a property that allows photons to affect other particles when they collide with them. Photons travel at the speed of light, which is about 300,000 km/sec (about 186,000 mi/sec). Only objects without mass can travel at the speed of light. Objects with mass must travel at slower speeds, and nothing can travel at speeds faster than the speed of light.
The energy of a photon is equal to the product of a constant number called Planck’s constant multiplied by the frequency, or number of vibrations per second, of the photon. Scientists write the equation for a photon’s energy as E = hν, where h is Planck’s constant and ν (the Greek letter nu) is the frequency. Photons with high frequencies, such as X-rays, carry more energy than do photons with low frequencies, such as radio waves. Photons that are visible to the human eye have energy levels around one electron volt (eV) and frequencies from 10¹⁴ to 10¹⁵ Hz (hertz, or cycles per second). The number 10¹⁴ is a 1 followed by 14 zeros. The frequency of visible photons corresponds to the colour of their light. Photons of violet light have the highest frequencies of visible light, while photons of red light have the lowest frequencies. Gamma rays, the highest-energy photons of all, have energies in the 1 GeV range (10⁹ eV) and frequencies higher than 10¹⁸ Hz. Gamma rays are only produced in special experimental devices called particle accelerators and in outer space.
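Because energy is proportional to frequency, each frequency quoted above maps to a definite energy in electron volts. A small Python sketch, where the sample frequencies are assumed representative values for each band rather than figures from the text:

```python
h = 6.626e-34        # Planck's constant, J*s
eV = 1.602e-19       # joules per electron volt

def energy_ev(frequency_hz):
    """Photon energy in electron volts, via E = h * frequency."""
    return h * frequency_hz / eV

print(energy_ev(1e8))    # radio wave: ~4e-7 eV
print(energy_ev(5e14))   # visible light: ~2 eV, on the electron-volt scale
print(energy_ev(1e18))   # X-ray region: ~4e3 eV
```

The visible-light result comes out near one electron volt, matching the range the article gives, while radio photons carry roughly ten million times less energy.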
Although momentum is usually considered a property of objects with mass, photons also have momentum. Momentum determines the amount of force, or pressure, that an object exerts when it hits a surface. In classical physics, or physics that deals with the behaviour of objects we encounter in everyday life, momentum is equal to the product of the mass of an object multiplied by its velocity (the combination of its speed and direction). While photons do not have mass, scientists have found that they exert extremely small amounts of pressure when they strike surfaces. Scientists have redefined momentum to include the force exerted by photons, called light pressure or radiation pressure.
Philosophers from as far back in history as the Greeks of the 5th century BC have thought about the nature of light. In the 1600s, scientists began to argue over whether light is made of particles or waves. In the 1860s, British physicist James Clerk Maxwell discovered electromagnetic waves, waves of electromagnetic energy that travel at the speed of light. He determined that light is made of these waves, and his theory seemed to settle the wave versus particle issue. His conclusion that light is made of waves is still valid. However, in 1900 German physicist Max Planck renewed the argument that light could also act like particles, and these particles became known as photons. He developed the idea of photons to explain why substances, when heated to higher and higher temperatures, would glow with light of different colours. The wave theory could not explain why the colours changed with temperature changes.
Most scientists did not pay attention to Planck’s theory until 1905, when Albert Einstein used the idea of photons to explain an interaction he had studied called the photoelectric effect. In this interaction, light shining on the surface of a metal causes the metal to emit electrons. Electrons escape the metal by absorbing energy from the light. Einstein showed that light behaves as particles in this situation. If the light behaved like waves, each electron could absorb many light waves and gain ever more energy. He found, however, that a more intense beam of light, with more light waves, did not give each electron more energy. Instead, more light caused the metal to release more electrons, each of which had the same amount of energy. Each electron had to be absorbing a small piece of the light beam, or a particle of light, and all these pieces had the same amount of energy. A beam of light with a higher frequency contained pieces of light with more energy, so when electrons absorbed these particles, they too had more energy. This could only be explained using the photon view of radiation, in which each electron absorbs a single photon and gains enough energy to escape the metal.
Today scientists believe that light behaves both as a wave and as a particle. Scientists detect photons as discrete particles, and photons interact with matter as particles. However, light travels in the form of waves. Some experiments reveal the wave properties of light; for example, in diffraction, light spreads out from a small opening in waves, much like waves of water would behave. Other experiments, such as Einstein’s study of the photoelectric effect, reveal light’s particle properties.
Closely associated with quantum theory is the uncertainty principle, which in quantum mechanics states that it is impossible to specify simultaneously, with precision, both the position and the momentum of a particle such as an electron. Also called the indeterminacy principle, it further states that a more accurate determination of one quantity results in a less precise measurement of the other, and that the product of the two uncertainties is never less than a quantity of the order of Planck's constant, named after the German physicist Max Planck. The uncertainty, though of very small magnitude, results from the fundamental nature of the particles being observed. In quantum mechanics, probability calculations therefore replace the exact calculations of classical mechanics.
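In its standard quantitative form the principle states that the product of the position and momentum uncertainties is at least ħ/2, where ħ = h/2π. A Python sketch, using an assumed atomic-scale confinement of 10⁻¹⁰ m to show how severe the effect is for an electron:

```python
import math

h = 6.626e-34            # Planck's constant, J*s
hbar = h / (2 * math.pi) # reduced Planck's constant
m_electron = 9.109e-31   # electron mass, kg

# Heisenberg's relation: delta_x * delta_p >= hbar / 2.
# Confine an electron to a region the size of an atom (~1e-10 m, assumed):
delta_x = 1e-10
delta_p_min = hbar / (2 * delta_x)
delta_v_min = delta_p_min / m_electron

print(delta_p_min)   # ~5.3e-25 kg*m/s
print(delta_v_min)   # ~5.8e5 m/s: confinement forces a large velocity spread
```

For everyday objects the same bound is utterly negligible, which is why the uncertainty only matters for particles of very small mass.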
Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle was of great significance in the development of quantum mechanics. Its philosophic implications of indeterminacy created a strong trend of mysticism among scientists who interpreted the concept as a violation of the fundamental law of cause and effect. Other scientists, including Albert Einstein, believed that the uncertainty involved in observation in no way contradicted the existence of laws governing the behaviour of the particles or the ability of scientists to discover these laws.
In summary, science is a systematic study of anything that can be examined, tested, and verified. The word science is derived from the Latin word scire, meaning ‘to know.’ From its beginnings, science has developed into one of the greatest and most influential fields of human endeavour. Today different branches of science investigate almost everything that can be observed or detected, and science as a whole shapes the way we understand the universe, our planet, ourselves, and other living things.
Science develops through objective analysis, instead of through personal belief. Knowledge gained in science accumulates as time goes by, building on work carried out earlier. Some of this knowledge, such as our understanding of numbers, stretches back to the time of ancient civilizations, when scientific thought first began. Other scientific knowledge, such as our understanding of the genes that cause cancer or of quarks (the smallest known building blocks of matter), dates back less than 50 years. However, in all fields of science, old or new, researchers use the same systematic approach, known as the scientific method, to add to what is known.
During scientific investigations, scientists put together and compare new discoveries and existing knowledge. In most cases, new discoveries extend what is currently accepted, providing further evidence that existing ideas are correct. For example, in 1676 the English physicist Robert Hooke discovered that elastic objects, such as metal springs, stretch in proportion to the force that acts on them. Despite all the advances that have been made in physics since 1676, this simple law still holds true.
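Hooke's proportionality can be written as x = F/k, where k is the stiffness of the spring. A minimal Python sketch with a hypothetical stiffness value:

```python
# Hooke's law: extension is proportional to the applied force, x = F / k.
def extension(force_n, stiffness_n_per_m):
    """Extension in metres of a spring with the given stiffness."""
    return force_n / stiffness_n_per_m

k = 200.0                     # hypothetical spring stiffness, N/m
print(extension(1.0, k))      # 0.005 m
print(extension(2.0, k))      # 0.01 m: doubling the force doubles the stretch
```

The doubling behaviour is the proportionality Hooke observed; real springs obey it only up to their elastic limit.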
Scientists utilize existing knowledge in new scientific investigations to predict how things will behave. For example, a scientist who knows the exact dimensions of a lens can predict how the lens will focus a beam of light. In the same way, by knowing the exact makeup and properties of two chemicals, a researcher can predict what will happen when they combine. Sometimes scientific predictions go much further by describing objects or events that are not yet known. An outstanding instance occurred in 1869, when the Russian chemist Dmitry Mendeleyev drew up a periodic table of the elements arranged to illustrate patterns of recurring chemical and physical properties. Mendeleyev used this table to predict the existence and describe the properties of several elements unknown in his day, and when the elements were discovered several years later, his predictions proved to be correct.
In science, important advances can also be made when current ideas are shown to be wrong. A classic case of this occurred early in the 20th century, when the German geologist Alfred Wegener suggested that the continents were at one time connected, a theory known as continental drift. At the time, most geologists discounted Wegener's ideas, because the Earth's crust seemed to be fixed. Nonetheless, following the discovery of plate tectonics in the 1960s, in which scientists found that the Earth’s crust is made of moving plates, continental drift became an important part of geology.
Through advances like these, scientific knowledge is constantly added to and refined. As a result, science gives us an ever more detailed insight into the way the world around us works.
For a large part of recorded history, science had little bearing on people's everyday lives. Scientific knowledge was gathered for its own sake, and it had few practical applications. However, with the dawn of the Industrial Revolution in the 18th century, this rapidly changed. Today, science has a profound effect on the way we live, largely through technology, the use of scientific knowledge for practical purposes.
Some forms of technology have become so well established that forgetting the great scientific achievements that they represent is easy. The refrigerator, for example, owes its existence to a discovery that liquids take in energy when they evaporate, a phenomenon known as latent heat. The principle of latent heat was first exploited in a practical way in 1876, and the refrigerator has played a major role in maintaining public health ever since. The first automobile, dating from the 1880s, made use of many advances in physics and engineering, including reliable ways of generating high-voltage sparks, while the first computers emerged in the 1940s from simultaneous advances in electronics and mathematics.
Other fields of science also play an important role in the things we use or consume every day. Research in food technology has created new ways of preserving and flavouring what we eat. Research in industrial chemistry has created a vast range of plastics and other synthetic materials, which have thousands of uses in the home and in industry. Synthetic materials are easily formed into complex shapes and can be used to make machine, electrical, and automotive parts, scientific and industrial instruments, decorative objects, containers, and many other items.
Alongside these achievements, science has also brought about technology that helps save human life. The kidney dialysis machine enables many people to survive kidney diseases that would once have proved fatal, and artificial valves allow sufferers of coronary heart disease to return to active living. Biochemical research is responsible for the antibiotics and vaccinations that protect us from infectious diseases, and for a wide range of other drugs used to combat specific health problems. As a result, the majority of people on the planet now live longer and healthier lives than ever before.
However, scientific discoveries can also have a negative impact on human affairs. Over the last hundred years, some of the technological advances that make life easier or more enjoyable have proved to have unwanted and often unexpected long-term effects. Industrial and agricultural chemicals pollute the global environment, even in places as remote as Antarctica, and city air is contaminated by toxic gases from vehicle exhausts. The increasing pace of innovation means that products become rapidly obsolete, adding to a rising tide of waste. Most significantly of all, the burning of fossil fuels such as coal, oil, and natural gas releases into the atmosphere carbon dioxide and other substances known as greenhouse gases. These gases have altered the composition of the entire atmosphere, producing global warming and the prospect of major climate change in years to come.
Science has also been used to develop technology that raises complex ethical questions. This is particularly true in the fields of biology and medicine. Research involving genetic engineering, cloning, and in vitro fertilization gives scientists the unprecedented power to bring about new life, or to devise new forms of living things. At the other extreme, science can also generate technology that is deliberately designed to harm or to kill. The fruits of this research include chemical and biological warfare, and nuclear weapons, by far the most destructive weapons that the world has ever known.
Scientific research can be divided into basic science, also known as pure science, and applied science. In basic science, scientists working primarily at academic institutions pursue research simply to satisfy the thirst for knowledge. In applied science, scientists at industrial corporations conduct research to achieve some kind of practical or profitable gain.
In practice, the division between basic and applied science is not always clear-cut. This is because discoveries that initially seem to have no practical use often develop one as time goes by. For example, superconductivity, the ability to conduct electricity with no resistance, was little more than a laboratory curiosity when Dutch physicist Heike Kamerlingh Onnes discovered it in 1911. Today superconducting electromagnets are used in an ever-increasing number of important applications, from diagnostic medical equipment to powerful particle accelerators.
Scientists study the origin of the solar system by analysing meteorites and collecting data from satellites and space probes. They search for the secrets of life processes by observing the activity of individual molecules in living cells. They observe the patterns of human relationships in the customs of aboriginal tribes. In each of these varied investigations the questions asked and the means employed to find answers are different. All the inquiries, however, share a common approach to problem solving known as the scientific method. Scientists may work alone or they may collaborate with other scientists. In all cases, a scientist’s work must measure up to the standards of the scientific community. Scientists submit their findings to science forums, such as science journals and conferences, in order to subject the findings to the scrutiny of their peers.
Whatever the aim of their work, scientists use the same underlying steps to organize their research: (1) they make detailed observations about objects or processes, either as they occur in nature or as they take place during experiments; (2) they collect and analyse the information observed; and (3) they formulate a hypothesis that explains the behaviour of the phenomena observed.
A scientist begins an investigation by observing an object or an activity. Observation typically involves one or more of the human senses: hearing, sight, smell, taste, and touch. Scientists typically use tools to aid in their observations. For example, a microscope helps a scientist view objects too small to be seen with the unaided eye, while a telescope reveals objects too far away to be seen.
Scientists typically apply their observation skills to an experiment. An experiment is any kind of trial that enables scientists to control and change at will the conditions under which events occur. It can be something extremely simple, such as heating a solid to see when it melts, or something highly complex, such as bouncing a radio signal off the surface of a distant planet. Scientists typically repeat experiments, sometimes many times, in order to be sure that the results were not affected by unforeseen factors.
Most experiments involve real objects in the physical world, such as electric circuits, chemical compounds, or living organisms. However, with the rapid progress in electronics, computer simulations can now carry out some experiments instead. If they are carefully constructed, these simulations or models can accurately predict how real objects will behave.
One advantage of a simulation is that it allows experiments to be conducted without any risks. Another is that it can alter the apparent passage of time, speeding up or slowing natural processes. This enables scientists to investigate things that happen very gradually, such as evolution in simple organisms, or ones that happen almost instantaneously, such as collisions or explosions.
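As an illustration of how a simulation compresses time, the following Python sketch steps a simple decay-like process through a thousand simulated years in a fraction of a second of computer time. The decay probability and population size are arbitrary illustrative values, not drawn from any real experiment:

```python
# A toy simulation: a decay process stepped through "simulated years"
# far faster than the real process would unfold.
import random

random.seed(1)   # fixed seed so the run is repeatable

def simulate_decay(n_atoms, decay_prob_per_year, years):
    """Return the number of surviving atoms after the given simulated years."""
    survivors = n_atoms
    for _ in range(years):
        decayed = sum(1 for _ in range(survivors)
                      if random.random() < decay_prob_per_year)
        survivors -= decayed
    return survivors

remaining = simulate_decay(10_000, 0.001, 1000)
print(remaining)   # roughly 37% survive (expected ~ 10000 * 0.999**1000)
```

Because the model runs in compressed time, a process that would take centuries to observe directly can be explored in moments, which is exactly the advantage described above.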
During an experiment, scientists typically make measurements and collect results as they work. This information, known as data, can take many forms. Data may be a set of numbers, such as daily measurements of the temperature in a particular location, or a description of side effects in an animal that has been given an experimental drug. Scientists typically use computers to arrange data in ways that make the information easier to understand and analyse. Data may be arranged into a diagram such as a graph that shows how one quantity (body temperature, for instance) varies in relation to another quantity (days since starting a drug treatment). A scientist flying in a helicopter may collect information about the location of a migrating herd of elephants in Africa during different seasons of a year. The data collected may be in the form of geographic coordinates that can be plotted on a map to provide the position of the elephant herd at any given time during a year.
Scientists use mathematics to analyse the data and help them interpret their results. The types of mathematics used include statistics, which is the analysis of numerical data, and probability, which calculates the likelihood that any particular event will occur.
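For example, a handful of temperature readings can be summarized with Python's standard statistics module (the readings here are hypothetical):

```python
# Summarizing experimental data with basic statistics.
import statistics

# Hypothetical daily temperature readings, degrees Celsius:
readings = [21.2, 20.8, 22.1, 21.5, 20.9, 21.7, 21.3]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)    # sample standard deviation
print(round(mean, 2))     # 21.36
print(round(spread, 2))   # a small spread, under half a degree
```

The mean gives a single representative value, while the standard deviation measures how widely the individual readings scatter around it.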
Once an experiment has been carried out and data collected and analysed, scientists look for whatever pattern their results produce and try to formulate a hypothesis that explains all the facts observed in an experiment. In developing a hypothesis, scientists employ methods of induction to generalize from the experiment’s results to predict future outcomes, and deduction to infer new facts from experimental results.
Formulating a hypothesis may be difficult for scientists because there may not be enough information provided by a single experiment, or the experiment’s conclusion may not fit old theories. Sometimes scientists do not have any prior idea of a hypothesis before they start their investigations, but often scientists start out with a working hypothesis that will be proved or disproved by the results of the experiment. Scientific hypotheses can be useful, just as hunches and intuition can be useful in everyday life. Yet they can also be problematic because they tempt scientists, either deliberately or unconsciously, to favour data that support their ideas. Scientists generally take great care to avoid bias, but it remains an ever-present threat. Throughout the history of science, numerous researchers have fallen into this trap, either in the hope of self-advancement or because they firmly believe their ideas to be true.
If a hypothesis is borne out by repeated experiments, it becomes a theory: an explanation that seems to fit with the facts consistently. The ability to predict new facts or events is a key test of a scientific theory. In the 17th century German astronomer Johannes Kepler proposed three theories concerning the motions of planets. Kepler’s theories of planetary orbits were confirmed when they were used to predict the future paths of the planets. On the other hand, when theories fail to provide suitable predictions, these failures may suggest new experiments and new explanations that may lead to new discoveries. For instance, in 1928 British microbiologist Frederick Griffith discovered that the genes of dead virulent bacteria could transform harmless bacteria into virulent ones. The prevailing theory at the time was that genes were made of proteins. Nevertheless, studies carried out by Canadian-born American bacteriologist Oswald Avery and colleagues in the 1930s repeatedly showed that the transforming gene was active even in bacteria from which protein was removed. The failure to prove that genes were composed of proteins spurred Avery to construct different experiments, and by 1944 Avery and his colleagues had found that genes were composed of deoxyribonucleic acid (DNA), not proteins.
If other scientists do not have access to scientific results, the research may as well not have been put into effect at all. Scientists need to share the results and conclusions of their work so that other scientists can debate the implications of the work and use it to spur new research. Scientists communicate their results with other scientists by publishing them in science journals and by networking with other scientists to discuss findings and debate issues.
In science, publication follows a formal procedure that has set rules of its own. Scientists describe research in a scientific paper, which explains the methods used, the data collected, and the conclusions that can be drawn. In theory, the paper should be detailed enough to enable any other scientist to repeat the research so that the findings can be independently checked.
Scientific papers usually begin with a brief summary, or abstract, that describes the findings that follow. Abstracts enable scientists to consult papers quickly, without having to read them in full. At the end of most papers is a list of citations: bibliographic references that acknowledge earlier work that has been drawn on in the course of the research. Citations enable readers to work backwards through a chain of research advancements to verify that each step is soundly based.
Scientists typically submit their papers to the editorial board of a journal specializing in a particular field of research. Before the paper is accepted for publication, the editorial board sends it out for peer review. During this procedure a panel of experts, or referees, assesses the paper, judging whether or not the research has been carried out in a fully scientific manner. If the referees are satisfied, publication goes ahead. If they have reservations, some of the research may have to be repeated, but if they identify serious flaws, the entire paper may be rejected for publication.
The peer-review process plays a critical role because it ensures high standards of scientific method. However, it can be a contentious area, as it allows subjective views to become involved. Because scientists are human, they cannot avoid developing personal opinions about the value of each other’s work. Furthermore, because referees tend to be senior figures, they may be less than welcoming to new or unorthodox ideas.
Once a paper has been accepted and published, it becomes part of the vast and ever-expanding body of scientific knowledge. In the early days of science, new research was always published in printed form, but today scientific information spreads by many different means. Most major journals are now available via the Internet (a network of linked computers), which makes them quickly accessible to scientists all over the world.
When new research is published, it often acts as a springboard for further work. Its impact can then be gauged by seeing how often the published research appears as a cited work. Major scientific breakthroughs are cited thousands of times a year, but at the other extreme, obscure pieces of research may be cited rarely or not at all. However, citation is not always a reliable guide to the value of scientific work. Sometimes a piece of research will go largely unnoticed, only to be rediscovered in subsequent years. Such was the case for the work on genes done by American geneticist Barbara McClintock during the 1940s. McClintock discovered a new phenomenon in corn cells known as transposable genes, sometimes referred to as jumping genes. McClintock observed that a gene could move from one chromosome to another, where it would break the second chromosome at a particular site, insert itself there, and influence the function of an adjacent gene. Her work was largely ignored until the 1960s when scientists found that transposable genes were a primary means for transferring genetic material in bacteria and more complex organisms. McClintock was awarded the 1983 Nobel Prize in physiology or medicine for her work in transposable genes, more than 35 years after doing the research.
In addition to publications, scientists form associations with other scientists from particular fields. Many scientific organizations arrange conferences that bring together scientists to share new ideas. At these conferences, scientists present research papers and discuss their implications. In addition, science organizations promote the work of their members by publishing newsletters and Web sites; networking with journalists at newspapers, magazines, and television stations to help them understand new findings; and lobbying lawmakers to promote government funding for research.
The oldest surviving science organization is the Accademia dei Lincei, in Italy, which was established in 1603. The same century also saw the inauguration of the Royal Society of London, founded in 1662, and the Académie des Sciences de Paris, founded in 1666. American scientific societies date back to the 18th century, when American scientist and diplomat Benjamin Franklin founded a philosophical club in 1727. In 1743 this organization became the American Philosophical Society, which still exists today.
In the United States, the American Association for the Advancement of Science (AAAS) plays a key role in fostering the public understanding of science and in promoting scientific research. Founded in 1848, it has nearly 300 affiliated organizations, many of which originally developed from AAAS special-interest groups.
Since the late 19th century, communication among scientists has also been improved by international organizations, such as the International Bureau of Weights and Measures, founded in 1875, the International Research Council, founded in 1919, and the World Health Organization, founded in 1948. Other organizations act as international forums for research in particular fields. For example, the Intergovernmental Panel on Climate Change (IPCC), established in 1988, assesses research on how climate change occurs, and what effects change is likely to have on humans and their environment.
Classifying sciences involves arbitrary decisions because the universe is not easily split into separate compartments. This article divides science into five major branches: mathematics, physical sciences, earth sciences, life sciences, and social sciences. A sixth branch, technology, draws on discoveries from all areas of science and puts them to practical use. Each of these branches itself consists of numerous subdivisions. Many of these subdivisions, such as astrophysics or biotechnology, combine overlapping disciplines, creating yet more areas of research. For additional information on individual sciences, refer to separate articles highlighted in the text.
The mathematical sciences investigate the relationships between things that can be measured or quantified in either a real or abstract form. Pure mathematics differs from other sciences because it deals solely with logic, rather than with nature's underlying laws. However, because it can be used to solve so many scientific problems, mathematics is usually considered to be a science itself.
Central to mathematics is arithmetic, the use of numbers for calculation. In arithmetic, mathematicians combine specific numbers to produce a result. A separate branch of mathematics, called algebra, works in a similar way, but uses general expressions that apply to numbers as a whole. For example, if there are three separate items on a restaurant bill, simple arithmetic produces the total amount to be paid. Yet the total can also be calculated by using an algebraic formula. A powerful and flexible tool, algebra enables mathematicians to solve highly complex problems in every branch of science.
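The restaurant-bill contrast above can be sketched in code (Python chosen only for illustration; the three menu prices are invented for the example):

```python
# Arithmetic: combine specific numbers to produce one result.
items = [4.50, 7.25, 3.00]              # three invented menu prices
total_arithmetic = 4.50 + 7.25 + 3.00   # works only for these numbers

# Algebra: a general expression that applies to any bill.
def bill_total(prices, tip_rate=0.0):
    """Return the total for any list of prices, plus an optional tip."""
    subtotal = sum(prices)
    return subtotal * (1 + tip_rate)

total_algebraic = bill_total(items)
assert total_arithmetic == total_algebraic   # both give 14.75
```

The arithmetic line answers one question; the algebraic formula answers every bill of any length, which is the flexibility the paragraph describes.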
Geometry investigates objects and the spaces around them. In its simplest form, it deals with objects in two or three dimensions, such as lines, circles, cubes, and spheres. Geometry can be extended to cover abstractions, including objects in many dimensions. Although we cannot perceive these extra dimensions ourselves, the logic of geometry still holds.
In geometry, working out the exact area of a rectangle or the gradient (slope) of a line is easy, but there are some problems that geometry cannot solve by conventional means. For example, geometry cannot calculate the exact gradient at a point on a curve, or the area that the curve bounds. Scientists find that calculating quantities like these helps them understand physical events, such as the speed of a rocket at any particular moment during its acceleration.
To solve these problems, mathematicians use calculus, which deals with continuously changing quantities, such as the position of a point on a curve. Its simultaneous development in the 17th century by English mathematician and physicist Isaac Newton and German philosopher and mathematician Gottfried Wilhelm Leibniz enabled the solution of many problems that had been insoluble by the methods of arithmetic, algebra, and geometry. Among the advances that calculus helped develop were the determination of Newton’s laws of motion and the theory of electromagnetism.
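The gradient-at-a-point problem that calculus solves can be illustrated numerically (a sketch only; the curve y = x² and the step size are invented for the example):

```python
def slope_at(f, x, h=1e-6):
    """Approximate the gradient of f at x with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

curve = lambda x: x ** 2          # calculus gives the exact gradient: 2x
approx = slope_at(curve, 3.0)     # numerically close to 6.0
assert abs(approx - 6.0) < 1e-6
```

Calculus supplies the exact answer in the limit as h shrinks to zero; the code shows the "continuously changing quantity" being probed at a single point.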
The physical sciences investigate the nature and behaviour of matter and energy on a vast range of size and scale. In physics itself, scientists study the relationships between matter, energy, force, and time in an attempt to explain how these factors shape the physical behaviour of the universe. Physics can be divided into many branches. Scientists study the motion of objects, a huge branch of physics known as mechanics that involves two overlapping sets of scientific laws. The laws of classical mechanics govern the behaviour of objects in the macroscopic world, which includes everything from billiard balls to stars, while the laws of quantum mechanics govern the behaviour of the particles that make up individual atoms.
Other branches of physics focus on energy and its large-scale effects. Thermodynamics is the study of heat and the effects of converting heat into other kinds of energy. This branch of physics has a host of highly practical applications because heat is often used to power machines. Physicists also investigate electrical energy and energy that is carried in electromagnetic waves. These include radio waves, light rays, and X rays-forms of energy that are closely related and that all obey the same set of rules.
Chemistry is the study of the composition of matter and the way different substances interact-subjects that involve physics on an atomic scale. In physical chemistry, chemists study the way physical laws govern chemical change, while in other branches of chemistry the focus is on particular chemicals themselves. For example, inorganic chemistry investigates substances found in the nonliving world and organic chemistry investigates carbon-based substances. Until the 19th century, these two areas of chemistry were thought to be separate and distinct, but today chemists routinely produce organic chemicals from inorganic raw materials. Organic chemists have learned how to synthesize many substances that are found in nature, together with hundreds of thousands that are not, such as plastics and pesticides. Many organic compounds, such as reserpine, a drug used to treat hypertension, cost less to produce by synthesizing from inorganic raw materials than to isolate from natural sources. Many synthetic medicinal compounds can be modified to make them more effective than their natural counterparts, with less harmful side effects.
The branch of chemistry known as biochemistry deals solely with substances found in living things. It investigates the chemical reactions that organisms use to obtain energy and the reactions that they use to build themselves. Increasingly, this field of chemistry has become concerned not simply with chemical reactions themselves but also with how the shape of molecules influences the way they work. The result is the new field of molecular biology-one of the fastest-growing sciences today.
Physical scientists also study matter elsewhere in the universe, including the planets and stars. Astronomy is the science of the heavens, while astrophysics is a branch of astronomy that investigates the physical and chemical nature of stars and other objects. Astronomy deals largely with the universe as it appears today, but a related science called cosmology looks back in time to answer the greatest scientific questions of all: how the universe began and how it came to be as it is today.
The earth sciences examine the structure and composition of our planet, and the physical processes that have helped to shape it. Geology focuses on the structure of Earth, while geography is the study of everything on the planet's surface, including the physical changes that humans have brought about through, for example, farming, mining, or deforestation. Scientists in the field of geomorphology study Earth's present landforms, while mineralogists investigate the minerals in Earth's crust and the way they formed.
Water dominates Earth's surface, making it an important subject for scientific research. Oceanographers carry out research in the oceans, while scientists working in the field of hydrology investigate water resources on land, a subject of vital interest in areas prone to drought. Glaciologists study Earth's icecaps and mountain glaciers, and the effects that ice has when it forms, melts, or moves. In atmospheric science, meteorology deals with day-to-day changes in weather, but climatology investigates changes in weather patterns over the longer term.
When living things die their remains are sometimes preserved, creating a rich store of scientific information. Palaeontology is the study of plant and animal remains that have been preserved in sedimentary rock, often millions of years ago. Palaeontologists study things long dead and their findings shed light on the history of evolution and on the origin and development of humans. A related science, called palynology, is the study of fossilized spores and pollen grains. Scientists study these tiny structures to learn the types of plants that grew in certain areas during Earth’s history, which also helps identify what Earth’s climates were like in the past.
The life sciences include all those areas of study that deal with living things. Biology is the general study of the origin, development, structure, function, evolution, and distribution of living things. Biology may be divided into botany, the study of plants; zoology, the study of animals; and microbiology, the study of microscopic organisms, such as bacteria, viruses, and fungi. Many single-celled organisms play important roles in life processes and thus are important to more complex forms of life, including plants and animals.
Genetics is the branch of biology that studies the way in which characteristics are transmitted from an organism to its offspring. In the latter half of the 20th century, new advances made it easier to study and manipulate genes at the molecular level, enabling scientists to catalogue all the genes found in each cell of the human body. Exobiology, a new and still speculative field, is the study of possible extraterrestrial life. Although Earth remains the only place known to support life, many believe that it is only a matter of time before scientists discover life elsewhere in the universe.
While exobiology is one of the newest life sciences, anatomy is one of the oldest. It is the study of plant and animal structures, carried out by dissection or by using powerful imaging techniques. Gross anatomy deals with structures that are large enough to see, while microscopic anatomy deals with much smaller structures, down to the level of individual cells.
Physiology explores how living things work. Physiologists study processes such as cellular respiration and muscle contraction, as well as the systems that keep these processes under control. Their work helps to answer questions about one of the key characteristics of life-the fact that most living things maintain a steady internal state when the environment around them constantly changes.
Together, anatomy and physiology form two of the most important disciplines in medicine, the science of treating injury and human disease. General medical practitioners have to be familiar with human biology as a whole, but medical science also includes a host of clinical specialties. They include sciences such as cardiology, urology, and oncology, which investigate particular organs and disorders, and pathology, the general study of disease and the changes that it causes in the human body.
As well as working with individual organisms, life scientists also investigate the way living things interact. The study of these interactions, known as ecology, has become a key area of study in the life sciences as scientists become increasingly concerned about the disrupting effects of human activities on the environment.
The social sciences explore human society past and present, and the way human beings behave. They include sociology, which investigates the way society is structured and how it functions, as well as psychology, which is the study of individual behaviour and the mind. Social psychology draws on research in both these fields. It examines the way society influences people's behaviour and attitudes.
Another social science, anthropology, looks at humans as a species and examines all the characteristics that make us what we are. These include not only how people relate to each other but also how they interact with the world around them, both now and in the past. As part of this work, anthropologists often carry out long-term studies of particular groups of people in different parts of the world. This kind of research helps to identify characteristics that all human beings share and those that are the products of local culture, learned and handed on from generation to generation.
The social sciences also include political science, law, and economics, which are products of human society. Although far removed from the world of the physical sciences, all these fields can be studied in a scientific way. Political science and law are uniquely human concepts, but economics has some surprisingly close parallels with ecology. This is because the laws that govern resource use, productivity, and efficiency do not operate only in the human world, with its stock markets and global corporations, but in the nonhuman world as well.
In technology, scientific knowledge is put to practical ends. This knowledge comes chiefly from mathematics and the physical sciences, and it is used in designing machinery, materials, and industrial processes. Overall, this work is known as engineering, a word dating back to the early days of the Industrial Revolution, when an ‘engine’ was any kind of machine.
Engineering has many branches, calling for a wide variety of different skills. For example, aeronautical engineers need expertise in the science of fluid flow, because aeroplanes fly through air, which is a fluid. Using wind tunnels and computer models, aeronautical engineers strive to minimize the air resistance generated by an aeroplane, while at the same time maintaining a sufficient amount of lift. Marine engineers also need detailed knowledge of how fluids behave, particularly when designing submarines that have to withstand extra stresses when they dive deep below the water’s surface. In civil engineering, stress calculations ensure that structures such as dams and office towers will not collapse, particularly if they are in earthquake zones. In computing, engineering takes two forms: hardware design and software design. Hardware design refers to the physical design of computer equipment (hardware). Software design is carried out by programmers who analyse complex operations, reducing them to a series of small steps written in a language recognized by computers.
In recent years, a completely new field of technology has developed from advances in the life sciences. Known as biotechnology, it involves such varied activities as genetic engineering, the manipulation of genetic material of cells or organisms, and cloning, the formation of genetically uniform cells, plants, or animals. Although still in its infancy, many scientists believe that biotechnology will play a major role in many fields, including food production, waste disposal, and medicine.
Science exists because humans have a natural curiosity and an ability to organize and record things. Curiosity is a characteristic shown by many other animals, but organizing and recording knowledge is a skill demonstrated by humans alone.
During prehistoric times, humans recorded information in a rudimentary way. They made paintings on the walls of caves, and they also carved numerical records on bones or stones. They may also have used other ways of recording numerical figures, such as making knots in leather cords, but because these records were perishable, no traces of them remain. However, with the invention of writing about 6,000 years ago, a new and much more flexible system of recording knowledge appeared.
The earliest writers were the people of Mesopotamia, who lived in a part of present-day Iraq. Initially they used a pictographic script, inscribing tallies and lifelike symbols on tablets of clay. With the passage of time, these symbols gradually developed into cuneiform, a much more stylized script composed of wedge-shaped marks.
Because clay is durable, many of these ancient tablets still survive. They show that when writing first appeared, the Mesopotamians already had a basic knowledge of mathematics, astronomy, and chemistry, and that they used symptoms to identify common diseases. During the following 2,000 years, as Mesopotamian culture became increasingly sophisticated, mathematics in particular became a flourishing science. Knowledge accumulated rapidly, and by 1000 BC the earliest private libraries had appeared.
Southwest of Mesopotamia, in the Nile Valley of northeastern Africa, the ancient Egyptians developed their own form of pictographic script, writing on papyrus, or inscribing text in stone. Written records from 1500 BC show that, like the Mesopotamians, the Egyptians had a detailed knowledge of diseases. They were also keen astronomers and skilled mathematicians-a fact demonstrated by the almost perfect symmetry of the pyramids and by other remarkable structures they built.
For the peoples of Mesopotamia and ancient Egypt, knowledge was recorded mainly for practical needs. For example, astronomical observations enabled the development of early calendars, which helped in organizing the farming year. It was in ancient Greece, however, often recognized as the birthplace of Western science, that a new kind of scientific enquiry began. Here, philosophers sought knowledge largely for its own sake.
Thales of Miletus was one of the first Greek philosophers to seek natural causes for natural phenomena. He travelled widely throughout Egypt and the Middle East and became famous for predicting a solar eclipse that occurred in 585 BC. At a time when people regarded eclipses as ominous, inexplicable, and frightening events, his prediction marked the start of rationalism, a belief that the universe can be explained by reason alone. Rationalism remains the hallmark of science to this day.
Thales and his successors speculated about the nature of matter and of Earth itself. Thales himself believed that Earth was a flat disk floating on water, but the followers of Pythagoras, one of ancient Greece's most celebrated mathematicians, believed that Earth was spherical. These followers also thought that Earth moved in a circular orbit-not around the Sun but around a central fire. Although flawed and widely disputed, this bold suggestion marked an important development in scientific thought: the idea that Earth might not, after all, be the centre of the universe. At the other end of the spectrum of scientific thought, the Greek philosopher Leucippus and his student Democritus of Abdera proposed that all matter was made up of indivisible atoms, more than 2,000 years before the idea became a part of modern science.
As well as investigating natural phenomena, ancient Greek philosophers also studied the nature of reasoning. At the two great schools of Greek philosophy in Athens-the Academy, founded by Plato, and the Lyceum, founded by Plato's pupil Aristotle-students learned how to reason in a structured way using logic. The methods taught at these schools included induction, which involves taking particular cases and using them to draw general conclusions, and deduction, the process of correctly inferring new facts from something already known.
In the two centuries that followed Aristotle's death in 322 BC, Greek philosophers made remarkable progress in a number of fields. By comparing the Sun's height above the horizon in two different places, the mathematician, astronomer, and geographer Eratosthenes calculated Earth's circumference, producing a figure accurate to within 1 percent. Another celebrated Greek mathematician, Archimedes, laid the foundations of mechanics. He also pioneered the science of hydrostatics, the study of the behaviour of fluids at rest. In the life sciences, Theophrastus founded the science of botany, providing detailed and vivid descriptions of a wide variety of plant species as well as investigating the germination process in seeds.
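Eratosthenes's method reduces to simple proportion: if the Sun's elevation differs by some angle between two cities a known distance apart, the full 360-degree circle scales accordingly. A sketch with commonly cited rounded figures (the 7.2-degree angle and 800 km separation are modern restatements of his data, not his original units):

```python
# Difference in the Sun's angle between Alexandria and Syene,
# and the overland distance between them (rounded modern values).
angle_deg = 7.2       # about 1/50 of a full circle
distance_km = 800     # roughly equivalent to the ancient 5,000 stadia

# The whole circumference stands to the distance as 360° stands to the angle.
circumference = round(distance_km * 360 / angle_deg)
print(circumference)   # 40000 km, close to the modern value of about 40,075 km
```

The agreement to within 1 percent mentioned above depends on how the ancient stadion is converted, but the proportional reasoning is exactly this.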
By the 1st century BC, Roman power was growing and Greek influence had begun to wane. During this period, the Egyptian geographer and astronomer Ptolemy charted the known planets and stars, putting Earth firmly at the centre of the universe, and Galen, a physician of Greek origin, wrote important works on anatomy and physiology. Although skilled soldiers, lawyers, engineers, and administrators, the Romans had little interest in basic science. As a result, science made little progress in the days of the Roman Empire. In Athens, the Lyceum and Academy were closed down in AD 529, bringing the first flowering of rationalism to an end.
For more than nine centuries, from about ad 500 to 1400, Western Europe made only a minor contribution to scientific thought. European philosophers became preoccupied with alchemy, a secretive and mystical pseudoscience that held out the illusory promise of turning inferior metals into gold. Alchemy did lead to some discoveries, such as sulfuric acid, which was first described in the early 1300s, but elsewhere, particularly in China and the Arab world, much more significant progress in the sciences was made.
Chinese science developed in isolation from Europe, and followed a different pattern. Unlike the Greeks, who prized knowledge as an end in itself, the Chinese excelled at turning scientific discoveries to practical ends. The list of their technological achievements is dazzling: it includes the compass, invented in about AD 270; wood-block printing, developed around 700; and gunpowder and movable type, both invented around the year 1000. The Chinese were also capable mathematicians and excellent astronomers. In mathematics, they calculated the value of pi to within seven decimal places by the year 600, while in astronomy, one of their most celebrated observations was that of the supernova, or stellar explosion, that took place in the Crab Nebula in 1054. China was also the source of the world's oldest portable star map, dating from about 940.
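The Chinese value of pi is usually credited to the mathematician Zu Chongzhi, and the fraction 355/113 is traditionally associated with his work. Its accuracy is easy to check (a sketch; Python's math.pi stands in for the true value):

```python
import math

zu = 355 / 113                 # fraction traditionally credited to Zu Chongzhi
error = abs(zu - math.pi)
assert error < 1e-6            # agrees with pi to six decimal places
print(f"{zu:.7f} vs {math.pi:.7f}")
```

Zu's published bounds, 3.1415926 < pi < 3.1415927, were not improved upon for roughly nine centuries.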
The Islamic world, which in medieval times extended as far west as Spain, also produced many scientific breakthroughs. The Arab mathematician Muhammad al-Khwarizmi introduced Hindu-Arabic numerals to Europe many centuries after they had been devised in southern Asia. Unlike the numerals used by the Romans, Hindu-Arabic numerals include zero, a mathematical device unknown in Europe at the time. The value of Hindu-Arabic numerals depends on their place: in the number 300, for example, the numeral three is worth ten times as much as in 30. Al-Khwarizmi also wrote on algebra (itself derived from the Arabic word al-jabr), and his name survives in the word algorithm, a concept of great importance in modern computing.
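Place value, the key advantage described above, can be sketched directly: each digit contributes its face value times a power of ten (a toy illustration; the helper function is invented for this example):

```python
def place_values(number_str):
    """List what each digit contributes in a Hindu-Arabic positional numeral."""
    n = len(number_str)
    return [int(d) * 10 ** (n - 1 - i) for i, d in enumerate(number_str)]

assert place_values("300") == [300, 0, 0]   # the 3 is worth 300 here...
assert place_values("30") == [30, 0]        # ...ten times its worth in 30
assert sum(place_values("1054")) == 1054    # the digits recombine to the number
```

Roman numerals have no such positional rule (X means ten wherever it stands), which is why arithmetic with them is so cumbersome by comparison.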
In astronomy, Arab observers charted the heavens, giving many of the brightest stars the names we use today, such as Aldebaran, Altair, and Deneb. Arab scientists also explored chemistry, developing methods to manufacture metallic alloys and test the quality and purity of metals. As in mathematics and astronomy, Arab chemists left their mark in some of the names they used-alkali and alchemy, for example, are both words of Arabic origin. Arab scientists also played a part in developing physics. One of the most famous Egyptian physicists, Alhazen, published a book that dealt with the principles of lenses, mirrors, and other devices used in optics. In this work, he rejected the then-popular idea that eyes give out light rays. Instead, he correctly deduced that eyes work when light rays enter the eye from outside.
In Europe, historians often attribute the rebirth of science to a political event—the capture of Constantinople (now İstanbul) by the Turks in 1453. At the time, Constantinople was the capital of the Byzantine Empire and a major seat of learning. Its downfall led to an exodus of Greek scholars to the West. In the period that followed, many scientific works, including those originally from the Arab world, were translated into European languages. Through the invention of the movable type printing press by Johannes Gutenberg around 1450, copies of these texts became widely available.
The Black Death, a recurring outbreak of bubonic plague that began in 1347, disrupted the progress of science in Europe for more than two centuries. Yet in 1543 two books were published that had a profound impact on scientific progress. One was De Corporis Humani Fabrica (On the Structure of the Human Body, 7 volumes, 1543), by the Belgian anatomist Andreas Vesalius. Vesalius studied anatomy in Italy, and his masterpiece, which was illustrated by superb woodcuts, corrected errors and misunderstandings about the body that had persisted since the time of Galen, more than 1,300 years before. Unlike Islamic physicians, whose religion prohibited them from dissecting human cadavers, Vesalius investigated the human body in minute detail. As a result, he set new standards in anatomical science, creating a reference work of unique and lasting value.
The other book of great significance published in 1543 was De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), written by the Polish astronomer Nicolaus Copernicus. In it, Copernicus rejected the idea that Earth was the centre of the universe, as proposed by Ptolemy in the 2nd century AD. Instead, he set out to prove that Earth, together with the other planets, follows orbits around the Sun. Other astronomers opposed Copernicus's ideas, and more ominously, so did the Roman Catholic Church. In the early 1600's, the church placed the book on a list of forbidden works, where it remained for more than two centuries. Despite this ban and despite the book's inaccuracies (for instance, Copernicus believed that Earth's orbit was circular rather than elliptical), De Revolutionibus remained a momentous achievement. It also marked the start of a conflict between science and religion that has dogged Western thought ever since.
In the first decade of the 17th century, the invention of the telescope provided independent evidence to support Copernicus's views. Italian physicist and astronomer Galileo Galilei used the new device to remarkable effect. He became the first person to observe satellites circling Jupiter, the first to make detailed drawings of the surface of the Moon, and the first to see how Venus waxes and wanes as it circles the Sun.
These observations of Venus helped to convince Galileo that Copernicus’s Sun-centred view of the universe had been correct, but he fully understood the danger of supporting such heretical ideas. His Dialogue on the Two Chief World Systems, Ptolemaic and Copernican, published in 1632, was carefully crafted to avoid controversy. Even so, he was summoned before the Inquisition (tribunal established by the pope for judging heretics) the following year and, under threat of torture, forced to recant.
In less contentious areas, European scientists made rapid progress on many fronts in the 17th century. Galileo himself investigated the laws governing falling objects, and discovered that the duration of a pendulum's swing is constant for any given length. He explored the possibility of using this to control a clock, an idea that his son put into practice in 1641. Two years later another Italian, mathematician and physicist Evangelista Torricelli, made the first barometer. In doing so he discovered atmospheric pressure and produced the first artificial vacuum known to science. In 1650 German physicist Otto von Guericke invented the air pump. He is best remembered for carrying out a demonstration of the effects of atmospheric pressure. Von Guericke joined two large, hollow bronze hemispheres, and then pumped out the air within them to form a vacuum. To illustrate the strength of the vacuum, von Guericke showed how two teams of eight horses pulling in opposite directions could not separate the hemispheres. Yet the hemispheres fell apart as soon as air was let in.
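Torricelli's barometer and von Guericke's hemispheres both rest on the same quantity, atmospheric pressure, which for a mercury column follows p = ρgh (a sketch with rounded modern values; the hemisphere radius is an assumption for illustration, and none of these units existed in the 17th century):

```python
import math

rho_mercury = 13_595   # kg/m³, approximate density of mercury
g = 9.81               # m/s², approximate gravitational acceleration
h = 0.760              # m, typical barometer column height at sea level

pressure = rho_mercury * g * h
print(round(pressure))        # about 101,000 Pa, roughly one atmosphere

# Force needed to pull apart evacuated hemispheres of radius r:
r = 0.25                      # m, an assumed radius for illustration
force = pressure * math.pi * r ** 2
print(round(force))           # about 20,000 N, which is why the horses failed
```

The second figure, equivalent to lifting roughly two tonnes, makes the famous demonstration less mysterious: the horses were fighting the weight of the whole atmosphere pressing on the spheres.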
Throughout the 17th century major advances occurred in the life sciences, including the discovery of the circulatory system by the English physician William Harvey and the discovery of microorganisms by the Dutch microscope maker Antoni van Leeuwenhoek. In England, Robert Boyle established modern chemistry as a full-fledged science, while in France, philosopher and scientist René Descartes made numerous discoveries in mathematics, as well as advancing the case for rationalism in scientific research.
Arguably, the century's greatest achievements came in 1665, when the English physicist and mathematician Isaac Newton fled from Cambridge to his rural birthplace in Woolsthorpe to escape an epidemic of the plague. There, in the course of a single year, he made a series of extraordinary breakthroughs, including new theories about the nature of light and gravitation and the development of calculus. Newton is perhaps best known for his proof that the force of gravity extends throughout the universe and that all objects attract each other with a precisely defined and predictable force. Gravity holds the Moon in its orbit around the Earth and is the principal cause of the Earth’s tides. These discoveries revolutionized how people viewed the universe and they marked the birth of modern science.
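Newton's "precisely defined and predictable force" is the inverse-square law, F = G·m₁·m₂/r². A sketch for the Earth-Moon pair (the constants are rounded modern values, far more precise than anything available to Newton):

```python
G = 6.674e-11          # gravitational constant, N·m²/kg² (modern value)
m_earth = 5.972e24     # mass of Earth in kg, approximate
m_moon = 7.35e22       # mass of the Moon in kg, approximate
r = 3.844e8            # mean Earth–Moon distance in metres, approximate

# Newton's law of universal gravitation.
force = G * m_earth * m_moon / r ** 2
print(f"{force:.2e} N")   # on the order of 2e20 newtons
```

It is this single formula that ties the falling apple, the Moon's orbit, and the tides together, which is what made the result so revolutionary.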
Newton’s work demonstrated that nature was governed by basic rules that could be identified using the scientific method. This new approach to nature and discovery liberated 18th-century scientists from passively accepting the wisdom of ancient writings or religious authorities that had never been tested by experiment. In what became known as the Age of Reason, or the Age of Enlightenment, scientists in the 18th century began actively to apply rational thought, careful observation, and experimentation to solve a variety of problems.
Advances in the life sciences saw the gradual erosion of the theory of spontaneous generation, a long-held notion that life could spring from nonliving matter. It also brought the beginning of scientific classification, pioneered by the Swedish naturalist Carolus Linnaeus, who classified close to 12,000 living plants and animals into a systematic arrangement.
By 1700 the first steam engine had been built. Improvements in the telescope enabled German-born British astronomer Sir William Herschel to discover the planet Uranus in 1781. Throughout the 18th century science began to play an increasing role in everyday life. New manufacturing processes revolutionized the way that products were made, heralding the Industrial Revolution. In An Inquiry Into the Nature and Causes of the Wealth of Nations, published in 1776, British economist Adam Smith stressed the advantages of division of labour and advocated the use of machinery to increase production. He urged governments to allow individuals to compete within a free market in order to produce fair prices and maximum social benefit. Smith’s work for the first time gave economics the stature of an independent subject of study and his theories greatly influenced the course of economic thought for more than a century.
With knowledge in all branches of science accumulating rapidly, scientists began to specialize in particular fields. Specialization did not necessarily mean that discoveries became narrower in scope: From the 19th century onward, research began to uncover principles that unite the universe as a whole.
In chemistry, one of these discoveries was a conceptual one: that all matter is made of atoms. Originally debated in ancient Greece, atomic theory was revived in a modern form by the English chemist John Dalton in 1803. Dalton provided clear and convincing chemical proof that such particles exist. He discovered that each atom has a characteristic mass and that atoms remain unchanged when they combine with other atoms to form compound substances. Dalton used atomic theory to explain why substances always combine in fixed proportions-a field of study known as quantitative chemistry. In 1869 Russian chemist Dmitry Mendeleyev used Dalton’s discoveries about atoms and their behaviour to draw up his periodic table of the elements.
Other 19th-century discoveries in chemistry included the world's first synthetic fertilizer, manufactured in England in 1842. In 1846 German chemist Christian Schoenbein accidentally developed the powerful and unstable explosive nitrocellulose. The discovery occurred after he had spilled a mixture of nitric and sulfuric acids and then mopped it up with a cotton apron. After the apron had been hung up to dry, it exploded. He later learned that the cellulose in the cotton apron combined with the acids to form a highly flammable explosive.
In 1828 the German chemist Friedrich Wöhler showed that it was possible to make carbon-containing organic compounds from inorganic ingredients, a breakthrough that opened up an entirely new field of research. By the end of the 19th century, hundreds of organic compounds had been synthesized, including mauve, magenta, and other synthetic dyes, as well as aspirin, still one of the world's most useful drugs.
In physics, the 19th century is remembered chiefly for research into electricity and magnetism, which was pioneered by physicists such as Michael Faraday and James Clerk Maxwell of Great Britain. In 1821 Faraday demonstrated that a moving magnet could set an electric current flowing in a conductor. This experiment and others he performed led to the development of electric motors and generators. While Faraday’s genius lay in discovery by experiment, Maxwell produced theoretical breakthroughs of even greater note. Maxwell's famous equations, devised in 1864, use mathematics to explain the interactions between electric and magnetic fields. His work demonstrated the principles behind electromagnetic waves, created when electric and magnetic fields oscillate simultaneously. Maxwell realized that light was a form of electromagnetic energy, but he also thought that the complete electromagnetic spectrum must include many other forms of waves as well. With the discovery of radio waves by German physicist Heinrich Hertz in 1888 and X rays by German physicist Wilhelm Roentgen in 1895, Maxwell’s ideas were proved correct. In 1897 British physicist Sir Joseph J. Thomson discovered the electron, a subatomic particle with a negative charge. This discovery countered the long-held notion that atoms were the basic unit of matter.
As in chemistry, these 19th-century discoveries in physics proved to have immense practical value. No one was more adept at harnessing them than American physicist and prolific inventor Thomas Edison. Working from his laboratories in Menlo Park, New Jersey, Edison devised the carbon-granule microphone in 1877, which greatly improved the recently invented telephone. He also invented the phonograph, the electric light bulb, several kinds of batteries, and the electric meter. Edison was granted more than 1,000 patents for electrical devices, a phenomenal feat for a man who had no formal schooling.
In the earth sciences, the 19th century was a time of controversy, with scientists debating Earth's age. Estimates ranged from less than 100,000 years to several hundred million years. In astronomy, greatly improved optical instruments enabled important discoveries to be made. The first observation of an asteroid, Ceres, took place in 1801. Astronomers had long noticed that Uranus exhibited an unusual orbit. French astronomer Urbain Jean Joseph Leverrier predicted that another planet nearby caused Uranus’s odd orbit. Using mathematical calculations, he narrowed down where such a planet would be located in the sky. In 1846, with the help of German astronomer Johann Galle, Leverrier discovered Neptune. The Irish astronomer William Parsons, the third Earl of Rosse, became the first person to see the spiral form of galaxies beyond our own solar system. He did this with the Leviathan, a 183-cm. (72-in.) reflecting telescope, built on the grounds of his estate in Parsonstown (now Birr), Ireland, in the 1840s. His observations were hampered by Ireland's damp and cloudy climate, but his gigantic telescope remained the world's largest for more than 70 years.
In the 19th century the study of microorganisms became increasingly important, particularly after French biologist Louis Pasteur revolutionized medicine by correctly deducing that some microorganisms are involved in disease. In the 1880's Pasteur devised methods of immunizing people against diseases by deliberately treating them with weakened forms of the disease-causing organisms themselves. Pasteur’s vaccine against rabies was a milestone in the field of immunization, one of the most effective forms of preventive medicine the world has yet seen. In the area of industrial science, Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods.
Also during the 19th century, the Austrian monk Gregor Mendel laid the foundations of genetics, although his work, published in 1866, was not recognized until after the century had closed. However, the British scientist Charles Darwin towers above all other scientists of the 19th century. His publication of On the Origin of Species in 1859 marked a major turning point for both biology and human thought. His theory of evolution by natural selection (independently and simultaneously developed by British naturalist Alfred Russel Wallace) initiated a violent controversy that still has not subsided. Particularly controversial was Darwin’s theory that humans resulted from a long process of biological evolution from apelike ancestors. The greatest opposition to Darwin’s ideas came from those who believed that the Bible was an exact and literal statement of the origin of the world and of humans. Although the public initially castigated Darwin’s ideas, by the late 1800s most biologists had accepted that evolution occurred, although not all agreed on the mechanism, known as natural selection, that Darwin proposed.
In the 20th century, scientists achieved spectacular advances in the fields of genetics, medicine, social sciences, technology, and physics.
At the beginning of the 20th century, the life sciences entered a period of rapid progress. Mendel's work in genetics was rediscovered in 1900, and by 1910 biologists had become convinced that genes are located in chromosomes, the threadlike structures that contain proteins and deoxyribonucleic acid (DNA). During the 1940s American biochemists discovered that DNA taken from one kind of bacterium could influence the characteristics of another. These experiments demonstrated that DNA is the chemical that makes up genes and is thus the key to heredity.
After American biochemist James Watson and British biophysicist Francis Crick established the structure of DNA in 1953, geneticists became able to understand heredity in chemical terms. Since then, progress in this field has been astounding. Scientists have identified the complete genome, or genetic catalogue, of the human body. In many cases, scientists now know how individual genes become activated and what effects they have in the human body. Genes can now be transferred from one species to another, sidestepping the normal processes of heredity and creating hybrid organisms that are unknown in the natural world.
At the turn of the 20th century, Dutch physician Christiaan Eijkman showed that disease can be caused not only by microorganisms but by a dietary deficiency of certain substances now called vitamins. In 1909 German bacteriologist Paul Ehrlich introduced the world's first bactericide, a chemical designed to kill specific kinds of bacteria without killing the patient's cells as well. Following the discovery of penicillin in 1928 by British bacteriologist Sir Alexander Fleming, antibiotics joined medicine’s chemical armoury, making the fight against bacterial infection almost a routine matter. Antibiotics cannot act against viruses, but vaccines have been used to great effect to prevent some of the deadliest viral diseases. Smallpox, once a worldwide killer, was completely eradicated by the late 1970's, and in the United States the number of polio cases dropped from 38,000 in the 1950's to fewer than 10 a year by the 21st century.
By the middle of the 20th century scientists believed they were well on the way to treating, preventing, or eradicating many of the most deadly infectious diseases that had plagued humankind for centuries. By the 1980s, however, the medical community’s confidence in its ability to control infectious diseases had been shaken by the emergence of new types of disease-causing microorganisms. New cases of tuberculosis developed, caused by bacterial strains that were resistant to antibiotics. New, deadly infections for which there was no known cure also appeared, including the viruses that cause hemorrhagic fever and the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome.
In other fields of medicine, the diagnosis of disease has been revolutionized by the use of new imaging techniques, including magnetic resonance imaging and computed tomography. Scientists were also on the verge of success in curing some diseases using gene therapy, in which the insertion of normal or genetically altered genes into a patient’s cells replaces nonfunctional or missing genes.
Improved drugs and new tools have made surgical operations that were once considered impossible now routine. For instance, drugs that suppress the immune system enable the transplant of organs or tissues with a reduced risk of rejection. Endoscopy permits the diagnosis and surgical treatment of a wide variety of ailments using minimally invasive surgery. Advances in high-speed fibreoptic connections permit surgery on a patient using robotic instruments controlled by surgeons at another location. Known as telemedicine, this form of medicine makes it possible for skilled physicians to treat patients in remote locations or places that lack medical help.
In the 20th century the social sciences emerged from relative obscurity to become prominent fields of research. Austrian physician Sigmund Freud founded the practice of psychoanalysis, creating a revolution in psychology that led him to be called the ‘Copernicus of the mind.’ In 1948 the American biologist Alfred Kinsey published Sexual Behaviour in the Human Male, which proved to be one of the best-selling scientific works of all time. Although criticized for his methodology and conclusions, Kinsey succeeded in making human sexuality an acceptable subject for scientific research.
The 20th century also brought dramatic discoveries in the field of anthropology, with new fossil finds helping to piece together the story of human evolution. A completely new and surprising source of anthropological information became available from studies of the DNA in mitochondria, cell structures that provide energy to fuel the cell’s activities. Mitochondrial DNA has been used to track certain genetic diseases and to trace the ancestry of a variety of organisms, including humans.
In the field of communications, Italian electrical engineer Guglielmo Marconi sent his first radio signal across the Atlantic Ocean in 1901. American inventor Lee De Forest invented the triode, or vacuum tube, in 1906. The triode eventually became a key component in nearly all early radio, radar, television, and computer systems. In the mid-1920s Scottish engineer John Logie Baird developed the Baird Televisor, a primitive television that provided the first transmission of a recognizable moving image. In the 1920's and 1930's American electronic engineer Vladimir Kosma Zworykin significantly improved the television’s picture and reception. In 1935 British physicist Sir Robert Watson-Watt used reflected radio waves to locate aircraft in flight. Radar signals have since been reflected from the Moon, planets, and stars to learn their distance from Earth and to track their movements.
In 1947 American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, an electronic device used to control or amplify an electrical current. Transistors are much smaller and far less expensive than triodes, require less power to operate, and are considerably more reliable. Since their first commercial use in hearing aids in 1952, transistors have replaced triodes in virtually all applications.
During the 1950's and early 1960's minicomputers were developed using transistors rather than triodes. Earlier computers, such as the electronic numerical integrator and computer (ENIAC), first introduced in 1946 by American physicist John W. Mauchly and American electrical engineer John Presper Eckert, Jr., used as many as 18,000 triodes and filled a large room. However, the transistor initiated a trend toward microminiaturization, in which individual electronic circuits can be reduced to microscopic size. This drastically reduced the computer's size, cost, and power requirements and eventually enabled the development of electronic circuits with processing speeds measured in billionths of a second.
Further miniaturization led in 1971 to the first microprocessor-a computer on a chip. When combined with other specialized chips, the microprocessor becomes the central arithmetic and logic unit of a computer smaller than a portable typewriter. With their small size and a price less than that of a used car, today’s personal computers are many times more powerful than the physically huge, multimillion-dollar computers of the 1950’s. Once used only by large businesses, computers are now used by professionals, small retailers, and students to perform a wide variety of everyday tasks, such as keeping data on clients, tracking budgets, and writing school reports. People also use computers to connect to worldwide communications networks, such as the Internet and the World Wide Web, to send and receive e-mail, to shop, or to find information on just about any subject.
During the early 1950's public interest in space exploration developed. The focal event that opened the space age was the International Geophysical Year from July 1957 to December 1958, during which hundreds of scientists around the world coordinated their efforts to measure the Earth’s near-space environment. As part of this study, both the United States and the Soviet Union announced that they would launch artificial satellites into orbit for nonmilitary space activities.
When the Soviet Union launched the first Sputnik satellite in 1957, the feat spurred the United States to intensify its own space exploration efforts. In 1958 the National Aeronautics and Space Administration (NASA) was founded for the purpose of developing human spaceflight. Throughout the 1960s NASA experienced its greatest growth. Among its achievements, NASA designed, manufactured, tested, and eventually used the Saturn rocket and the Apollo spacecraft for the first manned landing on the Moon in 1969. In the 1960s and 1970's, NASA also developed the first robotic space probes to explore the planets Mercury, Venus, and Mars. The success of the Mariner probes paved the way for the unmanned exploration of the outer planets in Earth’s solar system.
In the 1970's through 1990's, NASA focussed its space exploration efforts on a reusable space shuttle, which was first deployed in 1981. In 1998 the space shuttle, along with its Russian counterpart known as Soyuz, became the workhorses that enabled the construction of the International Space Station.
Unlike the laws of classical physics, quantum theory deals with events that occur on the smallest of scales. Quantum theory explains how subatomic particles form atoms, and how atoms interact when they combine to form chemical compounds. Quantum theory deals with a world where the attributes of any single particle can never be completely known-an idea known as the uncertainty principle, put forward by the German physicist Werner Heisenberg in 1927. Nevertheless, while there is uncertainty on the subatomic level, quantum physics successfully predicts the overall outcome of subatomic events, a fact that firmly relates it to the macroscopic world-that is, the one in which we live.
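Heisenberg's principle can be stated quantitatively: the spreads in position and momentum satisfy Δx·Δp ≥ ħ/2, where ħ is Planck's constant divided by 2π. A minimal numerical sketch, using the value of h given earlier; the confinement length chosen below (roughly an atomic diameter) is an illustrative assumption, not a figure from the text:

```python
import math

# Illustrative check of the Heisenberg relation: delta_x * delta_p >= hbar / 2.

H = 6.626e-34              # Planck's constant, joule-seconds
HBAR = H / (2 * math.pi)   # reduced Planck's constant

def min_momentum_spread(delta_x):
    """Smallest momentum spread (kg*m/s) allowed for a position spread delta_x (m)."""
    return HBAR / (2 * delta_x)

# An electron confined to about one atomic diameter (~1e-10 m, assumed):
dp = min_momentum_spread(1e-10)
```

Even this rough figure shows why confinement on atomic scales forces particles to carry appreciable momentum.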
In 1934 Italian-born American physicist Enrico Fermi began a series of experiments in which he used neutrons (subatomic particles without an electric charge) to bombard atoms of various elements, including uranium. The neutrons combined with the nuclei of the uranium atoms to produce what he thought were elements heavier than uranium, known as transuranium elements. In 1939 other scientists demonstrated that in these experiments Fermi had not formed heavier elements, but instead had achieved the splitting, or fission, of the uranium atom's nucleus. These early experiments led to the development of fission as both an energy source and a weapon.
These fission studies, coupled with the development of particle accelerators in the 1950's, initiated a long and remarkable journey into the nature of subatomic particles that continues today. Far from being indivisible, scientists now know that atoms are made up of 12 fundamental particles known as quarks and leptons, which combine in different ways to make all the kinds of matter currently known.
Advances in particle physics have been closely linked to progress in cosmology. From the 1920's onward, when the American astronomer Edwin Hubble showed that the universe is expanding, cosmologists have sought to rewind the clock and establish how the universe began. Today, most scientists believe that the universe started with a cosmic explosion some time between 10 and 20 billion years ago. However, the exact sequence of events surrounding its birth, and its ultimate fate, are still matters of ongoing debate.
Particle Accelerators, in physics, are the devices used to accelerate charged elementary particles or ions to high energies. Particle accelerators today are some of the largest and most expensive instruments used by physicists. They all have the same three basic parts: a source of elementary particles or ions, a tube pumped to a partial vacuum in which the particles can travel freely, and some means of speeding up the particles.
Charged particles can be accelerated by an electrostatic field. For example, by placing electrodes with a large potential difference at each end of an evacuated tube, British scientists John D. Cockcroft and Ernest Thomas Sinton Walton were able to accelerate protons to 250,000 eV. Another electrostatic accelerator is the Van de Graaff accelerator, which was developed in the early 1930's by the American physicist Robert Jemison Van de Graaff. This accelerator uses the same principles as the Van de Graaff Generator. The Van de Graaff accelerator builds up a potential between two electrodes by transporting charges on a moving belt. Modern Van de Graaff accelerators can accelerate particles to energies as high as 15 MeV (15 million electron volts).
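The energy figures above follow from the basic relation E = qV: a particle of charge q accelerated through a potential difference V gains energy qV, and one electron volt is the energy an elementary charge gains in crossing one volt. A small sketch with standard assumed constant values:

```python
# Energy gained by a charge accelerated through a potential difference: E = qV.

E_CHARGE = 1.602e-19   # elementary charge, coulombs (assumed standard value)

def energy_joules(charge_coulombs, volts):
    return charge_coulombs * volts

# A proton crossing the roughly 250,000 V of Cockcroft and Walton's apparatus:
e_j = energy_joules(E_CHARGE, 250_000)   # joules
e_ev = e_j / E_CHARGE                    # the same energy in electron volts
```

The tiny size of the joule figure is why particle physicists work in electron volts.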
Another machine, first conceived in the late 1920's, is the linear accelerator, or linac, which uses alternating voltages of high magnitude to push particles along in a straight line. Particles pass through a line of hollow metal tubes enclosed in an evacuated cylinder. An alternating voltage is timed so that a particle is pushed forward each time it goes through a gap between two of the metal tubes. Theoretically, a linac of any energy can be built. The largest linac in the world, at Stanford University, is 3.2 km. (2 mi.) long. It is capable of accelerating electrons to an energy of 50 GeV (50 billion, or giga, electron volts). Stanford's linac is designed to collide two beams of particles accelerated on different tracks of the accelerator.
The American physicist Ernest O. Lawrence won the 1939 Nobel Prize in physics for a breakthrough in accelerator design in the early 1930's. He developed the cyclotron, the first circular accelerator. A cyclotron is to some extent like a linac wrapped into a tight spiral. Instead of many tubes, the machine has only two hollow vacuum chambers, called dees, that are shaped like the capital letter D placed back to back. A magnetic field, produced by a powerful electromagnet, keeps the particles moving in a circle. Each time the charged particles pass through the gap between the dees, they are accelerated. As the particles gain energy, they spiral out toward the edge of the accelerator until they gain enough energy to exit the accelerator. The world's most powerful cyclotron, the K1200, began operating in 1988 at the National Superconducting Cyclotron Laboratory at Michigan State University. The machine is capable of accelerating nuclei to an energy approaching 8 GeV.
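The cyclotron works because, non-relativistically, the orbital frequency f = qB/(2πm) does not depend on the particle's speed, so a fixed-frequency voltage across the dees stays in step as the particles spiral outward. A sketch under that assumption; the field strength chosen is illustrative:

```python
import math

# Non-relativistic cyclotron relation: a particle of charge q and mass m in a
# magnetic field B orbits at f = q*B / (2*pi*m), independent of its speed.

Q_PROTON = 1.602e-19   # coulombs (assumed standard value)
M_PROTON = 1.673e-27   # kilograms (assumed standard value)

def cyclotron_frequency(q, m, b_tesla):
    return q * b_tesla / (2 * math.pi * m)

# Protons in an assumed 1.5 tesla field:
f = cyclotron_frequency(Q_PROTON, M_PROTON, 1.5)   # hertz, in the tens of MHz
```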
When nuclear particles in a cyclotron gain an energy of 20 MeV or more, they become appreciably more massive, as predicted by the theory of relativity. This tends to slow them and to throw the acceleration pulses at the gaps between the dees out of phase. A solution to this problem was suggested in 1945 by the Soviet physicist Vladimir I. Veksler and the American physicist Edwin M. McMillan. The solution, the synchrocyclotron, is sometimes called the frequency-modulated cyclotron. In this instrument, the oscillator (radio-frequency generator) that accelerates the particles around the dees is automatically adjusted to stay in step with the accelerated particles; as the particles gain mass, the frequency of accelerations is lowered slightly to keep in step with them. As the maximum energy of a synchrocyclotron increases, so must its size, for the particles must have more space in which to spiral. The largest synchrocyclotron is the 600-cm. (236-in.) phasotron at the Dubna Joint Institute for Nuclear Research in Russia; it accelerates protons to more than 700 MeV and has magnets weighing 6984 metric tons (7200 tons).
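The mass increase the synchrocyclotron compensates for can be estimated from the relativistic factor γ = 1 + T/(mc²), where T is the kinetic energy and mc² the rest energy. A sketch using an assumed proton rest energy of about 938 MeV:

```python
# Relativistic factor gamma = 1 + T/(m*c^2) for a particle of kinetic energy T.

PROTON_REST_MEV = 938.3   # proton rest energy in MeV (assumed standard value)

def gamma_from_kinetic(t_mev, rest_mev=PROTON_REST_MEV):
    return 1.0 + t_mev / rest_mev

# The 20 MeV threshold mentioned in the text:
g = gamma_from_kinetic(20.0)   # about 1.02: a roughly 2 percent mass increase
```

A 2 percent mass increase is already enough to drift the orbits out of phase with a fixed-frequency accelerating voltage.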
When electrons are accelerated, they undergo a large increase in mass at a low energy. At 1 MeV energy, an electron weighs two and one-half times as much as an electron at rest. Synchrocyclotrons cannot be adapted to make allowance for such large increases in mass. Therefore, another type of cyclic accelerator, the betatron, is employed to accelerate electrons. The betatron consists of a doughnut-shaped evacuated chamber placed between the poles of an electromagnet. The electrons are kept in a circular path by a magnetic field called a guide field. By applying an alternating current to the electromagnet, the electromotive force induced by the changing magnetic flux through the circular orbit accelerates the electrons. During operation, both the guide field and the magnetic flux are varied to keep the radius of the orbit of the electrons constant.
The synchrotron is the most recent and most powerful member of the accelerator family. A synchrotron consists of a tube in the shape of a large ring through which the particles travel; the tube is surrounded by magnets that keep the particles moving through the centre of the tube. The particles enter the tube after having already been accelerated to several million electron volts. Particles are accelerated at one or more points on the ring each time the particles make a complete circle around the accelerator. To keep the particles in a rigid orbit, the strengths of the magnets in the ring are increased as the particles gain energy. In a few seconds, the particles reach energies greater than 1 GeV and are ejected, either directly into experiments or toward targets that produce a variety of elementary particles when struck by the accelerated particles. The synchrotron principle can be applied to either protons or electrons, although most of the large machines are proton-synchrotrons.
The first accelerator to exceed the 1 GeV mark was the cosmotron, a proton-synchrotron at Brookhaven National Laboratory, in Brookhaven, New York. The cosmotron was operated at 2.3 GeV in 1952 and later increased to 3 GeV. In the mid-1960's, two operating synchrotrons were regularly accelerating protons to energies of about 30 GeV. These were the Alternating Gradient Synchrotron at Brookhaven National Laboratory, and a similar machine near Geneva, Switzerland, operated by CERN (also known as the European Organization for Nuclear Research). By the early 1980s, the two largest proton-synchrotrons were a 500-GeV device at CERN and a similar one at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois. The capacity of the latter, called Tevatron, was increased to a potential 1 TeV (trillion, or tera, eV) in 1983 by installing superconducting magnets, making it the most powerful accelerator in the world. In 1989, CERN began operating the Large Electron-Positron Collider (LEP), a 27-km. (16.7-mi.) ring that can accelerate electrons and positrons to an energy of 50 GeV.
A storage ring collider accelerator is a synchrotron that produces more energetic collisions between particles than a conventional synchrotron, which slams accelerated particles into a stationary target. A storage ring collider accelerates two sets of particles that rotate in opposite directions in the ring, then collides the two sets of particles. CERN's Large Electron-Positron Collider is a storage ring collider. In 1987, Fermilab converted the Tevatron into a storage ring collider and installed a three-story-high detector that observed and measured the products of the head-on particle collisions.
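The collider's advantage can be made precise: two equal beams of energy E colliding head-on make the full 2E available in the centre of mass, while a beam of energy E striking a stationary target of rest energy m yields only about √(2Em) when E is much larger than m. A sketch with assumed round numbers:

```python
import math

# Centre-of-mass energy available for creating new particles, in GeV.

M_PROTON_GEV = 0.938   # proton rest energy in GeV (assumed standard value)

def cm_energy_collider(beam_gev):
    """Two equal beams colliding head-on: all of 2E is available."""
    return 2.0 * beam_gev

def cm_energy_fixed_target(beam_gev, target_gev=M_PROTON_GEV):
    """Beam on a stationary target: only ~sqrt(2*E*m) is available (E >> m)."""
    return math.sqrt(2.0 * beam_gev * target_gev)

# Tevatron-scale 1000 GeV (1 TeV) proton beams:
collider = cm_energy_collider(1000.0)
fixed = cm_energy_fixed_target(1000.0)   # far smaller for the same beam energy
```

Most of a fixed-target beam's energy goes into the motion of the debris rather than into creating new particles, which is why colliders dominate high-energy physics.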
As powerful as today's storage ring colliders are, physicists need even more powerful devices to test today's theories. Unfortunately, building larger rings is extremely expensive. CERN is considering building the Large Hadron Collider (LHC) in the existing 27-km. (16.7-mi.) tunnel that currently houses the Large Electron-Positron Collider. In 1988, the United States began planning for the construction of the Superconducting Super Collider (SSC) near Waxahachie, Texas. The SSC was to be an enormous storage ring collider accelerator 87 km. (54 mi.) long. However, after about one-fifth of the tunnel had been completed, the Congress of the United States voted to cancel the project in October 1993, as a result of the accelerator's projected cost of more than $10 billion.
Accelerators are used to explore atomic nuclei, thereby allowing nuclear scientists to identify new elements and to explain phenomena that affect the entire nucleus. Machines exceeding 1 GeV are used to study the fundamental particles that compose the nucleus. Several hundred of these particles have been identified. High-energy physicists hope to discover rules or principles that will permit an orderly arrangement of the profusion of subnuclear particles. Such an arrangement would be as useful to nuclear science as the periodic table of the chemical elements is to chemistry. Fermilab's accelerator and collider detector permit scientists to study violent particle collisions that mimic the state of the universe when it was just microseconds old. Continued study of their findings should increase scientific understanding of the makeup of the universe.
Particle Detectors are instruments used to detect and study fundamental nuclear particles. These detectors range in complexity from the well-known portable Geiger counter to room-sized spark and bubble chambers.
One of the first detectors to be used in nuclear physics was the ionization chamber, which consists essentially of a closed vessel containing a gas and equipped with two electrodes at different electrical potentials. The electrodes, depending on the type of instrument, may consist of parallel plates or coaxial cylinders, or the walls of the chamber may act as one electrode and a wire or rod inside the chamber act as the other. When ionizing particles of radiation enter the chamber they ionize the gas between the electrodes. The ions that are thus produced migrate to the electrodes of opposite sign (negatively charged ions move toward the positive electrode, and vice versa), creating a current that may be amplified and measured directly with an electrometer-an electroscope equipped with a scale-or amplified and recorded by means of electronic circuits.
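The size of an ionization-chamber signal can be estimated by dividing the energy deposited in the gas by the mean energy needed to create one ion pair (roughly 34 eV in air, an assumed textbook figure). A sketch with illustrative particle numbers:

```python
# Rough ionization-chamber signal estimate: ion pairs = deposited energy / W,
# and a steady particle flux gives a current of (pairs per second) * e.

E_CHARGE = 1.602e-19   # elementary charge, coulombs (assumed standard value)
W_AIR_EV = 34.0        # mean energy per ion pair in air, eV (assumed figure)

def ion_pairs(deposited_ev):
    return deposited_ev / W_AIR_EV

def chamber_current(deposited_ev_per_second):
    return ion_pairs(deposited_ev_per_second) * E_CHARGE

# 1000 alpha particles per second, each depositing about 5 MeV (assumed):
i = chamber_current(1000 * 5e6)   # amperes, of order 1e-11
```

Currents this small are why the text notes that the signal must be amplified before it can be recorded.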
Ionization chambers adapted to detect individual ionizing particles of radiation are called counters. The Geiger-Müller counter is one of the most versatile and widely used instruments of this type. It was developed by the German physicist Hans Geiger from an instrument first devised by Geiger and the British physicist Ernest Rutherford; it was improved in 1928 by Geiger and by the German American physicist Walther Müller. The counting tube is filled with a gas or a mixture of gases at low pressure, the electrodes being the thin metal wall of the tube and a fine wire, usually made of tungsten, stretched lengthwise along the axis of the tube. A strong electric field maintained between the electrodes accelerates the ions; these then collide with atoms of the gas, detaching electrons and thus producing more ions. When the voltage is raised sufficiently, the rapidly increasing current produced by a single particle sets off a discharge throughout the counter. The pulse caused by each particle is amplified electronically and then actuates a loudspeaker or a mechanical or electronic counting device.
Detectors that enable researchers to observe the tracks that particles leave behind are called track detectors. Spark and bubble chambers are track detectors, as are the cloud chamber and nuclear emulsions. Nuclear emulsions resemble photographic emulsions but are thicker and not as sensitive to light. A charged particle passing through the emulsion ionizes silver grains along its track. These grains become black when the emulsion is developed and can be studied with a microscope.
The fundamental principle of the cloud chamber was discovered by the British physicist C. T. R. Wilson in 1896, although an actual instrument was not constructed until 1911. The cloud chamber consists of a vessel several centimetres or more in diameter, with a glass window on one side and a movable piston on the other. The piston can be dropped rapidly to expand the volume of the chamber. The chamber is usually filled with dust-free air saturated with water vapour. Dropping the piston causes the gas to expand rapidly and causes its temperature to fall. The air is now supersaturated with water vapour, but the excess vapour cannot condense unless ions are present. Charged nuclear or atomic particles produce such ions, and any such particles passing through the chamber leave behind them a trail of ionized particles upon which the excess water vapour will condense, thus making visible the course of the charged particle. These tracks can be photographed and the photographs then analysed to provide information on the characteristics of the particles.
Because the paths of electrically charged particles are bent or deflected by a magnetic field, and the amount of deflection depends on the energy of the particle, a cloud chamber is often operated within a magnetic field. The tracks of negatively and positively charged particles will curve in opposite directions. By measuring the radius of curvature of each track, its velocity can be determined. Heavy nuclei such as alpha particles form thick and dense tracks, protons form tracks of medium thickness, and electrons form thin and irregular tracks. In a later refinement of Wilson's design, called a diffusion cloud chamber, a permanent layer of supersaturated vapour is formed between warm and cold regions. The layer of supersaturated vapour is continuously sensitive to the passage of particles, and the diffusion cloud chamber does not require the expansion of a piston for its operation. Although the cloud chamber has now been supplanted almost entirely by the bubble chamber and the spark chamber, it was used in making many important discoveries in nuclear physics.
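The relation behind this measurement is r = p/(qB): the radius of curvature fixes the momentum once the field strength is known. In convenient units this is approximately p [GeV/c] ≈ 0.3·B [tesla]·r [metres] for a singly charged particle. A sketch with assumed numbers:

```python
# Momentum from track curvature for a singly charged particle:
# p [GeV/c] ~= 0.3 * B [tesla] * r [metres].

def momentum_gev(b_tesla, radius_m):
    return 0.3 * b_tesla * radius_m

# A track curving with 1 m radius in an assumed 1.5 T chamber field:
p = momentum_gev(1.5, 1.0)   # about 0.45 GeV/c
```

Gentler curves mean higher momentum, which is why fast particles leave nearly straight tracks.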
The bubble chamber, invented in 1952 by the American physicist Donald Glaser, is similar in operation to the cloud chamber. In a bubble chamber a liquid is momentarily superheated to a temperature just above its boiling point. For an instant the liquid will not boil unless some impurity or disturbance is introduced. High-energy particles provide such a disturbance. Tiny bubbles form along the tracks as these particles pass through the liquid. If a photograph is taken just after the particles have crossed the chamber, these bubbles will make visible the paths of the particles. As with the cloud chamber, a bubble chamber placed between the poles of a magnet can be used to measure the energies of the particles. Many bubble chambers are equipped with superconducting magnets instead of conventional magnets. Bubble chambers filled with liquid hydrogen allow the study of interactions between the accelerated particles and the hydrogen nuclei.
In a spark chamber, incoming high-energy particles ionize the air or a gas between plates and wire grids that are kept alternately positively and negatively charged. Sparks jump along the paths of ionization and can be photographed to show particle tracks. In some spark-chamber installations, information on particle tracks is fed directly into electronic computer circuits without the necessity of photography. A spark chamber can be operated quickly and selectively. The instrument can be set to record particle tracks only when a particle of the type that the researchers want to study is produced in a nuclear reaction. This advantage is important in studies of the rarer particles; spark-chamber pictures, however, lack the resolution and detail of bubble-chamber pictures.
The scintillation counter exploits the fact that charged particles moving at high speed through certain transparent solids and liquids, known as scintillating materials, produce ionization that causes flashes of visible light. The gases argon, krypton, and xenon produce ultraviolet light and hence are also used in scintillation counters. A primitive scintillation device, known as the spinthariscope, was invented in the early 1900s and was of considerable importance in the development of nuclear physics. The spinthariscope required, however, the counting of the scintillations by eye. Because of the uncertainties of this method, physicists turned to other detectors, including the Geiger-Müller counter. The scintillation method was revived in 1947 by placing the scintillating material in front of a photomultiplier tube, a type of photoelectric cell. The light flashes are converted into electrical pulses that can be amplified and recorded electronically.
Various organic and inorganic substances such as plastic, zinc sulfide, sodium iodide, and anthracene are used as scintillating materials. Certain substances react more favourably to specific types of radiation than others, making possible highly diversified instruments. The scintillation counter is superior to all other radiation-detecting devices in a number of fields of current research. It has replaced the Geiger-Müller counter in the detection of biological tracers and as a surveying instrument in prospecting for radioactive ores. It is also used in nuclear research, notably in the investigation of such particles as the antiproton, the meson, and the neutrino. One such counter, the Crystal Ball, has been in use since 1979 for advanced particle research, first at the Stanford Linear Accelerator Centre and, since 1982, at the German Electron Synchrotron Laboratory (DESY) in Hamburg, Germany. The Crystal Ball is a hollow crystal sphere, about 2.1 m. (7 ft.) wide, that is surrounded by 730 sodium iodide crystals.
Many other types of interactions between matter and elementary particles are used in detectors. Thus in semiconductor detectors, electron-hole pairs that elementary particles produce in a semiconductor junction momentarily increase the electric conduction across the junction. The Cherenkov detector, on the other hand, makes use of the effect discovered by the Russian physicist Pavel Alekseyevich Cherenkov in 1934: a particle emits light when it passes through a nonconducting medium at a velocity higher than the velocity of light in that medium (the velocity of light in glass, for example, is lower than the velocity of light in vacuum). In Cherenkov detectors, materials such as glass, plastic, water, or carbon dioxide serve as the medium in which the light flashes are produced. As in scintillation counters, the light flashes are detected with photomultiplier tubes.
Neutral particles such as neutrons or neutrinos can be detected by nuclear reactions that occur when they collide with nuclei of certain atoms. Slow neutrons produce easily detectable alpha particles when they collide with boron nuclei in boron trifluoride. Neutrinos, which barely interact with matter, are detected in huge tanks containing perchloroethylene (C2Cl4, a dry-cleaning fluid). The neutrinos that collide with chlorine nuclei produce radioactive argon nuclei. The perchloroethylene tank is flushed at regular intervals, and the newly formed argon atoms, present in minute amounts, are counted. This type of neutrino detector, placed deep underground to shield against cosmic radiation, is currently used to measure the neutrino flux from the sun. Neutrino detectors may also take the form of scintillation counters, the tank in this case being filled with an organic liquid that emits light flashes when traversed by electrically charged particles produced by the interaction of neutrinos with the liquid's molecules.
The detectors now being developed for use with the storage rings and colliding particle beams of the most recent generation of accelerators are bubble-chamber types known as time-projection chambers. They can measure three-dimensionally the tracks produced by particles from colliding beams, with supplementary detectors to record other particles resulting from the high-power collisions. The Fermi National Accelerator Laboratory's CDF (Collision Detector Fermilab) is used with its colliding-beam accelerator to study head-on particle collisions. CDF's three different systems can capture or account for nearly all of the sub-nuclear fragments released in such violent collisions.
High-energy particle physicists are using particle accelerators measuring 8 km. (5 mi.) across to study something billions of times too small to see. Why? To find out what everything is made of and where it comes from. These physicists are constructing and testing new theories about objects called superstrings. Superstrings may explain the nature of space and time and of everything in them, from the light you are using to read these words to black holes so dense that they can capture light forever. Possibly the smallest objects allowed by the laws of physics, superstrings may tell us about the largest event of all time: the big bang, and the creation of the universe!
These are exciting ideas, still strange to most people. For the past 100 years physicists have descended to deeper and deeper levels of structure, into the heart of matter and energy and of existence itself. Read on to follow their progress.
The world around us, full of books, computers, mountains, lakes, and people, is made by rearranging more than 100 chemical elements. Oxygen, hydrogen, carbon, and nitrogen are elements especially important to living things; silicon is especially important to computer chips.
The smallest recognizable form in which a chemical element occurs is the atom, and the atoms of one element are unlike the atoms of any other element. Every atom has a small core called a nucleus around which electrons swarm. Electrons, tiny particles with a negative electrical charge, determine the chemical properties of an element-that is, how it interacts with other atoms to make the things around us. Electrons also are what move through wires to make light, heat, and video games.
In 1869, before anyone knew anything about nuclei or electrons, Russian chemist Dmitry Mendeleyev grouped the elements according to their physical qualities and discovered the periodic law. He was able to predict the qualities of elements that had not yet been discovered. By the early 1900s scientists had discovered the nucleus and electrons.
Atoms stick together and form larger objects called molecules because of a force called electromagnetism. The best-known form of electromagnetism is radiation: light, radio waves, X rays, and infrared and ultraviolet radiation.
Modern physics starts with light and other forms of electromagnetic radiation. In 1900 German physicist Max Planck proposed the quantum theory, which says that light comes in units of energy called quanta. As we will explain, these units of light are waves and they are also particles. Light is simultaneously energy and matter. So is everything else.
It was Albert Einstein who first proposed (in 1905) that Planck's units of light can be considered particles. He named these particles photons. In the same year, Einstein published what is known as the special theory of relativity. According to this theory, the speed of light is the fastest that anything in the universe can go, and all forms of electromagnetic radiation are forms of light, moving at the same speed.
What differentiates radio waves, visible light, and X-rays is their energy. This energy is directly related to the wavelength. Light waves, like ocean waves, have peaks and troughs that repeat at regular intervals, and wavelength is the distance between each pair of peaks (or troughs). The shorter the wavelength, the higher the energy.
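Planck's relation makes this concrete: a photon's energy is E = hf = hc/λ, so energy rises as wavelength shrinks. A short sketch (the wavelength values are illustrative):

```python
# Photon energy from wavelength: E = h * c / wavelength.

H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, metres per second

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength (m)."""
    return H * C / wavelength_m

green = photon_energy(500e-9)   # visible (green) light, ~4e-19 J
radio = photon_energy(1.0)      # a 1 m radio wave
xray = photon_energy(0.1e-9)    # a hard X-ray
print(radio < green < xray)     # shorter wavelength, higher energy
```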
How does this relate to our story? It turns out that the process by which electrons interact is an exchange of photons (particles of light). Therefore we can study electrons by probing them with photons.
To really understand what things are made of, we must probe them or move them around and thus learn how they work. In the case of electrons, physicists probe them with photons, the particles that carry the electromagnetic force.
While some physicists studied electrons and photons, others pondered and probed the atomic nucleus. The nucleus of each chemical element contains a distinctive number of positively charged protons and a number of uncharged neutrons that can vary slightly from atom to atom. Protons and neutrons are the source of radioactivity and of nuclear energy. In 1964 physicists suggested that protons and neutrons are made of still smaller particles they called quarks.
Probing protons and neutrons requires particles with extremely high energies. Particle accelerators are large machines for bringing particles to these high energies. These machines have to be big, because they accelerate particles by applying force many times, over long distances. Some particle accelerators are the largest machines ever constructed. This is ironic given that these are delicate scientific instruments designed to probe the shortest distances ever investigated.
The proposal and acceptance of quarks were a major step in putting together what is called the standard model of particles and forces. This unified theory describes all of the fundamental particles, from which everything is made, and how they interact. There are twelve kinds of fundamental particles: six kinds of quarks and six kinds of leptons, including the electron.
Four forces are believed to control all the interactions of these fundamental particles. They are the strong force, which holds the nucleus together; the weak force, responsible for radioactivity; the electromagnetic force, which provides electric charge and binds electrons to atomic nuclei; and gravitation, which holds us on Earth. The standard model identifies a force-carrying particle to correspond with three of these forces. The photon, for example, carries the electromagnetic force. Physicists have not yet detected a particle that carries gravitation.
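The pairing of forces and carriers described above can be laid out as a small table. The carrier names for the strong and weak forces (the gluon, and the W and Z bosons) are standard-model facts not spelled out in the text above; the graviton remains hypothetical:

```python
# The four basic forces and their carrier particles in the standard
# model. The graviton has not been detected, so it is marked as such.
FORCE_CARRIERS = {
    "strong": "gluon",
    "weak": "W and Z bosons",
    "electromagnetic": "photon",
    "gravitation": "graviton (hypothetical)",
}

for force, carrier in FORCE_CARRIERS.items():
    print(f"{force}: carried by the {carrier}")
```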
Powerful mathematical techniques called gauge field theories allow physicists to describe, calculate, and predict the interactions of these particles and forces. Gauge theories combine quantum physics and special relativity into consistent equations that produce extremely accurate results. The extraordinary precision of quantum electrodynamics, for example, has filled our world with ultrareliable lasers and transistors.
The mathematical rules that come together in the standard model can explain every particle physics phenomenon that we have ever seen. Physicists can explain forces; they can explain particles. However, they cannot yet explain why forces and particles are what they are. Basic properties, such as the speed of light, must be taken from measurements. Physicists cannot yet provide a satisfactory description of gravity.
The basic behaviour of gravity was taught to us by English physicist Sir Isaac Newton. After helping to create the basics of quantum physics and formulating the theory of special relativity, Albert Einstein in 1915 clarified and extended Newton's explanation with his own description of gravity, known as general relativity. Not even Einstein, however, could bring general relativity and quantum physics together into a single unified field theory. Since everything else is governed by quantum physics on small scales, what is the quantum theory of gravity? No one has yet proposed a satisfactory answer to this question. Physicists have been trying to find one for a long time.
At first, this might not seem to be an important problem. Compared with other forces, gravity is extremely weak. We are aware of its action in everyday life because its pull corresponds to mass, and Earth has a huge amount of mass and hence a big gravitational pull. Fundamental particles have tiny masses and hence a minuscule gravitational pull. So couldn’t we just ignore gravity when studying fundamental particles? The ability to ignore gravity on this scale is why we have made so much progress in particle physics over so many years without possessing a theory of quantum gravity.
There are several reasons, however, why we cannot ignore gravity forever. One reason is simply that scientists want to know the whole story. A second reason is that gravity, as Einstein taught us, is the essential physics of space and time. If this physics is not subject to the same quantum laws that any other physics is subject to, something is wrong somewhere. A third reason is that an understanding of quantum gravity is necessary to deal with some important questions in cosmology-for example, how did the universe get to be the way it is, and why did galaxies form?
Gravitation has been shown to spread in waves, and physicists theorize the existence of a corresponding particle, the graviton. The force of gravity, like everything else, has a natural quantum length. For gravity it is about 10⁻³⁵ m, a length roughly a hundred billion billion times smaller than a proton.
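This natural quantum length of gravity is known as the Planck length, l_P = sqrt(ħG/c³) ≈ 1.6 × 10⁻³⁵ m. A quick check with rounded values of the constants:

```python
# The Planck length, the natural quantum length of gravity,
# computed from fundamental constants: l_P = sqrt(hbar * G / c**3).
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 / (kg * s^2)
C = 2.998e8        # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(planck_length)  # about 1.6e-35 m
```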
We can't build an accelerator to probe that distance using today's technology, because the proportions of size and energy show that it would stretch from here to the stars. However, we know that the universe began with the big bang, when all matter and force originated. Everything around us today formed in the period after the big bang, as the universe expanded. Everything we know indicates that in the fractions of a second following the big bang, the universe was extremely small and dense. At some earliest time, the entire universe was no larger across than the quantum length of gravity. If we are to understand the true nature of where everything comes from and how it really fits together, we must understand quantum gravity.
These questions may seem almost metaphysical. Physicists now suspect that research in this direction will answer many other questions about the standard model, such as why there are so many different fundamental particles. Other questions are more immediately practical. Our control of technology arises from our understanding of particles and forces. Answers to physicists' questions could increase computing power or help us find new sources of energy. They will shape the 21st century as quantum physics has shaped the 20th.
Among the most promising new theories is the idea that everything is made of fundamental ‘strings,’ rather than of another layer of tiny particles. The best analogy for these minute entities is a guitar or violin string, which vibrates to produce notes of different frequencies and wavelengths. Superstring theory proposes that if we were able to look closely enough at a fundamental particle-at quantum-length distances-we would see a tiny, vibrating loop!
In this view, all the different types of fundamental particles that we find in the standard model are really just different vibrations of the same string, which can split and join in ways that change its evident nature. This is the case not only for particles of matter, such as quarks and electrons, but also for force-carrying particles, such as photons.
This is a very clever idea, since it unifies everything we have learned in a simple way. In its details, the theory is extremely complicated but very promising. For example, the superstring theory very naturally describes the graviton among its vibrations, and it also explains the quantum properties of many types of black holes. There are also signs that the quantum length of gravity is really the smallest physically possible distance. Below this scale, points in space and time are no longer connected in sequence, so distances cannot be measured or described. The very notions of space, time, and distance seem to stop making sense.
Recent discoveries have shown that the five leading versions of superstring theory are all contained within a powerful complex known as M-Theory. M-Theory says that entities mathematically resembling membranes and other extended objects may also be important. The end of the story has not yet been written, however. Physicists are still working out the details, and it will take many years to be confident that this approach is correct and comprehensive. Much remains to be learned, and surprises are guaranteed. In the quest to probe these small distances, experimentally and theoretically, our understanding of nature is forever enriched, and we approach at least a part of ultimate truth.
Elementary Particles, in physics, are particles that cannot be broken down into any other particles. The term elementary particles is also used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken down further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.
Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analysing information about the world, they showed that these materials were not fundamental but were made of other substances.
In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible’). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.
Scientists now believe that quarks and three other types of particles-leptons, force-carrying bosons, and the Higgs boson-are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.
Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.
In 1925 Austrian-born American physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe electron behaviour, providing mathematical proof of the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that make them up are all examples of fermions.
Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the same characteristics. In 1925 German-born American physicist Albert Einstein and Indian mathematician Satyendra Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.
Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, combining an even number of fermions yields a boson, while combining an odd number of fermions results in a fermion. Combining any number of bosons yields a boson.
For example, a hydrogen atom contains two fermions: an electron and a proton. Yet the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.
A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, also called deuterium, is a hydrogen atom with a neutron added to the nucleus. A deuterium atom contains three fermions: one proton, one electron, and one neutron. Since the deuterium atom contains an odd number of fermions, it too is a fermion. Just like its constituent particles, the deuterium atom must obey the exclusion principle. It cannot have the same properties as another deuterium atom.
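The counting rule of the last few paragraphs is easy to state in code. A minimal sketch (the function name is ours, not standard terminology):

```python
# A composite particle is a fermion exactly when it contains an odd
# number of fermion constituents; boson constituents never change this.

def is_fermion(n_fermion_constituents):
    """True if a composite with this many fermion constituents is
    itself a fermion (odd count), False if it is a boson (even count)."""
    return n_fermion_constituents % 2 == 1

# Hydrogen atom: one proton + one electron = 2 fermions -> a boson.
print(is_fermion(2))  # False
# Heavy hydrogen: proton + neutron + electron = 3 fermions -> a fermion.
print(is_fermion(3))  # True
```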
The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows a group of photons to form the coherent beam of identical particles produced by a laser.
The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (the force that holds the nuclei of atoms together), and the weak (the force that causes atoms to decay radioactively). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.
Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10⁻¹⁹ coulombs. Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +2/3 or -1/3, while antiquarks have electric charges of either -2/3 or +1/3. Leptons interact weakly with one another and with other particles, while quarks interact strongly with one another.
Leptons and quarks each come in 6 varieties. Scientists divided these 12 basic types into 3 groups, called generations. Each generation consists of 2 leptons and 2 quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generation tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.
Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e-) and the electron neutrino (νe); the second generation, the muon (µ-) and the muon neutrino (νµ); and the third generation, the tau (τ-) and the tau neutrino (ντ).
The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of -1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms radioactively decay, they sometimes emit an electron in a process called beta decay.
Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.
Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth's atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and a half-life of 1.52 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.
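Given a half-life, the surviving fraction of a muon population follows the standard decay law N(t) = N₀ · (1/2)^(t / t_half). A small sketch, assuming simple exponential decay and ignoring relativistic time dilation (which in practice lets fast atmospheric muons survive far longer in our frame):

```python
# Fraction of muons surviving after time t, from the half-life:
# N(t) / N0 = 0.5 ** (t / t_half). Relativistic time dilation,
# which matters for fast atmospheric muons, is ignored here.

MUON_HALF_LIFE = 1.52e-6  # seconds

def surviving_fraction(t_seconds, t_half=MUON_HALF_LIFE):
    return 0.5 ** (t_seconds / t_half)

print(surviving_fraction(1.52e-6))  # 0.5 after one half-life
print(surviving_fraction(3.04e-6))  # 0.25 after two half-lives
```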
The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short, only 0.3 picoseconds (a picosecond is one-trillionth of a second). Scientists believe the tau has an electrically neutral partner called the tau neutrino. While scientists have never detected a tau neutrino directly, they believe they have seen the effects of tau neutrinos during experiments. Like the other neutrinos, the tau neutrino has a very small mass or no mass at all.
The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavours,’ divided into three generations. Unlike leptons, however, quarks never exist alone-they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.
Quarks are unique among all elementary particles in that they have fractional electric charges: either +2/3 or -1/3. In an observable particle, the fractional charges of quarks in the particle add up to an integer charge for the combination.
The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.
The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +2/3, and the down quark has a charge of -1/3. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +2/3, and the strange quark has a charge of -1/3. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +2/3, and the bottom quark has a charge of -1/3. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.
Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).
Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π-), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +2/3 and the down antiquark has charge +1/3, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is -2/3 plus -1/3, or -1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of about 0.09 femtoseconds (a femtosecond is one-quadrillionth of a second).
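The charge bookkeeping described above can be verified in a few lines of code. This is an illustrative sketch (the `hadron_charge` helper and the quark-letter convention are my own naming, not a standard library): using exact fractions keeps the arithmetic precise.

```python
from fractions import Fraction

# Electric charges of the quarks; antiquarks carry the opposite charge.
QUARK_CHARGE = {
    'u': Fraction(2, 3), 'd': Fraction(-1, 3),
    'c': Fraction(2, 3), 's': Fraction(-1, 3),
    't': Fraction(2, 3), 'b': Fraction(-1, 3),
}

def hadron_charge(quarks='', antiquarks=''):
    """Sum the fractional charges; each antiquark contributes the negated charge."""
    return (sum(QUARK_CHARGE[q] for q in quarks)
            - sum(QUARK_CHARGE[q] for q in antiquarks))

print(hadron_charge(quarks='u', antiquarks='d'))  # positive pion: +1
print(hadron_charge(quarks='d', antiquarks='u'))  # negative pion: -1
print(hadron_charge(quarks='u', antiquarks='u'))  # neutral pion: 0
```

Every observable combination comes out as a whole number, as the text notes, even though the individual quark charges are fractional.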
Many other mesons exist. All six quarks play a part in the formation of mesons, although mesons containing heavier quarks like the top quark have very short lifetimes. Other mesons include the kaons (pronounced KAY-ons) and the D particles. Kaons (K) and Ds come in several different varieties, just as pions do. All varieties of kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.
Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +2/3 plus +2/3 plus -1/3, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.
The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +2/3 plus -1/3 plus -1/3 for a net charge of 0, making the neutron electrically neutral. Neutrons have a greater mass than protons and an average lifetime of 930 seconds.
Many other baryons exist, and many contain quarks other than the up and down flavours. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.
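The "average lifetime" figures quoted above follow the exponential decay law: a population of unstable particles with mean lifetime τ shrinks by a factor of e every τ seconds. A small sketch, using the positive pion lifetime given earlier in the text:

```python
import math

def surviving_fraction(t, mean_lifetime):
    """Fraction of an unstable particle population remaining after time t.

    Exponential decay law: N(t)/N0 = exp(-t / tau), where tau is the
    mean lifetime.
    """
    return math.exp(-t / mean_lifetime)

tau_pion = 26e-9  # positive pion mean lifetime: 26 nanoseconds (from the text)
print(surviving_fraction(26e-9, tau_pion))   # after one lifetime: ~0.37
print(surviving_fraction(130e-9, tau_pion))  # after five lifetimes: ~0.0067
```

After just five mean lifetimes, less than one percent of the original pions remain, which is why short-lived particles must be studied immediately after they are created.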
British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles interact with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles interact with one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.
All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton. The antiproton consists of antiquarks: two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.
When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.
Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.
All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.
Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.
Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent's hands never come into contact with the atoms of the child. The electrons in the parent's hands repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.
Theories about elementary particles, however, require forces to be local-that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.
Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle-any number of force carriers can have the same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.
For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born, and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.
Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers-they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.
In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.
Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.
A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle's electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of anti-red (also called cyan), anti-blue (also called yellow), or anti-green (also called magenta). Quark types and colours are not linked-an up quark, for example, may be red, green, or blue.
All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark's anti-colour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.
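The colour-cancellation rule can be captured in a toy bookkeeping model. This is my own illustrative sketch, not standard physics code: each colour counts as +1 in its channel and each anticolour as -1, and a combination is colourless when all three channels carry the same net value (all zero for a meson's colour/anticolour pair, or all one for a baryon's red + green + blue).

```python
def is_colour_neutral(colours):
    """Toy check of colour neutrality for a list of colour labels."""
    net = {'red': 0, 'green': 0, 'blue': 0}
    for c in colours:
        if c.startswith('anti'):
            net[c[len('anti'):]] -= 1  # anticolour cancels its colour
        else:
            net[c] += 1
    # Colourless: all three channels equal, e.g. (0,0,0) or (1,1,1).
    return len(set(net.values())) == 1

print(is_colour_neutral(['red', 'antired']))        # meson: True
print(is_colour_neutral(['red', 'green', 'blue']))  # baryon: True
print(is_colour_neutral(['red', 'green']))          # not observable: False
```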
The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.
Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.
The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.
In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.
While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.
All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. Yet the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.
The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.
One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.
The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other's antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as the massless force carriers of the other forces, so the weak force acts over shorter distances than the other three forces.
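The link between a carrier's mass and the force's range can be estimated with the standard Yukawa back-of-envelope formula, range ≈ ħ/(mc), conveniently computed as (ħc)/(mc²). This estimate and the W boson mass value are standard physics not stated in the text, so treat the sketch as illustrative:

```python
# Back-of-envelope Yukawa range estimate for a force with a massive carrier.
HBAR_C_MEV_FM = 197.327  # hbar * c, in MeV * femtometres (standard value)

def force_range_fm(carrier_rest_energy_mev):
    """Approximate range in femtometres: (hbar*c) / (m*c^2)."""
    return HBAR_C_MEV_FM / carrier_rest_energy_mev

# W boson rest energy is roughly 80,400 MeV (measured value, not from the text).
print(force_range_fm(80_400))  # ~0.0025 fm, far smaller than a proton (~1 fm)
```

The result, a few thousandths of a femtometre, shows why the weak force is effectively a contact interaction even inside a nucleus.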
When the weak force affects a particle, the particle emits one of the three weak vector bosons-W+, W-, or Z0 -and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.
Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.
Gravitation is the weakest of the four forces on the scale of individual particles, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects, because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.
Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (Gravitational Lens).
The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.
Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.
Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.
The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.
Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.
Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.
The American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with the electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.
The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.
One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles, and therefore require too much energy to create with current particle accelerators.
Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time-some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry with string theory results in superstring theories. Superstring theories are one of the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because they have not detected the additional dimensions required by string theory.
Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles-leptons, quarks, force-carrying bosons, and the Higgs boson-appear to be ‘point particles.’ A point particle is infinitely small, and it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.
In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.
Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.
Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.
When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
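The conversion of collision energy to matter follows Einstein's E = mc². As a minimal sketch (the helper name and the standard constant values are my additions, not from the text), here is the energy bill for creating a proton-antiproton pair:

```python
# Minimum energy a collision must supply to create particles of a given rest mass,
# via E = m * c^2, expressed in GeV (the usual particle-physics unit).
C = 299_792_458.0                  # speed of light, m/s
PROTON_MASS_KG = 1.672_621_9e-27   # proton rest mass (standard value)
JOULES_PER_GEV = 1.602_176_6e-10   # conversion factor

def creation_energy_gev(total_mass_kg):
    """Energy in GeV equivalent to the given rest mass."""
    return total_mass_kg * C**2 / JOULES_PER_GEV

# Creating a proton-antiproton pair costs at least twice the proton rest energy.
print(creation_energy_gev(2 * PROTON_MASS_KG))  # ~1.88 GeV
```

Any kinetic energy beyond this minimum is available to produce the "new and unusual particles" mentioned earlier, which is why accelerators push particles to ever higher energies.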
Particle accelerators come in two basic types-linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (non-flat-screen) television sets and computer monitors use this method to accelerate electrons.
On January 1, 2000, people around the world celebrated the arrival of a new millennium. Some observers noted that the Gregorian calendar, which most of the world uses, began in AD 1, and that the new millennium therefore truly began in 2001. This detail failed to stem millennial festivities, but the issue shed light on the arbitrary nature of the way human beings have measured time for . . . well . . . several millennia.
Few people know that the fellow responsible for the dating of the year 2000 was a diminutive Christian monk who lived nearly 15 centuries ago. The Romans called him Dionysius Exiguus-literally, Dennis the Little. His stature, however, could not contain his colossal aspiration: to reorder time itself. The tiny monk's efforts paid off. His work helped establish the basis for the Gregorian calendar used today throughout the world.
Dennis the Little lived in Rome during the 6th century, a generation after the last emperor was deposed. The eternal city had collapsed into ruins: Its walls had been breached, its aqueducts were shattered, and its streets were eerily silent. A trained mathematician, Dennis spent his days at a complex now called the Vatican, writing church canons and thinking about time.
In the year that historians now know as 525, Pope John I asked Dennis to calculate the dates upon which future Easters would fall. Then, as now, this was a complicated task, given the formula adopted by the church some two centuries earlier: that Easter would fall on the first Sunday after the first full Moon following the spring equinox. Dennis carefully studied the positions of the Moon and the Sun and produced a chart of upcoming Easters, beginning in 532. A calendar beginning in the year 532 probably struck Dennis's contemporaries as strange. For them the year was either 1285, dated from the founding of Rome, or 248, based on a calendar that started with the first year of the reign of Emperor Diocletian.
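The church's rule was eventually reduced to pure arithmetic. Dennis's own sixth-century tables were built on the Julian calendar, but as an illustration of the kind of computation involved, here is the standard anonymous Gregorian computus, a modern algorithm rather than Dennis's own method:

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Anonymous Gregorian computus: returns (month, day) of Easter Sunday."""
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # "epact": approximate age of the Moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7 # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(gregorian_easter(2000))  # (4, 23): April 23, 2000
```

The interlocking modular arithmetic tracks exactly what Dennis tracked by hand: the Moon's 19-year cycle against the solar year, then the roll-forward to the next Sunday.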
Dennis approved of neither accepted date, especially not the one glorifying the reign of Diocletian, a notorious persecutor of Christians. Instead, Dennis calculated his years from the reputed birth date of Jesus Christ. Justifying his choice, Dennis wrote that he “preferred to count and denote the years from the incarnation of our Lord, in order to make the foundation of our hope better known.” Dennis's preference appeared on his new Easter charts, which began with anno Domini nostri Jesu Christi DXXXII (Latin for “in the year of our Lord Jesus Christ 532”), or AD 532.
However, Dennis got his dates wrong. Modern biblical historians believe Jesus Christ was most likely born in 4 or 5 BC, not in the year Dennis called AD 1, although no one knows for sure. The real 2,000-year anniversary of Jesus' birth was therefore probably 1996 or 1997. Dennis pegged the birth of Christ to the year AD 1, rather than AD 0, for the simple reason that Roman numerals had no zero. The mathematical concept of zero did not reach Europe until some eight centuries later. So the wee abbot started with year 1, and 2,000 years from the start of year 1 is not January 1, 2000, but January 1, 2001-a date many people find far less interesting.
These errors, however, are hardly unique in the complicated history of the Gregorian calendar, which is essentially a story of attempts, and failures, to get time right. It was not until 1949, when Communist leader Mao Zedong seized power in China, that the Gregorian calendar became the world's most widely accepted dating system. Mao ordered the changeover, believing that replacing the ancient Chinese lunar calendar with the more accurate Gregorian calendar was central to China's march toward modernity.
Mao's order completed the world conquest of a calendar that takes its name from a 16th-century pope, Gregory XIII. Gregory earned his fame by revising the calendar already modified by Dennis and first launched by Roman leader Julius Caesar in 46 BC. Caesar, in turn, borrowed his calendar from the Egyptians, who invented their calendar some 4,000 years before that. On the long road to the Gregorian calendar, fragments of many other time-measuring schemes were incorporated-from India, Sumer, Babylon, Palestine, Arabia, and pagan Europe.
Despite persistent human efforts to track the passage of time, nearly every calendar ever created has been inaccurate. One reason is that the solar year (the precise amount of time it takes the Earth to revolve once around the Sun) runs an awkward 365.242199 days-hardly an easy number to calculate without modern instruments. Another complication is the tendency of the Earth to wobble and wiggle ever so slightly in its orbit, yanked this way and that by the Moon's elliptical orbit and by the gravitational tug of the Sun. As a result, each year varies in length by a few seconds, making the exact length of any given year extraordinarily difficult to pin down.
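The Gregorian answer to this awkward fraction is the familiar leap-year rule, which approximates the tropical year (about 365.2422 days) as 365.2425 days, leaving a residual drift of roughly 26 seconds per year. A quick check of the arithmetic in Python:

```python
def is_leap(year: int) -> bool:
    """Gregorian rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average year length over one full 400-year Gregorian cycle:
leap_days = sum(is_leap(y) for y in range(1, 401))  # 97 leap years per cycle
avg = 365 + leap_days / 400                         # 365.2425 days on average

tropical = 365.2422                                 # approximate tropical year
drift_seconds = (avg - tropical) * 86400            # residual error per year
print(avg, drift_seconds)                           # ~26 seconds of drift
```

At that rate the calendar slips a full day only after roughly 3,300 years, which is why no further reform has seemed urgent.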
If this sounds like splitting hairs, it is. Yet it also highlights some of the difficulties faced by astronomers, kings, priests, and other calendar makers, who tracked the seasons to know when to plant crops, collect taxes, or follow religious rituals.
The first efforts to keep a record of time probably occurred tens of thousands of years ago, when ancient humans in Europe and Africa peered up at the Moon and realized that its phases recurred in a steady, predictable fashion. A few people scratched what they saw onto rocks and bones, creating what may have been the world's first calendars. Heady stuff for skin-clad hominids, these calendars enabled them to predict when the silvery light would be available to hunt or to raid rival clans and to know how many full Moons would pass before the chill of winter gave way to spring.
The keepers of the world's atomic clocks added a leap second to UTC. Millennium watchers everywhere began wondering whether they should add a second to the countless clocks on buildings, in shops, and in homes that were counting down the third millennium to the very second. Most, though not all, made the change, adding another second of uncertainty to the question of when the new millennium begins.
Meanwhile, the calendar invented by Caesar and Dennis the Little moves forward, rushing toward the next millennium 1,000 years from now-the progression of days, weeks, months, and years that appears to be here to
stay, despite its flaws. Other calendars have been proposed to eliminate small errors in the Gregorian calendar. Some reformers, for example, support making the unequal months uniform by updating the ancient Egyptian scheme of 12 months of 30 days each, with 5 days remaining as holidays.
During the French Revolution, the government of France adopted the
Egyptian calendar and decreed 1792 the year 1, a system that lasted until Napoleon restored the Gregorian calendar in 1806. More recently the United Nations (UN) and the Congress of the United States have reconsidered this historic alternative, calling it the World Calendar. To date, however, people seem content to use an ancient calendar designed by a Roman conqueror and an obscure abbot rather than fixing it or making it more accurate. Perhaps most of us prefer the illusion of a fixed time-line over admitting that time has meaning only because we say it does.
EVOLVING PRINCIPLES OF THOUGHT
BOOK THREE
METAPHYSICAL THINKING
In any case, we should not assume that a thoughtful conclusion in the study of consciousness can be lightly dismissed as fallacious. This becomes even more important when, exercising the caution that ordinary human fallibility demands, we try to move forward toward a positive conclusion on the topic.
Many writers, along with a few well-known new-age gurus, have played fast and loose with interpretations that ground the mental in some vague sense of cosmic consciousness. This discussion is no such thing: if it is erroneously placed in the new-age section of a commercial bookstore and purchased by those interested in new-age literature, they will be quite disappointed.
What makes our species unique is the ability to construct a virtual world in which the real world can be imaged and manipulated in abstract forms and ideas. Evolution has produced hundreds of thousands of species with brains, of which tens of thousands have complex behavioural and learning abilities. There are also many species in which sophisticated forms of group communication have evolved. For example, birds, primates, and social carnivores use extensive vocal and gestural repertoires to structure behaviour in large social groups. Although we share roughly 98 percent of our genes with our primate cousins, the course of human evolution widened the cognitive gap between us and all other species, including our cousins, into a yawning chasm.
Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory information is associated with the temporal area; tactile information is associated with the parietal area; and attention, working memory, and planning are associated with the frontal cortex of the left, or dominant, hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca's area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke's area, by the auditory cortex, is associated with sound analysis in the sequencing of words.
Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, we thought the cerebellum to be exclusively involved with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing notes on a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also active during speech. It is most activated when the speaker is making difficult word associations, which suggests that the cerebellum contributes by providing access to automatic word sequences and by augmenting rapid shifts in attention.
The midbrain and brain stem, situated on top of the spinal cord, coordinate the many input and output systems that play a crucial role in communicative functions. Vocalization has some special associations with the midbrain, which coordinates the interaction of the oral and respiratory tracts necessary to make speech sounds. Since vocalization requires synchronous activity among oral, vocal, and respiratory muscles, these functions probably connect to a central site. This site appears to be the central gray area of the midbrain. The central gray links the reticular nuclei and brain-stem motor nuclei into a distributed network for sound production. While human speech is dependent on structures in the cerebral cortex and on rapid movement of the oral and vocal muscles, this is not true of vocalization in other mammals.
Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved as separate modules and were eventually wired together on some neural circuit board.
Similarly, individual linguistic symbols are handled by clusters of distributed brain areas and are not confined to a particular area. The specific sound patterns of words may be produced in dedicated regions. All the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. Word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions, each requiring input from the others. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressure in this new ecological niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication resulted in increasingly complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Although male and female hominids favoured pair bonding and created more complex social organizations in the interests of survival, the interplay between social evolution and biological evolution changed the terms of survival radically. The enhanced ability to use symbolic communication to structure social interaction eventually made this communication the largest determinant of survival. Since this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, this marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
Nonetheless, if we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the actual experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Most experts agree that our ancestors became capable of spoken language based on complex grammar and syntax between two hundred thousand and one hundred thousand years ago. The mechanisms in the human brain that allowed for this great achievement, however, clearly evolved over great spans of time. In biology textbooks, the lists of prior adaptations that enhanced the ability of our ancestors to use communication normally include those that increased intelligence, altered oral and auditory abilities, and localized functional representations on the two sides of the brain. When we look at how our ability to use language could actually have evolved over the entire course of hominid evolution, however, the process seems both more basic and more counterintuitive than we had previously imagined, and it does not appear to require the evolution of some innate or hard-wired grammar.
Although we share some aspects of vocalization with our primate cousins, the mechanisms of human vocalization are quite different and have evolved over great spans of time. Incremental increases in hominid brain size over the last 2.5 million years enhanced cortical control over the larynx, which originally evolved to prevent food and other particles from entering the windpipe, or trachea. This eventually contributed to the use of vocal symbolization. Humans have more voluntary motor control over sound produced in the larynx than any other vocal species, and this control is associated with higher brain systems involved in skeletal muscle control as opposed to just visceral control. As a result, humans have direct cortical motor control over phonation and oral movement, while chimps do not.
The larynx in modern humans is positioned comparatively low in the throat, which significantly increases the range and flexibility of sound production. The low position of the larynx allows greater changes in the volume of the resonant chamber formed by the mouth and pharynx and makes it easier to shift sounds to the mouth and away from the nasal cavity. As a result, the sounds that comprise the vowel components of speech become much more variable, including extremes in resonance combinations such as the “ee” sound in “tree” and the “aw” sound in “flaw.” Equally important, the repositioning of the larynx dramatically increases the ability of the mouth and tongue to modify vocal sounds. This shift in the larynx also makes it more likely that food and water passing over the larynx will enter the trachea, and this explains why humans are more inclined to experience choking. Yet this disadvantage, which could have caused the shift to be selected against, was clearly outweighed by the advantage of being able to produce all the sounds used in modern language systems.
Some have argued that this removal of constraints on vocalization suggests that spoken language based on complex symbol systems emerged quite suddenly in modern humans only about one hundred thousand years ago. It is, however, far more likely that language use began with very primitive symbolic systems and evolved over time into increasingly complex systems. The first symbolic systems were not full-blown language systems, and they were probably not as flexible and complex as the vocal calls and gestural displays of modern primates. The first users of primitive symbolic systems probably coordinated most of their social communications with call and display behaviours like those of modern apes and monkeys.
Critically important to the evolution of enhanced language skills is that behavioural adaptations preceded and situated the biological changes. This represents a reversal of the usual course of evolution, where biological change precedes behavioural adaptation. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche where selective pressures occasioned new adaptations. As tool use became more indispensable for obtaining food and organizing social behaviours, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.
The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. These primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were most likely created by Homo habilis, the first large-brained hominid. Stone-tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist for other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already beginning the process of becoming adapted for symbol learning.
The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptations in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. Increased connectivity to brain regions involved in language processing enhanced this adaptation over time.
It is easy to imagine why incremental improvements in symbolic representation provided a selective advantage. Symbolic communication probably enhanced cooperation in the relationship of mothers to infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was used grew longer over time, this probably resulted in new selective pressures that served to make this communication more elaborate. After more functions became dependent on this communication, those who failed at symbol learning, or could only use symbols awkwardly, were less likely to pass on their genes to subsequent generations.
The crude language of the earliest symbol users must have been heavily supplemented by gestures and nonsymbolic vocalizations, and their spoken language probably only gradually became an independent and closed symbolic system. Only after hominids able to use symbolic communication evolved did symbolic forms progressively take over functions served by nonsymbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject must confront the idea of a perceivable, objective spatial world that causes his perceptions, perceptions that change as he changes position within an essentially stable world. The idea that there is an objective world goes together with the idea that the subject is somewhere within it, and where he is, is given by what he can perceive.
It was Darwin who realized that the different chances of survival of differently endowed offspring could account for the natural evolution of species. Nature “selects” those members of a species best adapted to the environment in which they find themselves, just as human animal breeders may select for desirable traits in their livestock, and thereby control the evolution of the kind of animal they wish. In the phrase of Spencer, nature guarantees the “survival of the fittest.” The Origin of Species was more successful in marshalling the evidence for evolution than in providing a convincing mechanism for genetic change; Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the “gene” as the unit of inheritance that the synthesis known as “neo-Darwinism” became the orthodox theory of evolution.
The solution to the mystery of how evolution by natural selection can shape sophisticated mechanisms is found in the workings of natural selection itself. The process is fundamentally simple: natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk taking or lack of interest in sex, will never become common. On the other hand, genes that confer resistance to infection, appropriate risk taking, and success in choosing fertile mates are likely to spread in the gene pool even if they have substantial costs.
A classic example is the spread of a gene for dark wing colour in a British moth population living downwind from a major source of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by birds, while a rare mutant form of the moth, whose colour closely matched that of the bark, escaped the predators' beaks. As the tree trunks became darkened, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. All of this is to say that natural selection involves no plan, no goal, and no direction-just genes increasing and decreasing in frequency depending on whether individuals with those genes have, compared with other individuals, greater or lesser reproductive success.
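The logic of the moth example can be put in a few lines of code. The sketch below is a toy haploid selection model with made-up fitness values (the real field data were more complicated), showing how a rare advantageous gene comes to displace the common one purely through differential reproduction:

```python
# Toy haploid selection model of the peppered-moth example.
# The fitness values are illustrative assumptions, not field measurements.

def next_frequency(p: float, w_dark: float, w_pale: float) -> float:
    """One generation of selection: allele frequency weighted by relative fitness."""
    mean_fitness = p * w_dark + (1 - p) * w_pale
    return p * w_dark / mean_fitness

p = 0.01                       # the dark-wing gene starts out rare
for generation in range(100):  # on sooty trees, dark moths survive predation better
    p = next_frequency(p, w_dark=1.0, w_pale=0.8)
print(p)                       # the dark gene has largely displaced the pale one
```

Note that nothing in the model "aims" at anything: the frequency shifts generation by generation simply because one variant leaves more survivors, which is exactly the point of the paragraph above.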
The simplicity of natural selection has been obscured by many misconceptions. For instance, Herbert Spencer's nineteenth-century catchphrase “survival of the fittest” is widely thought to summarize the process, but it actually gives rise to several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only insofar as it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will obviously be eliminated by selection even if it increases an individual's survival.
Further confusion arises from the ambiguous meaning of “fittest.” The fittest individuals in the biological sense are not necessarily the healthiest, strongest, or fastest. In today's world, as in many of those of the past, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents are often more concerned about their children's reproduction than the children themselves are.
A gene or an individual cannot be called “fit” in isolation but only with reference to a particular species in a particular environment. Even in a single environment, every gene involves compromise. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might be, on average, somewhat less well fed than their bolder companions. If, of a hundred timid rabbits hunkered down in the March swamps awaiting spring, two-thirds starve to death, while this is the fate of only one-third of the rabbits who lack the gene for fearfulness, the gene will have been selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect; it all depends on the current environment.
The version of an evolutionary ethic called “social Darwinism” emphasizes the struggle for natural selection and draws the conclusion that we should glorify such struggle, usually by encouraging competitive and aggressive relations between people in society, or between societies themselves. More recently, the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin selection.
The most critical precondition for the evolution of this brain, however, cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
Because this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete accounting of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. Similarly, no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.
If we include both of these aspects of biological reality, the emergence of a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. Even the entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new and profound complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be “real” only when it is an “observed” phenomenon, we are led to some interesting conclusions. The indivisible whole whose existence is inferred from the results of the Aspect experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we here encounter an “event horizon” of knowledge where science can say nothing about the actual character of this reality. If non-locality is a property of the entire universe, then we must also conclude that undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or “actualized” in making acts of observation or measurement. The reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment; the correlations between the particles, and the sum of these parts, do not constitute the “indivisible” whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actual character of the indivisible whole.
The scientific implications of this extraordinary relationship between parts (quanta) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn here should appear self-evident in logical and philosophical terms. Nor is it necessary to attribute any extra-scientific properties to the whole in order to understand and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. It is here, however, that we must carefully distinguish between what can be "proven" in scientific terms and what can be reasonably "inferred" in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of a two-culture divide. Perhaps more important, many potential threats to the human future (environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation) can be effectively addressed only by integrating scientific knowledge with knowledge from the social sciences and humanities. One simple reason we have not done so is that the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important here cannot be understood in the absence of this background; those who do not wish to struggle with it should feel free to ignore it. The hope, however, is that this material will provide a common ground for understanding.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on a complex language system, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between this self and other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separate and distinct from the material realm. It was the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing about a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to "see" that some result follows, or that some description is appropriate, or our inability to describe the situation may itself have some consequence. Thought experiments played a major role in the development of physics: for example, Galileo probably never dropped two balls of unequal weight from the Leaning Tower of Pisa in order to refute the Aristotelian view that a heavy body falls faster than a lighter one. He merely asked us to imagine a heavy body made into the shape of a dumbbell, and then the connecting rod made gradually thinner, until it is finally severed. The thing is one heavy body until the last moment, and then two light ones, but it is incredible that this final snip alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, either as substitutes for real experiments or as a reliable device for discerning possibilities. Thought experiments that one dislikes are sometimes called intuition pumps.
For familiar reasons, people are commonly characterized by their rationality, and the most evident display of our rationality is our capacity to think: the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium, that represents aspects of the world. However, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. Such an inner presence also seems unnecessary, since an intelligent outcome might in principle arise without it.
In the philosophy of mind and ethics the treatment of animals exposes major problems: if other animals differ from human beings, how is the difference to be characterized? Do animals think and reason, or have thoughts and beliefs? For philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals and alone allows entry to the moral community.
For Descartes, animals are mere machines, lacking consciousness or feelings. In the ancient world the rationality of animals was defended with the example of Chrysippus' dog. This animal, tracking its prey, comes to a crossroads with three exits, and without pausing to pick up the scent, reasons, according to Sextus Empiricus: the prey went either by this road, or by that road, or by the other; it did not go by this road or by that road; therefore, it went by the other. The "syllogism of the dog" was discussed by many later writers, since in Stoic cosmology animals were supposed to occupy a place on the great chain of being far below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog's behaviour does not exhibit rationality, but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo; Aquinas discusses the dog, and scholastic thought in general was quite favourable to brute intelligence (animals were quite commonly made to stand trial for various offences in medieval times). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defends the syllogizing dog, and Henry More and Gassendi both take issue with Descartes on the matter. Hume is an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals' silence began to count heavily against them, and they are denied thoughts altogether by, for instance, Davidson.
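The dog's inference, as Sextus reports it, is a process of elimination over three alternatives, in modern terms a disjunctive syllogism. As a minimal formal sketch (the propositions a, b, c standing for the three roads are of course illustrative placeholders), the argument can be rendered in Lean:

```lean
-- The 'syllogism of the dog': the prey went by road a, road b, or road c;
-- the scent rules out a and b; therefore it went by road c.
example (a b c : Prop) (h : a ∨ b ∨ c) (ha : ¬a) (hb : ¬b) : c :=
  h.elim (fun x => absurd x ha)      -- road a is ruled out
         (fun hbc => hbc.elim       -- remaining disjunction: b ∨ c
            (fun x => absurd x hb)  -- road b is ruled out
            id)                     -- so road c remains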
Dogs are frequently shown in pictures of philosophers, as symbols of their assiduity and fidelity.
Descartes's first work, the Regulae ad Directionem Ingenii (1628/9), was never completed. In Holland between 1628 and 1649, Descartes first wrote, and then cautiously suppressed, Le Monde (1634), and in 1637 produced the Discours de la méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian co-ordinates. His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections are: first set, the Dutch theologian Caterus; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes's penultimate work, the Principia Philosophiae (Principles of Philosophy), published in 1644, was designed partly for use as a theological textbook. His last work was Les Passions de l'âme (The Passions of the Soul), published in 1649. He died in Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been "Ça, mon âme, il faut partir" (so, my soul, it is time to part).
All the same, Descartes's theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible.
The Cartesian doubt is the method of investigating the extent of knowledge and its basis in reason or experience, used by Descartes in the first two Meditations. It attempts to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. A point of certainty is eventually found in the celebrated "Cogito ergo sum": I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and in proving the reliability of the senses he invokes a "clear and distinct perception" of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, "to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit."
Descartes's notorious denial that non-human animals are conscious is a stark illustration of the preference he gives to rational cogitation over anything derived from the senses, a preference that also shapes his conception of matter. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but ultimately an entirely geometrical one, with extension and motion as its only physical nature.
Although Descartes's epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.
The term instinct (Lat. instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason but by instinct was common to Aristotle and the Stoics, and the inflexibility of animal behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense being social may be instinctive in human beings; given what we now know about the evolution of human language abilities, however, our real or actualized self is clearly not imprisoned in our minds.
The self is implicitly a part of the larger whole of biological life; it derives its existence from its embedded relations to this whole, and it constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the "otherness" of self and world is an illusion, one that disguises the actual relations between the part and the whole. The self is related, as part to whole, to the larger reality of biological life. A proper definition of this whole must include the evolution of the cosmos and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which the self-regulating properties of the whole emerge, properties that in turn sustain the existence of the parts.
Ordinary language, with all its complications, conditioned the way these developments in physics and their metaphysical implications were described. In the history of science and mathematics, moreover, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. The classical paradigm in physics that emerged from this revolution resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it is an attempt to relate undivided wholeness and the character of physical reality to the epistemological foundations of physical theory.
The subjectivity of our mind affects our perception of the world that is held to be objective by natural science. One may instead view mind and matter as individualized forms that belong to the same underlying reality.
Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be the object, that which is opposed to us as subjects. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, and so on. Language objectifies our experience. Experiences per se are purely sensational and do not make a distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.
Some thinkers maintain that subject and object are only different aspects of experience. I can experience myself as subject and, in the act of self-reflection, as object. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. Conceptualization is negative insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject, for the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something; only the subject can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism in which the object and the subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.
The Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance of God. Cartesian dualism, however, contradicts itself: by positing the "I," that is, the subject, as the only certainty, Descartes defied materialism, and thus the concept of a "res extensa." The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object; the object is only derived, whereas the subject is original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a "res extensa," which means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Quite apart from the problem of interaction between these two different substances, then, Cartesian dualism is not adequate for explaining and understanding the subject-object relation.
By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was merely verbalize the subject-object relation in linguistic forms: it was no longer a metaphysical problem, but only a linguistic one, since our language has formed this subject-object dualism. Such thinkers are superficial, because they do not see that in the very act of their analysis they inevitably think in the mind-set of subject and object. By relativizing subject and object in terms of language and analytical philosophy, they avoid the elusive and problematic opposition of subject and object, which has been the fundamental question of philosophy ever since. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but a depreciation and decadence of the great philosophical ideas of mankind.
Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives. Every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining empirical facts. Mysticism, on the other hand, holds that there is an original unity of subject and object, and that to attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and his task is now to get back on track and strive toward this highest fulfilment. Yet are we not, on the conclusion reached above, forced to admit that the mystic way of thinking is likewise only a pattern of the mind, and that mystics, like the scientists, have their own frame of reference and methodology for explaining supra-sensible facts most successfully?
If we assume mind to be the originator of the subject-object dualism, then we can neither confer more reality on the physical aspect than on the mental one, nor deny the one in terms of the other.
The crude language of the earliest users of symbols must have been supplemented considerably by gestures and nonsymbolic vocalizations. Their spoken language probably became only gradually independent as a closed, cooperative system. Only after symbolic communication had evolved among hominids did vocal symbolic forms progressively take over functions served by non-vocal symbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.
The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. This brings with it the idea of a perceivable, objective spatial world that causes our subjective ideas of it, the subject distinguishing the perceptions that change as he changes position within the world from the more or less stable way the world is. The idea that there is an objective world goes together with the idea that the subject is somewhere in it, and where he is, is given by what he can perceive.
What research in neuroscience reveals, however, is that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules, as if separate modules had evolved and then been wired together on some neural circuit board.
While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be explained in these terms alone. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, this communication resulted in increasingly complex and intensely coordinated social behaviour. Social evolution began to take precedence over physical evolution in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.
This communication was based on symbolic vocalization, which required the evolution of neural mechanisms and processes that did not evolve in any other species, and it marked the emergence of a mental realm that would increasingly appear separate and distinct from the external material realm.
If the emergent reality in this mental realm cannot be reduced to, or entirely explained in terms of, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light in particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. Likewise, no scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.
If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.
Even if we are to include two aspects of biological reality, finding to a more complex order in biological reality is associated with the emergence of new wholes that are greater than the orbital parts. Yet, the entire biosphere is of a whole that displays self-regulating behaviour that is greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems. As marked and noted by the appearance of a new profound complementarity in relationships between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Thus far it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.
If we also concede that an indivisible whole contains, by definition, no separate parts and that a phenomenon can be assumed to be “real” only when it is “observed” phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred in the results of the aspectual experiments that cannot in principle is itself the subject of scientific investigation. There is a simple reason that this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront as the “event horizon” or knowledge where science can say nothing about the actual character of this reality. Why this is so, is a property of the entire universe, then we must also conclude that an undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing within science per se, however, are manifestations of tis reality, which are invoked or “actualized” in making acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience. As opposed to proven experiment, the correlations between the particles, and the sum of these parts, do not constitute the “indivisible” whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actualized character of the indivisible whole.
The scientific implications to this extraordinary relationship between parts (qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When factors into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.
All that is required to embrace the alternative view of the relationship between mind and world that are consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality or has an actual existence independent of human observers or any act of observation, epistemological realism assumes that progress in science requires strict adherence to scientific mythology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. Attributing any extra-scientific properties to the whole to understand is also not necessary and embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. This is, in this that our distinguishing character between what can be “proven” in scientific terms and what can be reasonably “inferred” in philosophical terms based on the scientific evidence.
Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of a two-culture divide. Perhaps, more important, many of the potential threats to the human future-such as, to, environmental pollution, arms development, overpopulation, and spread of infectious diseases, poverty, and starvation-can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason, the implications of the amazing new fact of nature sustaining the non-locality that cannot be properly understood without some familiarity wit the actual history of scientific thought. The intent is to suggest that what is most important about this back-ground can be understood in its absence. Those who do not wish to struggle with the small and perhaps, the fewer amounts of back-ground implications should feel free to ignore it. However, this material will be no more challenging as such, that the hope is that from those of which will find a common ground for understanding and that will meet again on this commonly functions in an effort to close the circle, resolves the equations of eternity and complete the universe to obtainably gain in its unification of which that holds within.
Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language system that is particularly relevant for our purposes concerns consciousness of self. Consciousness of self as an independent agency or actor is predicted on a fundamental distinction or dichotomy between this self and the other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separately distinct from the material realm. It was, the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.
In a thought experiment, instead of bringing about a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to “see” that some result follows, or that some description is appropriate, or our inability to describe the situation may itself have some consequence. Thought experiments played a major role in the development of physics: for example, Galileo probably never dropped two balls of unequal weight from the Leaning Tower of Pisa in order to refute the Aristotelian view that a heavy body falls faster than a lighter one. He merely asked us to imagine a heavy body made into the shape of a dumbbell, and the connecting rod made gradually thinner until it is finally severed. The thing is one heavy body until the last moment and then two light ones, but it is incredible that this final severing alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, either as substitutes for real experiment or as a reliable device for discerning possibilities. Thought experiments, liked by some thinkers and disliked by others, are sometimes called intuition pumps.
For familiar reasons, it is common to suppose that people are characterized by their rationality, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than their actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium, that represents aspects of the world. Still, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. Such an inner presence seems unnecessary, since an intelligent outcome might in principle arise without it.
In the philosophy of mind, as well as in ethics, the treatment of animals exposes major problems. If other animals differ from human beings, how is the difference to be characterized? Do animals think and reason, or have thoughts and beliefs? In philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals, and alone allows entry to the moral community.
For Descartes, animals are mere machines and lack consciousness or feelings. In the ancient world the rationality of animals was defended with the example of Chrysippus’ dog. This animal, tracking a prey, comes to a crossroads with three exits, and without pausing to pick up the scent, reasons, according to Sextus Empiricus: “The animal went either by this road, or by that, or by the other; but it did not go by this one or by that one; therefore it went by the other.” The ‘syllogism of the dog’ was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog’s behaviour does not exhibit rationality, but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo. Aquinas discusses the dog, and scholastic thought in general was quite favourable to brute intelligence (it was common in medieval times for animals to be made to stand trial for various offences). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defends the syllogizing dog, and Henry More and Gassendi both take issue with Descartes on the matter. Hume is an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals’ silence began to count heavily against them, and they are denied thoughts altogether by, for instance, Davidson.
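The dog’s inference is simply a disjunctive syllogism, and its validity can be checked mechanically. Here is a minimal sketch in Lean 4 (the theorem name is our own) of the form Sextus attributes to the animal: from “this road, or that, or the other” together with “not this one” and “not that one”, the third disjunct follows:

```lean
-- Chrysippus' dog at the crossroads: the scent went by one of three
-- roads; it is not on the first two; therefore it went by the third.
theorem dog_syllogism (p q r : Prop)
    (h : p ∨ q ∨ r) (hp : ¬p) (hq : ¬q) : r :=
  h.elim (fun hp' => absurd hp' hp)
    (fun h' => h'.elim (fun hq' => absurd hq' hq) id)
```

Whether the dog literally performs this inference, or merely behaves as if it does, is of course exactly what Philo and Alexander were disputing.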
Dogs are frequently shown in pictures of philosophers, as their assiduity and fidelity serve as a symbol of those same philosophical virtues.
The term instinct (Lat. instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changed circumstance and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of their behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. It is in this sense that being social may be instinctive in human beings. For all that, given what we now know about the evolution of human language abilities, our real or actualized self is clearly not imprisoned in our minds.
It is implicitly a part of the larger whole of biological life. The human observer derives its existence from embedded relations to this whole, and constructs its reality on the basis of evolved mechanisms that exist in all human brains. This suggests that any sense of the “otherness” of self and world is an illusion, one that disguises the actual relations between the parts and the whole. The self, related as it is to the temporality of being, is a biological reality. A proper definition of this whole cannot, of course, be confined to any of its parts: it must include the cosmos and the unbroken evolution of all life, from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating wholes emerge, wholes whose properties in turn sustain the existence of the parts.
Ordinary language, with all its complications, conditioned the developments that were made in describing physical reality and in addressing metaphysical concerns. In the history of mathematics and physics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. They also help us to understand how the classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it is an account of the principles of physical reality and the epistemological foundations of physical theory, with a view to undivided wholeness.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. The seeds of the scientific imagination were planted in ancient Greece, rather than in Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge, with greater margins of cultural accessibility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. However, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently came to this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view “essences” underlying and unifying physical reality as if they were “substances.”
Nonetheless, the belief that the mind of God as the Divine Architect permeates the workings of nature was a guiding principle of scientific thought, as pronounced by Johannes Kepler, and most contemporary physicists would probably feel some discomfort in reading Kepler’s original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. “Physical laws,” wrote Kepler, “lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God’s, at least as far as we can understand something of it in this mortal life.”
The history of science grandly testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into “customary points of view and forms of perception.” The framers of classical physics derived, like the rest of us, their “customary points of view and forms of perception” from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: “We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts.”
It is time for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to some ontology, and is in no way diminished by the lack of one. One is free to recognize a basis for an exchange between science and religion, just as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference here is the relationship between mind and world, with its defining features and fundamental preoccupations; there is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of that relationship. The essential point of attention is consciousness, which remains at a certain stage in our study.
But at the end of this sometimes laborious journey we arrive at conclusions that should make the trip very worthwhile. To anticipate, there is nothing in contemporary physics or biology that warrants belief in the stark Cartesian division between mind and world that some have rather aptly described as “the disease of the Western mind.” Let us first consider, then, the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.
Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. In the first place, Descartes’s conception of philosophy was very different from our own. The term “philosophy” in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes’s reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by “metaphysics” he meant inquiries into God and the soul and, generally, all the first things to be discovered by philosophizing. Yet, while there was much in this view that united heaven and earth in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien from the world of everyday life. There was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human.
These foundational explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.
A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s “Principia Mathematica” in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.
Thus the view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes became a central preoccupation in Western intellectual life. The tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, or a void, that can be neither bridged nor reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and collections of discrete atomized parts made up wholes. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God, and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which “solved the riddle of the universe,” only to replace it with another riddle: the riddle of itself. Yet we discover the “certain principles of physical reality,” said Descartes, “not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth.” Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally “revealed” truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the “hidden ontology of classical epistemology.” Descartes’s legacy lingers in the widespread conviction that science does not provide a “place for man” or for all that we know as distinctly human in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, James, and, indeed, most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The unity in question is the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James's well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and tell to each a word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. Nevertheless, this thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's, and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to appreciate this now, but the idea that no system of material components could merge into a unified consciousness had a strong intuitive appeal for a long time.
The unity of consciousness was in addition central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories’. In this argument, boiled down to its essentials, Kant claims that in order to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must be able to apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant, and of relational concepts he thought the concept of cause and effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause-and-effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on “the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Although the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being regarded as either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states; in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done, and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future, to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of person that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Even so, given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
The proposed account would be on a par with other noted examples of the deflationary strategy. If there were a straightforward explanation, in terms of the semantics of the first-person pronoun, of what makes “self” contents immune to error through misidentification, then it would seem fair to say that the problem of self-consciousness had been dissolved, at least as much as solved.
This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. That is to say, the redundancy theory, or deflationary view of truth, claims that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence ‘redundancy’), and (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim translates as ‘(∀p)(∀q)((p ∧ (p → q)) → q)’, where no notion of truth is used. It is supposed in classical (two-valued) logic that each statement has one of these values, and not both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains, the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, as may the issue of whether falsity is the only way of failing to be true. On this view, if a language is provided with a truth definition in accordance with the semantic theory of truth, that is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or truth as shared across different languages.
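The quantified rendering of ‘all logical consequences of true propositions are true’ can likewise be verified without ever invoking a truth predicate; here is a minimal Lean 4 sketch (the theorem name is our own):

```lean
-- "(∀p)(∀q)((p ∧ (p → q)) → q)": the generalization holds as a
-- theorem of propositional logic, with no truth predicate in sight.
theorem consequences_hold (p q : Prop) (h : p ∧ (p → q)) : q :=
  h.2 h.1
```

This is the deflationist's point in miniature: the work the word ‘true’ appeared to do is done entirely by the quantifiers and connectives.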
The view is similar to that of the disquotational theory of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science aims to have it that whenever it holds that 'p', then p; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.
It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: It is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind otherwise than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been offered by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, which is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since agency entails neither linguistic ability nor conscious belief. The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't', b(p) must be a belief that 'x' has at 't'. Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food at the time it has it.
A belief with this content is, of course, the subjective belief whose natural linguistic expression would be "I am facing food now." Yet a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provoke, and only into making me eat, and only then. That is what makes my belief refer to me and to when I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any "sense" of "I" or "now," to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
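The equation of truth conditions with utility conditions described above can be put as a toy model. This is a minimal sketch; the names `World`, `act`, and `desire_satisfied` are illustrative inventions, not Mellor's own formalism:

```python
# Toy sketch of Mellor's utility-conditions account of subjective belief.
# A belief is modelled as a causal function from desires to actions; its
# content is fixed by the condition under which the caused action satisfies
# the desire (the utility condition), which Mellor equates with its truth
# condition.
from dataclasses import dataclass


@dataclass(frozen=True)
class World:
    # Maps (creature, time) -> whether food is in front of that creature then.
    food_in_front_of: dict


def act(believes_facing_food: bool, hungry: bool) -> str:
    """The belief b(p), conjoined with the desire for food, causes eating."""
    return "eat" if (believes_facing_food and hungry) else "do nothing"


def desire_satisfied(world: World, creature: str, t: int, action: str) -> bool:
    # Eating satisfies the hunger only if there actually is food in front
    # of the creature at that time: this is the utility condition of b(p).
    return action == "eat" and world.food_in_front_of[(creature, t)]


w = World(food_in_front_of={("x", 0): True, ("x", 1): False})
assert desire_satisfied(w, "x", 0, act(True, True))      # food there: satisfied
assert not desire_satisfied(w, "x", 1, act(True, True))  # no food: frustrated
```

The point of the sketch is that nothing in it requires the creature to represent itself or the time: causal contiguity (the belief and the hunger belonging to the same creature at the same moment) is what ties the belief's content to "me, now."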
Causal contiguity, on this explanation, is why no internal representation of the self is required, even at what other philosophers have called the subpersonal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: "natura non facit saltum," nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. For Hume, too, the contiguity of events is an important element in our interpreting their conjunction as causal.
Among the advocates of the functionalist point of view are Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or "realization" of the program the machine is running. The principal advantages of functionalism include its fit with the way we come to know of mental states, both in ourselves and in others, via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.
Confronted with this range of putatively self-conscious cognitive states, one might assume that there is a single ability presupposed by them all. This is my ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency: this involves concepts and descriptions that apply equally to myself and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain "I"-thoughts.
Either both subject and object, mind and matter, are real, or both are unreal, imaginary. The assumption of a merely illusory subject or illusory object leads to dead ends and absurdities. It would entail an extreme form of skepticism, wherein everything is relative or subjective and nothing could be known for sure. This is not only devastating for the human mind, but also most ludicrous.
Does this leave us with the only option that both subject and object are alike real? That would again create a real dualism, which, as we realized, is only created in our mind. So, what part of this dualism is not real?
To answer this, we have first to inquire into the meaning of the term "real." Reality comes from the Latin word "realitas," which could be literally translated as "thing-hood." But "res" does not only have the meaning of a material thing; it can have a lot of different meanings in Latin, most of which have little to do with materiality, e.g., affairs, events, business, a coherent collection of any kind, a situation, and so on. Meanings of this kind are always subjective, and therefore related to the way of thinking and feeling of human beings. Outside of the realm of human beings, reality has no meaning at all. Only in the context of conscious and rational beings does reality become something meaningful. Reality is the whole of human affairs insofar as these are related to the world around us. Reality is never the bare physical world, without the human being. Reality is the totality of human experience and thought in relation to an objective world.
Now this is the next aspect we have to analyse. Is this objective world, which we encounter in our experience and thought, something that exists on its own, or is it dependent on our subjectivity? That the subjective mode of our consciousness affects our perceptions of the objective world is conceded by most scientists. Nevertheless, they assume a real and objective world that would exist even without a human being alive or observing it. One way to handle this problem is the Kantian solution of the "thing-in-itself," which is inaccessible to our mind because of the mind's inherent limitations. This does not help us very much; it just posits some undefinable entity outside of our experience and understanding. Hegel, on the other hand, denied the inaccessibility of the "thing-in-itself" and thought that knowledge of the world as it is in itself is attainable, but only by "absolute knowing," the highest form of consciousness.
One of the most persuasive proofs of an independent objective world is the following thesis from science: if we put a camera into a landscape where no human beings are present, leave the place and let the camera take some pictures automatically on a timer, and come back some days later to develop the pictures, we will find the same picture of the landscape as if we had taken it ourselves. Common sense tells us the same: when we wake up in the morning, it is highly probable, even certain, that we find ourselves in the same environment, without changes, without things having left their places uncaused.
Is this empirical argument sufficient to persuade even the most sceptical thinker that there is an objective world out there? Hardly. If a sceptic nonetheless tries to uphold the position of a solipsistic monism, the above-mentioned argument carries no force, since the objects out there, camera and pictures included, could themselves be taken for subjective mental constructs. Not even Berkeley assumed such an extreme position. His immaterialism was based on the presumption that the world around us is the object of God's mind; that is, all the objects are ideas in a universal mind. This is more persuasive. We could even close the gap between the religious concept of "God" and the philosophical concept by relating both of them to the modern quantum-physical concept of the vacuum. All have one thing in common: there must be an underlying reality which contains and produces all the objects. This idea of an underlying reality is, interestingly enough, a continuous line of thought throughout the history of mankind. Almost every great philosopher and every great religion has assumed some kind of supreme reality. I deal with this idea in my historical account of mind's development.
We are still stuck with the problem of subject and object. If we assume that there may be an underlying reality, neither physical nor mental, neither object nor subject, but producing both aspects, we end up with the identity of subject and object. So long as there is only this universal "vacuum," nothing is yet differentiated; everything is one and the same. By a dialectical process of division, or by random fluctuations of the vacuum, elementary forms are created, which develop into more complex forms and finally into living beings with both a mental and a physical aspect. The only question to answer is how these two aspects were produced and developed. Maybe there is an infinite number of aspects, of which only two are visible to us, as Spinoza postulated. Science conceives the whole physical world, human beings included, to have evolved gradually from an original vacuum state of the universe (a singularity). So has mind just popped into the world at some time in the past, or has mind emerged from the complexity of matter? The latter is not sustainable, and if mind does not evolve out of matter, there must either have been a concomitant evolution of mind and matter, or matter has evolved whereas mind has not. Since both are aspects of one reality, both are alike significant, but this leaves us with the possibility that the other aspect, mind, has different attributes and qualities. This can be supported empirically: we do not believe that our personality is something material, that our emotions, our love and fear, are of a physical nature. The qualia and properties of consciousness are completely different from the properties of matter as science has defined it. By the very nature and essence of each aspect, we can therefore assume a different dialectical movement for each.
Whereas matter is, by the very nature of its properties, bound to evolve gradually and to exist in perpetual movement and change, mind, by the very nature of its own properties, is bound to a different evolution and existence. Mind as such has not evolved. The individualized form of mind in the human body, that is, the subject, can change, although in different ways than matter changes. Both aspects have their own sets of laws and patterns. Since mind is also non-local, it comprises all individual minds. Actually, there is only one consciousness, which is only artificially split into individual minds because of its connection with brain-organs, which are the means of manifestation and expression for consciousness. Both aspects are interdependent and together constitute the world and the beings as we know them.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than, say, in the Chinese or Babylonian cultures. This was partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge and allowed it wider cultural accessibility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. But it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales had apparently reached this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view the "essences" underlying and unifying physical reality as if they were "substances."
Nonetheless, the belief that the mind of God as the Divine Architect permeates the workings of nature was a guiding principle of scientific thought for Johannes Kepler, and most contemporary physicists would probably feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. "Physical laws," wrote Kepler, "lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life."
The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into "customary points of view and forms of perception." The framers of classical physics derived, like the rest of us, their "customary points of view and forms of perception" from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.
A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: "We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts."
It is time for the religious imagination and the religious experience to engage the complementary truths of science, filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. One is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in an ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. And the ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.
Our frame of reference is the relationship between mind and world, with its defining features and fundamental preoccupations. There is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of that relationship. The essential point of attention is "consciousness," which remains the central subject of our study.
But at the end of this sometimes laborious journey we arrive at a conclusion that should make the trip worthwhile: there is nothing in contemporary physics or biology that compels belief in the stark Cartesian division between mind and world that some have rather aptly described as "the disease of the Western mind." Let us first consider the legacy in Western intellectual life of that stark division, sanctioned by René Descartes.
Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. In the first place, Descartes's conception of philosophy was very different from our own. The term "philosophy" in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics and medicine. Descartes's reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by "metaphysics" he meant inquiries into God and the soul and generally all the first things to be discovered by philosophizing. Yet he was quick to realize that, although this view abolished the division between heaven and earth and united the universe in a shared and communicable frame of knowledge, it presented us with a picture of physical reality that was totally alien to the world of everyday life. There was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human.
These fundamental explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.
A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's "Principia Mathematica" in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central principle of scientific knowledge.
The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.
The view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes thus became a central preoccupation in Western intellectual life. And the tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void that cannot be bridged or reconciled.
In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and wholes were made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God, and that doing physics was a form of communion with these ideas.
The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which "solved the riddle of the universe," but only to replace it by another riddle: the riddle of itself. We discover the "certain principles of physical reality," said Descartes, "not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth." Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally "revealed" truths, and it was this seventeenth-century metaphysical presupposition that became, in the history of science, what is termed the "hidden ontology of classical epistemology." This legacy of Descartes lingers in the widespread conviction that science does not provide a "place for man," or for all that we know as distinctly human, in subjective reality.
The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, and James, and in most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.
A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: when I consider the mind, which is to say myself insofar as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.
Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The basis of the argument is the unified consciousness that I have of myself.
Here is another, more elaborate argument based on unified consciousness. The conclusion will be that any system of components acting in concert could never achieve unified consciousness. William James's well-known version of the argument starts as follows: take a sentence of a dozen words, take twelve men, and give one word to each. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed; therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).
Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be difficult for us to recapture this now, but the idea that no system of material components acting in concert could achieve unified consciousness had a strong intuitive appeal for a long time.
The unity of consciousness was, in addition, central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories’. In this argument, boiled down to its essentials, Kant claims that in order to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we can do, we must be able to apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.
It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on "the secure path of a science.” The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.
Consciousness may possibly be the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness seems to be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Is it possible for there to be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the “I,” or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness, and he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.
The nature of conscious experience has been the largest single obstacle to physicalism, behaviourism, and functionalism in the philosophy of mind: these are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: we may make progress by breaking the subject into different skills, and by recognizing that rather than a single self or observer we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.
A fundamental philosophical topic both for its central place in any theory of knowledge, and its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of “sensible qualities”: colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received (much of this complexity has been revealed by the difficulty of writing programs enabling computers to recognize quite simple aspects of the visual scene). The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings, and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like sense data or percepts exacerbates the tendency.
But once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there now seems little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
If one is without ideas, one is without concepts, and, likewise, if one is without concepts, one is without ideas. Idea (Gk., visible form) is a notion stretching all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as an independent object of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they give rise to many problems of interpretation; between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or, in Locke’s terms, whatever the mind may be employed about in thinking. Looked at that way, they seem to be inherently transient, fleeting, and unstable private presences. On the other hand, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato’s theory of “Forms” is a celebration of the objective and timeless existence of ideas as concepts, and in his hands ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.
Together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of images, this was developed by Locke, Berkeley, and Hume into a full-scale view of the understanding as the domain of images, although they were all aware of anomalies that were later regarded as fatal to this doctrine. The defects in the account were exposed by Kant, who realized that the understanding needs to be thought of more in terms of rules and organized principles than of any kind of copy of what is given in experience. Kant also recognized the danger of the opposite extreme (that of Leibniz) of failing to connect the elements of understanding with those of experience at all (Critique of Pure Reason).
It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creations of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of indeterminacy in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.
To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term “idea” was formerly used in the same way, but is avoided because of its association with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. Frege regarded predicates as incomplete expressions, in the same way that an expression for a function, such as sine . . . or log . . ., is incomplete. Predicates refer to concepts, which are themselves “unsaturated,” and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
Mental states have contents: a belief may have the content that I will catch the train; a hope may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something: a particular object, or property, or relation, or other entity.
Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Mary Smith, or as the person located in a certain room now. More generally, concepts “c” and “d” are distinct if a thinker can rationally believe that something is such-and-such under “c” without believing that it is such-and-such under “d.” As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by “that . . .” clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; none the less, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen’s Pictures, was a spy: we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person’s conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not that conception is correct, it is quite intelligible for someone to reject it by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy, and perhaps even some of our contemporaries, are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought “I think,” containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of the two theories differ, each is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how a concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
A fundamental question for philosophy is: what individuates a given concept, that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept “and” is individuated by this condition: it is the unique concept “C” to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses “A” and “B,” “ACB” can be inferred; and from any premiss “ACB,” each of “A” and “B” can be inferred. Again, a relatively observational concept such as “round” can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept that are not based on perception to those that are. An account that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
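The two inference forms said to individuate the concept “and” are just conjunction introduction and elimination, and they can be written out formally. The following is an illustrative sketch in Lean notation, not part of the original text:

```lean
-- Introduction: from any two premisses A and B, "ACB" can be inferred.
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb

-- Elimination: from any premiss "ACB", each of A and B can be inferred.
example (A B : Prop) (h : A ∧ B) : A := h.left
example (A B : Prop) (h : A ∧ B) : B := h.right
```

On the possession-condition view, a thinker counts as possessing “and” just in case he finds exactly these transitions primitively compelling.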
A possession condition for a particular concept may actually make use of that concept; the possession condition for “and” does not. We can also expect to use relatively observational concepts in specifying the kinds of experiences that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question, as such, within the content of the attitudes attributed to the thinker in the possession condition; otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.
Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers (there are 0 so-and-so’s, there is 1 so-and-so, . . . ); and the family consisting of the concepts “belief” and “desire.” Such families have come to be known as “local holisms.” A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such a condition involving the thinker, C1, and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated, and the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. It has also been argued from intuitive cases that even though a thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker’s social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.
Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging “That man is bald”: it does not by itself give him good reason for judging “Rostropovich is bald,” even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . . ) which makes the practices of judgement and inference in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgemental practices in the possession condition yield true judgements, or truth-preserving inferences.
Despite the fact that the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being treated as either a myth or something that we cannot and do not need to study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states-in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.
The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.
To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists speak of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.
It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know of many properties that I have and much of what is happening to me, at both the physical and mental levels. I also know things about my past: things I have done, who I have been, and other people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future, to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thinking about other people and other objects.
When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.
Even so, reflecting on the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose: my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.
If there were a straightforward explanation, grounded in the semantics of the first-person pronoun, of what makes “self” contents immune to error through misidentification, then it would seem fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.
This proposed deflationary account of self-consciousness would be on a par with other noted deflationary accounts, such as the redundancy theory of truth. The redundancy (or deflationary) theory of truth claims that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence ‘redundancy’), and (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim translates as (∀p)(∀q)((p & (p → q)) → q), in which no notion of truth appears.
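The generalization in point (2), that all logical consequences of true propositions are true, can be checked mechanically once it is rendered without a truth predicate. The following is an illustrative sketch in Lean notation, not part of the original text; note that no ‘. . . is true’ predicate appears anywhere in it:

```lean
-- "All logical consequences of true propositions are true",
-- rendered with quantification over propositions and no truth predicate:
example : ∀ (p q : Prop), (p ∧ (p → q)) → q :=
  fun _ _ h => h.2 h.1
```

This is exactly the deflationist’s point: the generalization is expressed and proved using only the propositions themselves, with the truth predicate serving merely as a device of generalization in ordinary speech.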
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’ or ‘truth is a norm governing discourse’. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wishes it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when not-p.
It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)
So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference is due to Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, that is, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since “agency entails neither linguistic ability nor conscious belief.” The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. Consider a creature ‘x’ who is hungry and has a desire for food at time ‘t’. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of ‘x’ at that time. Moreover, for b(p) to cause ‘x’ to eat what is in front of it at ‘t’, b(p) must be a belief that ‘x’ has at ‘t’. Therefore, the utility/truth condition of b(p) is that the creature that has this belief faces food at the time it has the belief.
And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be “I am facing food now.” On the other hand, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
For in order to believe ‘p’, I need only be disposed to eat what I face if I feel hungry: A disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only by making me eat, and only then. That is what makes my belief refer to me and to the time at which I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any “sense” of “I” or “now,” to fix the reference of my subjective beliefs: Causal contiguity fixes them for me.
Causal contiguity, according to this explanation, may well be why no internal representation of the self is required, even at what other philosophers have called the sub-personal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.
The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.
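Mellor's equation of truth conditions with utility conditions can be sketched in a toy model. This is illustrative only and not Mellor's own formalism: the function names, the dictionary representing the world, and the string labels are all invented for the example of the hungry creature described above.

```python
# Toy sketch of the functionalist idea that a belief is a causal function
# from desires to actions, and that its content is fixed by its utility
# conditions: the conditions under which the action it causes satisfies
# the conjoined desire. All names here are invented for illustration.

def belief_b(desire):
    """Causal role of the token belief b/(p): conjoined with hunger,
    it causes the creature to eat what is in front of it."""
    if desire == "food":
        return "eat what is in front of you"
    return None  # without the conjoined desire the belief is causally idle

def utility_condition(world):
    """The condition under which the caused action satisfies the desire:
    that there is food in front of the believer at that time."""
    return world.get("food_in_front", False)

# The belief plus the desire for food cause the eating behaviour ...
action = belief_b("food")
# ... and that behaviour satisfies the desire only in worlds meeting the
# utility condition, which the account equates with the belief's truth
# condition: "I am facing food now."
print(action)
print(utility_condition({"food_in_front": True}))
print(utility_condition({"food_in_front": False}))
```

The point the sketch makes vivid is that nothing in the belief's causal role requires an inner symbol for the self or the time: the believer and the time enter only as the locus at which desire, belief, and action are causally contiguous.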
The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: “natura non facit saltum,” nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. For Hume, by contrast, the contiguity of events is an important element in our interpretation of their conjunction as causal.
The founding advocates of the functionalist point of view are Putnam and Sellars, and its guiding principle is that we can define mental states by a triple of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or “realization” of the program the machine is running. The principal advantage of functionalism is its fit with the way we come to know of mental states, both our own and those of others: via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be “variably realized” in causal architectures, just as much as they can be in different neurophysiological states.
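The software analogy can be made concrete with a minimal machine table, the standard device of machine functionalism. On this picture a mental state is exhausted by its causal role, i.e. by how (state, input) pairs map to (next state, output) pairs, regardless of the hardware that realizes the table. The state names and transitions below are invented purely for illustration.

```python
# A minimal machine-table sketch of the functionalist software analogy.
# The states and transitions are invented for illustration.

TRANSITIONS = {
    # (current state, input)   : (next state, behavioural output)
    ("content", "sees food")   : ("content", "ignore"),
    ("hungry",  "sees food")   : ("content", "eat"),
    ("content", "time passes") : ("hungry",  "do nothing"),
    ("hungry",  "time passes") : ("hungry",  "search"),
}

def step(state, stimulus):
    """One causal transition of the system."""
    return TRANSITIONS[(state, stimulus)]

# Whatever realizes this table -- neurons, silicon, gears -- counts as
# having the state "hungry" on this view, since the state is defined
# entirely by its role in the table.
state, output = step("hungry", "sees food")
print(state, output)
```

Both the generosity objection and the parochialism objection can be read off the sketch: anything realizing the table counts as minded (perhaps too generously), while a creature with a differently wired table might be denied mental similarity (perhaps too parochially).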
In logic and mathematics a function is a relation that associates members of one class “X” with some unique member “y” of another class “Y.” The association is written y = f(x); the class “X” is called the domain of the function, and “Y” its range. Thus “the father of x” is a function whose domain includes all people and whose range is the class of male parents, but the relation “son of x” is not a function, because a person can have more than one son. “Sine x” is a function from angles to real numbers; the area of a circle, πx²/4, is a function of its diameter x; and so on. Functions may take sequences x1, . . ., xn as their arguments, in which case they may be thought of as associating a unique member of “Y” with any ordered n-tuple as argument. Given the equation y = f(x1, . . ., xn), x1, . . ., xn are called the independent variables, or arguments, of the function, and “y” the dependent variable or value. Functions may be many-one, meaning that different members of “X” may take the same member of “Y” as their value, or one-one, when to each member of “X” there corresponds a distinct member of “Y.” A function from “X” to “Y” is also called a mapping from “X” to “Y,” written f: X ➝ Y. If the function is such that (1) if x, y ∈ X and f(x) = f(y), then x = y, then the function is an injection from X to Y. If also (2) if y ∈ Y, then (∃x)(x ∈ X & y = f(x)), then the function is a surjection of “X” onto “Y.” A bijection is both an injection and a surjection, where a surjection is any function whose domain is “X” and whose range is the whole of “Y.” Since functions are relations, a function may also be defined as a set of ordered pairs.
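On finite sets, conditions (1) and (2) can be checked mechanically, which makes the definitions vivid. The following is a small sketch with invented function names; it simply tests the two clauses above.

```python
def is_injection(f, X):
    """Clause (1): distinct members of X always get distinct values."""
    values = [f(x) for x in X]
    return len(values) == len(set(values))

def is_surjection(f, X, Y):
    """Clause (2): every y in Y is f(x) for some x in X."""
    return {f(x) for x in X} == set(Y)

def is_bijection(f, X, Y):
    """A bijection is both an injection and a surjection."""
    return is_injection(f, X) and is_surjection(f, X, Y)

square = lambda x: x * x
X, Y = {0, 1, 2}, {0, 1, 4}

print(is_bijection(square, X, Y))        # squaring is one-one on {0, 1, 2}
print(is_injection(square, {-1, 0, 1}))  # but many-one once -1 is included
```

The last line illustrates the many-one case in the text: squaring sends both -1 and 1 to the same member of the range.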
One of Frege’s logical insights was that a concept is analogous to a function, and a predicate analogous to the expression for a function (a functor). Just as “the square root of x” takes you from one number to another, so “x is a philosopher” refers to a function that takes us from a person to a truth-value: True for values of “x” who are philosophers, and false otherwise.
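Frege's analogy can be put in miniature: a predicate denotes a function from objects to truth-values, just as “the square root of x” denotes a function from numbers to numbers. The sample set of philosophers below is invented for illustration.

```python
# The concept *philosopher* treated as a function to truth-values,
# per Frege's analogy. The sample set is invented for illustration.

PHILOSOPHERS = {"Frege", "Mellor", "Evans"}

def is_a_philosopher(x):
    """Maps each object to the truth-value True or False."""
    return x in PHILOSOPHERS

print(is_a_philosopher("Frege"))   # the True case
print(is_a_philosopher("Planck"))  # the False case
```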
Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though in some ways the latter is the position’s weaker point, most of the criticism has been directed at the former, and much of it has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus much anti-foundationalist artillery has been directed at the “myth of the given”: the idea that something is given to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is that whatever justifies a belief does so only if the subject is justified in supposing that the putative justifier has what it takes to do so. Hence, since the justification of the original belief depends on the justification of the higher-level belief just specified, the justification is not immediate after all. In reply, we may lack adequate support for any such higher-level requirement for justification; and if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.
The reflexive considerations initiated by functionalism suggest that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in co-ordination with each other. The sub-systems may be envisaged as homunculi, or small, relatively stupid agents. The archetype is a digital computer, where a battery of switches capable of only one response (on or off) can make up a machine that can play chess, write dictionaries, and so on.
Nonetheless, confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that is presupposed by them all. This is the ability to think about myself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.
Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency. This involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain “I”-thoughts.
What is an “I”-thought? Obviously, an “I”-thought is a thought that involves self-reference: I can think an “I”-thought only by thinking about myself. Equally obviously, though, this cannot be all that there is to say on the subject. I can think thoughts that involve self-reference but are not “I”-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknownst to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an “I”-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If ‘A’ is that unfortunate person, then there is a true identity statement of the form I = A, but since I do not know that this identity holds, I cannot be ascribed the thought that I will deserve everything I get. And so I am not thinking genuine “I”-thoughts, because one cannot think a genuine “I”-thought if one is ignorant that one is thinking about oneself. So it is natural to conclude that “I”-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun “I,” because one cannot use the first-person pronoun without knowing that one is thinking about oneself.
This is still not quite right, however, because thought contents can be specified in two ways: directly or indirectly. The claim is that all the cognitive states under consideration presuppose the ability to think about oneself. This is not merely something they have in common; it is what underlies them all. We can now see in more detail what this suggestion amounts to. The claim is that what makes all these cognitive states modes of self-consciousness is the fact that they all have contents that can be specified either directly by means of the first-person pronoun “I” or indirectly by means of the indirect reflexive pronoun “he,” such that they are first-person contents.
The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person contents, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: There are two different cases in the use of the word “I” (or “my”), which I call “the use as object” and “the use as subject.” Examples of the first kind of use are these: “My arm is broken,” “I have grown six inches,” “I have a bump on my forehead,” “The wind blows my hair about.” Examples of the second kind are: “I see so-and-so,” “I try to lift my arm,” “I think it will rain,” “I have a toothache.” (Wittgenstein 1958)
The explanation Wittgenstein gives of the distinction hinges on whether or not the judgements involve identification: One can point to the difference between these two categories by saying that the cases of the first category involve the recognition of a particular person, and there is in these cases the possibility of an error, or rather, the possibility of an error has been provided for. It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour’s. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have a toothache. To ask “Are you sure that it is you who have pains?” would be nonsensical (Wittgenstein 1958).
Wittgenstein is drawing a distinction between two types of first-person contents. The first type, which he describes as involving the use of “I” as object, can be analysed in terms of more basic propositions. If the thought “I am B” involves such a use of “I,” then we can understand it as a conjunction of the following two thoughts: “a is B” and “I am a.” We can term the former a predication component and the latter an identification component (Evans 1982). The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted. One can be quite correct in predicating that someone is B, even though mistaken in identifying oneself as that person.
To say that a statement “a is B” is subject to error through misidentification relative to the term “a” means that the following is possible: The speaker knows some particular thing to be “B,” but makes the mistake of asserting “a is B” because, and only because, he mistakenly thinks that the thing he knows to be “B” is what “a” refers to (Shoemaker 1968).
The point, then, is that in the second category of cases one cannot be mistaken about who is being thought about. In one sense, however, Shoemaker’s criterion of immunity to error through misidentification relative to the first-person pronoun (hereafter simply “immunity to error through misidentification”) is too restrictive. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them, by formulating it in terms of justification rather than knowledge.
The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any points of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. To take an example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.
On the adjusted definition, to say that a statement “a is b” is subject to error through misidentification relative to the term “a” means that the following is possible: The speaker is warranted in believing that some particular thing is “b,” because his belief is based on an appropriate evidence base, but he makes the mistake of asserting “a is b” because, and only because, he mistakenly thinks that the thing he justifiably believes to be “b” is what “a” refers to.
First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can discriminate between first-person contents with an identification component and those without such a component. For present purposes, these different formulations each pick out the same classes of first-person contents, although in interestingly different ways.
All first-person contents subject to error through misidentification contain an identification component of the form “I am a,” and that identification component itself employs the first-person pronoun. Of that employment we can ask in turn: does it or does it not have an identification component? Clearly, then, on pain of an infinite regress, at some stage we will have to arrive at an employment of the first-person pronoun that does not presuppose an identification component. The moral, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.
It is also important to stress that any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: It is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed (Dummett 1978).
Dummett goes on to draw the clear methodological implications of this view of the nature of thought: We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language rather than to anything hidden in the mind, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held, which may be called the Thought-Language Principle.
As it stands, the distinction turns on two different roles that the pronoun “he” can play in such oratio obliqua clauses. On the one hand, “he” can be employed to report a proposition that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation “he” is functioning as a quasi-indicator, and when it does so it is written “he*.” Others have described this as the indirect reflexive pronoun. When “he” is functioning as an ordinary indicator, it picks out an individual in such a way that the person named just before the clause need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.
There is an obvious but central question that arises in considering the relation between the content of thought and the content of language, namely, whether there can be thought without language, as theories like the functionalist theory of self-reference maintain. The conception of thought and language that underlies the Thought-Language Principle is clearly opposed to the proposal that there might be thought without language, but it is important to realize that neither the principle nor the considerations adverted to by Dummett directly yield the conclusion that there cannot be thought in the absence of language. According to the principle, the capacity for thinking particular thoughts can only be analysed through the capacity for linguistic expression of those thoughts. On the face of it, however, this does not yield the claim that the capacity for thinking particular thoughts cannot exist without the capacity for their linguistic expression.
That thoughts are wholly communicable does not entail that thoughts must always be communicated, which would be an absurd conclusion. Nor does it appear to entail that there must always be a possibility of communicating thoughts in any sense in which this would be incompatible with the ascription of thoughts to a non-linguistic creature. There is, after all, a real distinction between thoughts being wholly communicable and its being actually possible to communicate any given thought. Without collapsing that distinction, there seems no way of getting from a thesis about the necessary communicability of thought to a thesis about the impossibility of thought without language.
A subject has self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together is that the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it be composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.
First, let us take a step back from primitive self-consciousness to consider the account of self-identifying first-person thoughts given in Gareth Evans’s The Varieties of Reference (1982). Evans places considerable stress on the connection between the form of self-consciousness that he is considering and a grasp of the spatial nature of the world. As far as Evans is concerned, the capacity to think genuine first-person thoughts implicates a capacity for self-location, which he construes in terms of a thinker’s capacity to conceive of himself as an element of the objective order. Though one need not endorse the particular gloss that Evans puts on this, the general idea is very powerful. The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Evans tends to stress a dependence in the opposite direction between these notions:
The very idea of a perceivable, objective, spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive (Evans 1982).
But the main thrust of his work is very much that the dependence holds equally in the opposite direction.
It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature’s self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans’s own gloss on how a subject’s self-awareness is awareness of himself as a spatial being involves the subject’s mastery of a simple theory of perception explaining how the world makes his perceptions as they are, with principles like “I perceive such and such; such and such holds at p; so (probably) I am at p” and “I am at p; such and such does not hold at p; so I cannot really be perceiving such and such, even though it appears that I am” (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of a specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.
But it must not be forgotten that a vital role in this is played by the subject’s own actions and movements. Appreciating the spatiality of the environment and one’s place in it is largely a function of grasping one’s possibilities for action within the environment: Realizing that if one wants to return to a particular place from here one must pass through these intermediate places, or that if there is something there that one wants, one should take this route to obtain it. That this is something that Evans’s account could potentially overlook emerges when one reflects that a simple theory of perception of the form described could be possessed and deployed by a subject that moves only passively. The notion of a non-conceptual point of view, in contrast, incorporates the dimension of action by emphasizing the particularities of navigation.
Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. Think here particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver; such affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.
In short, any learned cognitive ability must be constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so, if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.
Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting that the theory of non-conceptual content can solve the paradox of self-consciousness. This is a more substantial task. The methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). This is not to allow that every instance of intentional behaviour where there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way, and this offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate that legitimacy would be the existence of forms of behaviour in pre-linguistic or non-linguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.
Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment-what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication of this is that a proper understanding of the richness of any given conception of the self requires taking into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic. The distinction between self and environment that they offer is a distinction that is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing distinguishable over time from an environment that also endures over time.
The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these elements must be considered together emerges from the point just made about the richness of the awareness of the environment from which the self is being distinguished. No creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment, and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it be composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.
The general idea here is a very powerful one: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world.
The very idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive.
One possible reaction to the paradox of self-consciousness is that it arises only because unrealistic and ultimately unwarranted requirements are being placed on what is to count as genuinely self-referring first-person thought. Support for such an objection will be found in those theories that attempt to explain first-person thoughts in a way that does not presuppose any form of internal representation of the self or any form of self-knowledge. The paradox arises because mastery of the semantics of the first-person pronoun is available only to creatures capable of thinking first-person thoughts whose contents involve reflexive self-reference and thus seem to presuppose mastery of the first-person pronoun. If, though, it can be established that the capacity to think genuinely first-person thoughts does not depend on any linguistic and conceptual abilities, then arguably the problem of circularity will no longer have purchase.
There is an account of self-reference and genuinely first-person thought that can be read in a way that poses just such a direct challenge to the account of self-reference underpinning the paradox. This is the functionalist account. On the functionalist view, reflexive self-reference is a completely non-mysterious phenomenon susceptible to a functional analysis. Reflexive self-reference is not dependent upon any antecedent conceptual or linguistic skills. Nonetheless, the functionalist account of reflexive self-reference is deemed to be sufficiently rich to provide the foundation for an account of the semantics of the first-person pronoun. If this is right, then the circularity at the heart of the paradox can be avoided.
The circularity problems at the root of the paradox arise because mastery of the semantics of the first-person pronoun requires the capacity to think first-person thoughts whose natural expression is by means of the first-person pronoun. It seems clear that the circle will be broken if there are forms of first-person thought more primitive than these, forms that do not require linguistic mastery of the first-person pronoun. What creates the problem of capacity circularity is the thought that we need to appeal to first-person contents in explaining mastery of the first-person pronoun, combined with the thought that any creature capable of entertaining first-person contents will have mastered the first-person pronoun. So if we want to retain the thought that mastery of the first-person pronoun can only be explained in terms of first-person contents, capacity circularity can only be avoided if there are first-person contents that do not presuppose mastery of the first-person pronoun.
On the other hand, however, it seems to follow from everything said earlier about “I”-thoughts that first-person thought in the absence of linguistic mastery of the first-person pronoun is a contradiction in terms. First-person thoughts have first-person contents, where first-person contents can only be specified in terms of either the first-person pronoun or the indirect reflexive pronoun. So how could such thoughts be entertained by a thinker incapable of reflexive self-reference? How can a thinker who has not mastered the first-person pronoun be plausibly ascribed thoughts with first-person contents? The thought that, despite all this, there are genuine first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a “subjective belief,” that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective belief that he offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and of the reference of the first-person pronoun “I.” These putatively more sophisticated cognitive states are causally derivable from basic subjective beliefs.
Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief: “agency entails neither linguistic ability nor conscious belief” (Mellor 1988). The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief via the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. We can see how this works by considering Mellor’s own example. Consider a creature ‘x’ who is hungry and has a desire for food at time ‘t’. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it, provided there is food in front of ‘x’ at that time. Moreover, for b(p) to cause ‘x’ to eat what is in front of it at ‘t’, b(p) must be a belief that ‘x’ has at ‘t’. For Mellor, therefore, the utility/truth condition of b(p) is that whatever creature has this belief is actually facing food when it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be ‘I am facing food now’. On the other hand, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
What secures the self-reference of belief b(p) is the contiguity of cause and effect. The essence of a subjective belief is that it causes action conjointly with a desire or set of desires, and the relevant sort of conjunction is possible only if it is the same agent at the same time who has both the desire and the belief.
For in order to believe ‘p’, I need only be disposed to eat what I face if I feel hungry, a disposition which causal contiguity ensures that only my simultaneous hunger can provoke, and only into making me eat, and only then.
Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. But it was only when this inheritance from Greek philosophy was wedded to certain essential beliefs about the origin of the cosmos that the paradigm for classical physics emerged.
All the same, a newer logical framework points to the logical conditions for the description and comprehension of experience in quantum physics. While normally referred to as the principle of complementarity, the use of the word ‘principle’ was unfortunate, in that complementarity is not a principle as that word is used in physics. Complementarity is rather a logical framework for the acquisition and comprehension of scientific knowledge, one that discloses a new relationship between physical theory and physical reality that undermines all appeals to metaphysics.
Under the logical conditions for description in quantum mechanics, the two conceptual components of classical causality, space-time description and energy-momentum conservation, are mutually exclusive and can be coordinated only through the limitations imposed by Heisenberg’s indeterminacy principle.
The logical framework of complementarity is useful and necessary when the following requirements are met: (1) when the theory consists of two individually complete constructs; (2) when the constructs preclude one another in a description of the unique physical situation to which they both apply; and (3) when both together constitute a complete description of that situation. When we discover a situation in which complementarity clearly applies, we necessarily confront an imposing limit to our knowledge of that situation. Knowledge can never be complete in the classical sense, because we are unable simultaneously to apply the mutually exclusive constructs that make up the complete description.
Why, then, must we use classical descriptive categories, like space-time descriptions and energy-momentum conservation, in our descriptions of quantum events? If classical mechanics is an approximation of the actual physical situation, it would seem to follow that classical descriptive categories are not adequate to describe this situation. If, for example, quantities like position and momentum are abstractions with properties that are “definable and observable only through their interactions with other systems,” why should we represent these classical categories as if they were actual quantities in physical theory and experiment? The question is rarely discussed, but it carries some formidable implications for the future of scientific thought.
Heidegger's theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls "world-space" is space conceived as an “arena” or “container” for objects. It captures both our ordinary conception of space and theoretical space-in particular absolute space. Chairs, desks, and buildings exist “in” space, but world-space is independent of such objects, much like absolute space “in which” things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatializing conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as "space." This familiarity is what renders the understanding of space (in a "container" metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space-world-space-turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.
Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the "distance between you and me" without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as "distance," "location," etc. must now be redescribed from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.
The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is "functional" or zuhanden, and Heidegger's term for it is "region." The places we work and live-the office, the park, the kitchen, etc.-all have different regions that organize our activities and contextualize “equipment.” My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate “places,” according to the spatiality of the way in which I work. Regions differ from space viewed as a "container"; the latter notion lacks a "referential" organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a "human" characteristic added to a container-like space.
In our activity, how do we specifically stand with respect to functional space? We are not "in" space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression "we exist in space" and the adverbial expression "we exist spatially." He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality-"de-severance" (Ent-fernung) and "directionality" (Ausrichtung).
De-severance describes the way we exist as a process of spatial self-determination by “making things available” to ourselves. In Heidegger's language, in making things available we "take in space" by "making the farness vanish" and by "bringing things close."
We are not simply contemplative beings, but we exist through concretely acting in the world-by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am “taking in space” as I move, continuously making the “farness” of the kitchen “vanish,” as the shifting spatial perspectives are opened as I go along.
This process is also inherently "directional." Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of iced tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.
De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial. Regions "refer" to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places “there” that we have our “here.” We orient ourselves and organize our activities, always within regions that must already be given to us.
Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) the ordinary or "vulgar" conception of time; this is time conceived as Vorhandenheit. (2) world-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) temporal originality or “temporality as such.” The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.
We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, "average" way in which we regard time. It is the “past we forget” and the “future we expect,” all without decisiveness and resolute understanding. Heidegger seems to consider that this mode of temporality is the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and temporal originality.
Dasein's authentic temporality is the "resolute" mode of temporal existence. Authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of his or her own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by the sense of one's realization that he or she is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to the inauthentic or "everyday" mode of time as lacking some primordial quality which authentic temporality possesses.
In contrast, temporal originality is the formal structure of Dasein's temporality itself. In addition to its spatial Being-in-the-world, Dasein also exists essentially as "projection." Projection is oriented toward the future, and this futural orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of "a future that makes present in the process of having been." Heidegger calls the three moments of temporality-the future, the present, and the past-the three ecstases of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.
There are some passages that indicate that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that "temporality first showed itself in anticipatory resoluteness." Elsewhere, he argues that "the ‘time’ which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality." In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is "by far the most common" interpretation of the status of authentic time.
However, I agree with Blattner and Haar that there are far more passages where Heidegger treats temporal originality as distinct from authentic temporality, and as founding both it and Being-in-the-world as well. Here are some examples: Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality. Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care.
Heidegger's conception seems to be that it is because we are fundamentally temporal-having the formal structure of ecstatico-horizonal unity-that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality.
Since Heidegger uses the term "temporality" rather than "authentic temporality" whenever the founding relation is discussed between space and time, I will begin the following analysis by assuming that it is originary temporality that founds Dasein's spatiality. On this assumption two interpretations of the argument are possible, but both are unsuccessful given his phenomenological framework.
I will then consider the possibility that it is "authentic temporality" which founds spatiality. Two interpretations are also possible in this case, but neither will establish a founding relation successfully. I will conclude that despite Heidegger's claim, an equiprimordial relation between time and space is most consistent with his own theoretical framework. I will now evaluate the specific arguments in which Heidegger tries to prove that temporality founds spatiality.
The principal argument appears in the section entitled "The Temporality of the Spatiality that is Characteristic of Dasein." Heidegger begins the section with the following remark: Though the expression `temporality' does not signify what one understands by "time" when one talks about `space and time', nevertheless spatiality seems to make up another basic attribute of Dasein corresponding to temporality. Thus with Dasein's spatiality, existential-temporal analysis seems to come to a limit, so that this entity that we call "Dasein," must be considered as `temporal' `and' as spatial coordinately.
Accordingly, Heidegger asks, "Has our existential-temporal analysis of Dasein thus been brought to a halt . . . by the spatiality that is characteristic of Dasein . . . and Being-in-the-world?" His answer is no. He argues that since "Dasein's constitution and its ways to be are possible ontologically only on the basis of temporality," and since the "spatiality that is characteristic of Dasein . . . belongs to Being-in-the-world," it follows that "Dasein's specific spatiality must be grounded in temporality."
Heidegger's claim is that the totality of regions-de-severance-directionality can be organized and re-organized, "because Dasein as temporality is ecstatico-horizonal in its Being." Because Dasein exists futurally as "for-the-sake-of-which," it can discover regions. Thus, Heidegger remarks: "Only on the basis of its ecstatico-horizonal temporality is it possible for Dasein to break into space."
However, in order to establish that temporality founds spatiality, Heidegger would have to show that spatiality and temporality must be distinguished in such a way that temporality not only shares a content with spatiality but also has additional content as well. In other words, they must be truly distinct and not just analytically distinguishable. But what is the content of "the ecstatic-horizonal constitution of temporality?" Does it have a content above and beyond Being-in-the-world? Nicholson poses the same question as follows: Is it human care that accounts for the characteristic features of human temporality? Or is it, as Heidegger says, human temporality that accounts for the characteristic features of human care, serves as their foundation? The first alternative, according to Nicholson, is to reduce temporality to care: "the specific attributes of the temporality of Dasein . . . would be in their roots not aspects of temporality but reflections of Dasein's care." The second alternative is to treat temporality as having some content above and beyond care: "the three-fold constitution of care stems from the three-fold constitution of temporality."
Nicholson argues that the second alternative is the correct reading. Dasein lives in the world by making choices, but "the ekstasis of temporality lies well prior to any choice . . . so our study of care introduces us to a matter whose scope outreaches care: the ekstases of temporality itself." Accordingly, "What we were able to make clear is that the reign of temporal ekstasis over the choices we make accords with the place we occupy as finite beings in the world."
But if Nicholson's interpretation is right, what would be the content of "the ekstases of temporality itself," if not some sort of purely formal entity or condition such as Kant's "pure intuition?" But this would imply that Heidegger has left phenomenology behind and is now engaging in establishing a transcendental framework outside the analysis of Being-in-the-world, such that this formal structure founds Being-in-the-world. This is inconsistent with his initial claim that Being-in-the-world is itself foundational.
I believe Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past and the present and the future ways of Being-in-the-world. Heidegger's own words support this reading: "as Dasein temporalizes itself, a world is too," and "the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality." He also states that the zuhanden "world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself." In this reading, "temporality temporalizing itself," "Dasein's projection," and "the temporal projection of the world" are three different ways of describing the same "happening" of Being-in-the-world, which Heidegger calls "self-directive."
However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The content of "temporality temporalizing itself" simply is the various openings of regions, i.e., Dasein's "breaking into space." Certainly, as Stroeker points out, it is true that "nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment." But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which had abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.
If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that "because Dasein as temporality is ecstatico-horizonal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly." But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.
However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Heidegger never specifically mentions authentic temporality in this role, but since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would treat authentic temporality as founding the whole spatio-temporal structure of Being-in-the-world. The resoluteness of authentic temporality, arising out of Dasein's own "Being-towards-death," would supply a content to temporality above and beyond everyday involvements.
If spatiality is said to have its foundation in resoluteness, then Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new situation with different significance, if Dasein chooses resolutely to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.
Moreover, one might argue that according to Heidegger one's own grasp of "death" is a uniquely temporal mode of existence, whereas there is no comparably weighty conception involving spatiality. Death is what makes Dasein "stand before itself in its ownmost potentiality-for-Being." Authentic Being-towards-death is a "Being toward a possibility-indeed, toward a distinctive possibility of Dasein itself." One could argue that notions such as "potentiality" and "possibility" are distinctively temporal, nonspatial notions. So "Being-towards-death," as temporal, appears to be much more ontologically "fundamental" than spatiality.
However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were "purely" temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.
Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too voluntaristic or subjectivistic a notion of the self, a self that controls its own Being-in-the-world with respect to its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal "inner self" which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.
Hence, authentic temporality, if construed narrowly as a purely temporal mode, at first appears to be able to found spatiality, but it also commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body-a notion more fully developed by Merleau-Ponty.
The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of "authentic spatiality" in the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born in a particular place, lives in a particular place, and dies in a particular place, all of which it can relate to in an authentic way. The place Dasein lives is not a place of anonymous involvements. The place of Dasein is the "there" where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.
The internal tensions within his theory eventually lead Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in "Building Dwelling Thinking," Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.
But the idea of dwelling is in fact already discussed in Being and Time. In explicating the term "Being-in-the-world," Heidegger explains that the word "in" is derived from "innan"-to "reside," "habitare," "to dwell." The emphasis on "dwelling" highlights the essentially "worldly" character of the self.
Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight was undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930's on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as "that-which-regions (die Gegnet)." The peculiar expression is a re-conceptualization of the notion of "region" as it appeared in Being and Time. Region is given an active character and defined as the "openness that surrounds us" which "comes to meet us." By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that "one needs to understand ‘resolve’ (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions." Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in later writings.
The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out one aspect of the self, temporality, as the foundation for the rest of the structure. All of the existentiales and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine that he later rejected as still excessively subjectivistic.
Meanwhile, it seems natural to combine this close connection with these conclusions by proposing an account of self-consciousness as the capacity to think "I"-thoughts that are immune to error through misidentification, where that immunity derives from the semantics of the first-person pronoun. This would be a redundancy account of self-consciousness: once we have an account of what it is to be capable of thinking "I"-thoughts, we will have explained everything distinctive about self-consciousness. It stems from the thought that what is distinctive about "I"-thoughts is that they are either themselves immune to error through misidentification or rest on further "I"-thoughts that are immune in that way.
Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think "I"-thoughts. This claim derives from the thought that immunity to error through misidentification depends on the semantics of the first-person pronoun.
Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.
The suggestion is that the semantics of the first-person pronoun will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain a capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error through misidentification is the capacity to think the sort of thought whose natural linguistic expression involves the first-person pronoun, where this capacity is given by mastery of the pronoun's semantics. To explain what it is to master the semantics of the first-person pronoun is thereby to explain what it is to think thoughts immune to error through misidentification.
On this view, mastery of the semantics of the first-person pronoun may be construed as the single most important element in a theory of self-consciousness.
A quick objection might be put to a defender of the redundancy or deflationary theory: how can mastery of the semantics of the first-person pronoun make sense of the distinction between first-person contents that are immune to error through misidentification and first-person contents that lack such immunity? This is only an apparent difficulty, however, once one remembers that first-person contents lacking such immunity, because they employ "I" as object, can be broken down into component elements: an identification component and a predication component. It is the identification components of such judgements that mastery of the semantics of the first-person pronoun must be called upon to explain, and identification components are, of course, immune to error through misidentification.
It is also important to stress that the redundancy and deflationary theories of self-consciousness, and indeed any theory of self-consciousness that accords a serious role to mastery of the semantics of the first-person pronoun, are motivated by an important principle that has governed much of the development of analytical philosophy. The principle is that the analysis of thought can only proceed through the philosophical analysis of language. We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing its use. It is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call "self-world dualism." Any subject properly described as self-conscious must be able to register the distinction between himself and the world. Of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.
Consciousness of objects is closely related to sentience and to being awake. It is (at least) a state in which one is informationally and behaviourally responsive to one's immediate environment: the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, 'It is regaining consciousness.' Consciousness of objects is not just any form of informational access to the world; it is knowing about, being conscious of, things in the world.
We are conscious of our representations when we are conscious, not (just) of some object, but of our representations of it: 'I am seeing [as opposed to touching, smelling, tasting] and seeing clearly [as opposed to dimly].' Consciousness of our own representations is the ability to process and act responsively to information about oneself, but it is not just any form of such informational access. It is knowing about, being conscious of, one's own psychological states. In Nagel's famous phrase (1974), when we are conscious of our representations, it is 'like something' to have them. If, as seems likely, there are forms of consciousness that do not involve consciousness of objects, they might consist in consciousness of representations, though some theorists would insist that this kind of consciousness is not of representations either (via representations, perhaps, but not of them).
The distinction just drawn between consciousness of objects and consciousness of our representations of objects may seem similar to Block's (1995) well-known distinction between P- [phenomenal] and A- [access] consciousness. Here is his definition of 'A-consciousness': "A state is A-conscious if it is poised for direct control of thought and action." He tells us that he cannot define 'P-consciousness' in any "remotely non-circular way" but will use it to refer to what he calls "experiential properties," what it is like to have certain states. Our consciousness of objects may appear to be like A-consciousness. It is not; it is a form of P-consciousness. Consciousness of an object is-how else can we put it?-consciousness of the object. Even if consciousness is just informational access of a certain kind (something that Block would deny), it is not just any form of informational access, and we are talking about conscious access here. Recall the idea that it is like something to have a conscious state. Other closely related ideas are that in a conscious state something appears to one, and that conscious states have a 'felt quality'. A term for all this is phenomenology: conscious states have a phenomenology. (Thus some philosophers speak of phenomenal consciousness here.) We could now state the point we are trying to make this way: if I am conscious of an object, then it is like something to have that object as the content of a representation.
Some theorists would insist that this last statement be qualified. While such a representation of an object may provide everything that a representation has to have for its content to be like something to me, they would urge that something more is needed. Different theorists would add different elements. For some, I would have to be aware, not just of the object, but of my representation of it. For others, I would have to attend to the object or the representation in a certain way. We cannot go into this controversy here; we are merely making the point that consciousness of objects is more than Block's A-consciousness.
Consciousness of self involves, not just consciousness of states that it is like something to have, but consciousness of the thing that has them, i.e., of oneself. It is the ability to process and act responsively to information about oneself, but again it is more than that. It is knowing about, being conscious of, oneself, indeed of oneself as oneself. Consciousness of oneself in this way is often called consciousness of self as the subject of experience. Consciousness of oneself as oneself seems to require indexical ability, and a special indexical ability at that: not just an ability to pick something out but an ability to pick something out as oneself. Human beings have such self-referential indexical ability. Whether any other creatures have it is controversial. The leading nonhuman candidates would be chimpanzees and other primates that have been taught enough language to use first-person pronouns.
The literature on consciousness sometimes fails to distinguish consciousness of objects, consciousness of one's own representations, and consciousness of self, or treats one of the three, usually consciousness of one's own representations, as the whole of consciousness. (A conscious state that has no object, yet is not consciousness of a representation either, would fit none of the three; we cannot pursue that complication here.) The term 'conscious' and its cognates are ambiguous in everyday English. We speak of someone regaining consciousness, where we mean simple consciousness of the world. Yet we also say things like, She was only dimly conscious of what motivated her to say that, where we do not mean that she lacked either consciousness of the world or consciousness of self but rather that she was not conscious of certain things about herself, specifically, certain of her own representational states. To understand the unity of consciousness, making these distinctions is important. The reason is this: the unity of consciousness takes a different form in consciousness of self than it takes in either consciousness of one's own representations or consciousness of objects.
So what is unified consciousness? As we said, the predominant form of the unity of consciousness is being aware of several things at the same time. Intuitively, this is the notion of several representations being aspects of a single encompassing conscious state. A more informative idea can be gleaned from the way philosophers have written about unified consciousness. Judging from what they have said, the central feature of unified consciousness is taken to be something like this: a group of representations being related to one another such that to be conscious of any of them is to be conscious of others of them, and of the group of them as a single group.
Call this notion (x). Now, unified consciousness of some sort can be found in all three of the kinds of consciousness we delineated. (It can be found in a fourth, too, as we will see in a moment.) We can have unified consciousness of: objects represented to us; the representations themselves; and our own self. In the first case, the represented objects would appear as aspects of a single encompassing conscious state. In the second case, the representations themselves would thus appear. In the third case, one is aware of oneself as a single, unified subject. Does (x) fit all three (or all four, including the fourth yet to be introduced)? It does not. At most, it fits the first two. Let us see how this unfolds.
Unified consciousness of the world is the consciousness that one has of the world around one (including one's own body) as aspects of a single world, of the various items in it as linked to other items in it. What makes it unified can be illustrated by an example. Suppose that I am aware of the computer screen in front of me and of the car sitting in my driveway. If awareness of these two items is not unified, I will lack the ability to compare the two. If I cannot bring the car as I am aware of it into the state in which I am aware of the computer screen, I could not answer questions such as, Is the car the same colour as the WordPerfect icon? Or even, As I am experiencing them, is the car to the left or to the right of the computer screen? We can compare represented items in these ways only if we are aware of both items together, as parts of the same field or state or act of consciousness. That is what unified consciousness does for us. (x) fits this kind of unified consciousness well. There are a couple of disorders of consciousness in which this unity seems to break down or be missing. We will examine them shortly.
Unified consciousness of one's own representations is the consciousness that we have of our representations, consciousness of our own psychological states. The representations by which we are conscious of the world are particularly important but, if those theorists who maintain that there are forms of consciousness that do not have objects are right, they are not the only ones. What makes consciousness of our representations unified? We are aware of many representations together, so that they appear as aspects of a single state of consciousness. As with unified consciousness of the world, here we can compare items of which we have unified consciousness. For example, we can compare what it is like to see an object to what it is like to touch the same object. Thus, (x) fits this kind of unified consciousness well, too.
When one has unified consciousness of self, one is aware of oneself, not just as a subject but, in Kant's words, as the "single common subject" of many representations and the single common agent of various acts of deliberation and action.
This is one of the two forms of unified consciousness that (x) does not fit. When one is aware of oneself as the common subject of experiences, the common agent of actions, one is not aware of several objects. Some think that when one is aware of oneself as subject, one is not aware of oneself as an object at all. Kant believed this. Whatever the merits of this view, when one is aware of oneself as the single common subject of many representations, one is clearly not aware of several things. Rather, one is aware of, and knows that one is aware of, the same thing via many representations. Call this kind of unified consciousness (y). Although (y) is different from (x), we still have the core idea: unified consciousness consists in tying what is contained in several representations, here many representations of oneself, together so that they are all part of a single field or state or act of consciousness.
Unified consciousness of self has been argued to have some very special properties. In particular, there is a small but important literature on the idea that the reference to oneself as oneself by which one achieves awareness of oneself as subject involves no 'identification.' Generalizing the notion a bit, some claim that reference to self does not proceed by way of attribution of properties or features to oneself at all. One argument for this view is that one is, or could be, aware of oneself as the subject of each of one's conscious experiences. If so, awareness of self is not what Bennett calls 'experience-dividing': statements expressing it have "no direct implications of the form 'I will experience C rather than D.'" If this is so, the linguistic acts using first-person pronouns by which we refer to ourselves as subject, and the representational states that result, have to have some unusual properties.
Finally, we need to distinguish a fourth kind of unified consciousness. Let us call it unity of focus. Unity of focus is our ability to pay unified attention to objects, one's representations, and one's own self. It is different from the other sorts of unified consciousness. In the other three situations, consciousness ranges over many different objects, many representations, or (in unified consciousness of self) many instances of consciousness of oneself. Unity of focus picks out one such item (or a small number of them). Wundt captures what I have in mind well in his distinction between the field of consciousness and the focus of consciousness. The consciousness of a single item on which one is focussing is unified because one is aware of many aspects of the item in one state or act of consciousness (especially relational aspects, e.g., any dangers it poses, how it relates to one's goals, etc.) and one is aware of many different considerations with respect to it in one state or act of consciousness (goals, how well one is achieving them with respect to this object, etc.). (x) does not fit this kind of unified consciousness any better than it fits unified consciousness of self. Here we are not, or need not be, aware of many items. Instead, one is integrating many properties of an item, especially properties that involve relationships to oneself, integrating many of one's abilities and applying them to the item, and so on. Call this form of unified consciousness (z). One way to think of the affinity of (z) (unified focus) to (x) and (y) is this: (z) occurs within (x) and (y), within unified consciousness of world and self.
Though this has often been overlooked, all forms of unified consciousness come in both simultaneous and across-time versions. That is to say, the unity can consist in links of certain kinds among phenomena occurring at the same time (synchronically) and it can consist in links of certain kinds among phenomena occurring at different times (diachronically). In its synchronic form, it consists in such things as our ability to compare items with one another, for example, to see if one item fits into another. Diachronically, it consists in a certain crucial form of memory, namely, our ability to retain a representation of an earlier object in the right way and for long enough to bring it, as recalled, into current consciousness of currently represented objects in the same way as we do with simultaneously represented objects. Though this process across time has often been called the unity of consciousness, sometimes even to the exclusion of the synchronic unity just delineated, another good name for it would be continuity of consciousness. Note that this process of relating earlier to current items in consciousness is more than, and perhaps different from, the learning of new skills and associations. Even severe amnesiacs can do the latter.
That consciousness can be unified both across time and at a given time is a measure of how central the unity of consciousness is to cognition. Without the ability to retain representations of earlier objects and unite them with currently represented objects, most complex cognition would simply be impossible. The only bits of language that one could understand, for example, would be single words; even the simplest of sentences is an entity spread over time. Now, unification in consciousness might not be the only way to unite earlier cognitive states (earlier thoughts, earlier experiences) with current ones, but it is a central way and the one best known to us. The unity of consciousness is central to cognition.
Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed.
We communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.
We should note that (x), (y) and (z) are not the only kinds of mental unity. Our remarks about (z), specifically about what can be integrated in focal attention, might already have suggested as much. There is unity in the exercise of our cognitive capacities, unity that consists in the integration of motivating factors, perceptions, beliefs, etc., and there is unity in the outputs, unity that consists in the integration of behaviour.
Human beings bring a strikingly wide range of factors to bear on a cognitive task such as seeking to characterize something or trying to decide what to do about something. For example, we can bring to bear what we want, what we believe, and our attitudes; our sense of self, situation, and context; input from each of our various senses; information about the situation, other people, others' beliefs, desires, attitudes, etc.; the resources of however many languages we have available to us; the many kinds of memory; bodily sensations; our various and very diverse problem-solving skills; . . . and so on. Not only can we bring all these elements to bear, we can integrate them in a way that is highly structured and ingeniously appropriate to our goals and the situation(s) before us. This form of mental unity could appropriately be called unity of cognition. Unity of consciousness often goes with unity of cognition because one of our means of unifying cognition with respect to some object or situation is to focus on it consciously. However, there is at least some measure of unified cognition in many situations of which we are not conscious, as is testified by our ability to balance, control our posture, and manoeuver around obstacles while our
consciousness is entirely absorbed with something else, and so on.
At the other end of the cognitive process, we find an equally interesting form of unity, what we might call unity of behaviour: our ability to integrate our limbs, eyes, bodily attitude, etc., into smooth, coordinated action. The precision and complexity of the behavioural coordination we can achieve would be difficult to exaggerate. Think of a concert pianist performing a complicated work.
One of the most interesting ways to study psychological phenomena is to see what happens when they or related phenomena break down. Phenomena that look simple and seamless when functioning smoothly often turn out to have all sorts of structure when they begin to malfunction. Like other psychological phenomena, we would expect unified consciousness to be open to being damaged, distorted, etc. If the unity of consciousness is as important to cognitive functioning as we have been suggesting, such damage or distortion should create serious problems for the people to whom it happens. The unity of consciousness is damaged and distorted in both naturally-occurring and experimental situations. Some of these situations are indeed very serious for those undergoing them.
In fact, unified consciousness can break down in what look to be two distinct ways. There are situations in which it is natural to say that one unified conscious being has split into two unified conscious beings, without the unity itself being destroyed or even significantly damaged; and there are situations in which we still have one being with one instance of consciousness, but the unity itself has been damaged or even destroyed. In the former cases, there is reason to think that a single instance of unified consciousness has become two (or something like two). In the latter cases, unity of consciousness has been compromised in some way, but nothing suggests that anything has split.
Consciousness is possibly the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness may be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is yours like mine? Is ours like that of animals? Might machines come to have consciousness? Is it possible that there might be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the "I," or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to examine its working parts as thoroughly as we pleased, we would still not find consciousness; and he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.
In conceding that a given thought has a natural linguistic expression, we are also saying something about how it is appropriate to characterize the contents of that thought; we are saying something about what is being thought. This content is given by the sentence that follows the “that” clause in reporting a thought, a belief, or any other propositional attitude. The proposal, then, is that “I”-thoughts are all and only the thoughts whose propositional contents constitutively involve the first-person pronoun. This is still not quite right, however, because thought contents can be specified in two ways: directly or indirectly.
Examining the functionalist account of self-reference as a possible strategy shows that, although it is not ultimately successful, attention to it reveals the correct approach to solving the paradox of self-consciousness. A successful response to the paradox of self-consciousness must reject the classical view of contents. The thought that, despite all this, there are first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.
The best-developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-89). The basic phenomenon it sets out to explain is what it is for a creature to have what may be termed a subjective belief, that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective beliefs that it offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account, Mellor constructs an account of conscious subjective beliefs and then of the reference of the first-person pronoun “I.” These putatively more sophisticated cognitive states are causally derived from basic subjective beliefs.
Another phenomenon where we may find something like a split without diminished or destroyed unity is hemi-neglect, the strange phenomenon of losing all sense of one side of one's body, or sometimes of a part of one side. Whatever exactly is going on in hemi-neglect, unified consciousness remains. It is just that its ‘range’ has been bizarrely circumscribed. It ranges over only half the body (in the most common situation), not seamlessly over the whole body. Where we expect proprioception and perception of the whole body, in these patients they extend over (usually) only one half of the body.
A third candidate phenomenon is what used to be called Multiple Personality Disorder and is now, more neutrally, called Dissociative Identity Disorder (DID). Everything about this phenomenon is controversial, including whether there is any real multiplicity of consciousness at all, but one common way of describing what is going on in at least some central cases is to say that the units, whether we call them persons, personalities, sides of a single personality, or whatever, ‘take turns’, usually with pronounced changes in personality. When one is active, the other(s) usually is (are) not. If this is an accurate description, then here too we have a breach in unity of some kind in which unity is nevertheless not destroyed. Notice that whereas in brain bisection cases the breach, whatever it is like, is synchronic (at a time), here it is diachronic (across time): different unified ‘packages’ of consciousness take turns. The breach consists primarily in some pattern of reciprocal (or sometimes one-way) amnesia, some pattern of each ‘package’ not remembering having the experiences or doing the things had or done when another ‘package’ was in charge.
By contrast to brain bisection and DID cases, there are phenomena in which unified consciousness does not seem to split but does seem to be damaged or even destroyed altogether. In brain bisection and dissociative identity cases, the most that is happening is that unified consciousness is splitting into two or more comparatively intact units, two or more at a time or two or more across time. It is a matter of controversy whether even that is happening, especially in DID cases, but we clearly do not have more than that. In particular, the unity itself does not disappear; although it may split, it does not, we could say, shatter. There are at least three kinds of case in which unity does appear to shatter.
One is some particularly severe forms of schizophrenia. Here the victim seems to lose the ability to form an integrated, interrelated representation of his or her world and his or her self together. The person speaks in ‘word salads’ that never get anywhere, indeed sometimes never become complete sentences. The person is unable to put together integrated plans of action even at the level necessary to obtain sustenance, tend to bodily needs, or escape painful irritants. And so on. Here, saying that unity of consciousness has shattered, rather than split, seems correct. The behaviour of these people seems to express no more than what we might call experience-fragments, each lasting a tiny length of time and unconnected to any others. In particular, except for the (usually semantically irrelevant) associations that lead these people from each entry to the next in the word salads they create, to be aware of one of these states is not to be aware of any others-or so the evidence suggests.
In schizophrenia of this sort, the shattering of unified consciousness is part of a general breakdown or deformation of mental functioning: perception, desire, belief, even memory all suffer massive distortions. In another kind of case, the normal unity of consciousness seems to be just as absent but there does not seem to be a general disturbance of the mind. This is what some researchers call dysexecutive syndrome. What characterizes the breakdown in the unity of consciousness here is that subjects are unable to consider two things together, even things that are directly related to one another. For example, such people cannot figure out whether a piece of a puzzle fits into a certain place even when the piece and the puzzle are both clearly visible and the piece obviously fits. They cannot crack an egg into a pan. And so on.
A disorder presenting similar symptoms is simultagnosia or Balint's syndrome (Balint was an early twentieth-century Hungarian neurologist). In this disorder, which is fortunately rare, patients see only one object located at one ‘place’ in the visual field at a time. Outside of a few degrees of arc in the visual field, these patients say they see nothing and seem to be receiving no information (Hardcastle, in progress). In both dysexecutive disorder and simultagnosia (if we have two different phenomena here), subjects seem not to be aware of even two items in a single conscious state.
We can pin down what is missing in each case a bit more precisely. Recall the distinction, introduced at the beginning of this article, between being conscious of individual objects and having unified consciousness of a number of objects at the same time. Broadly speaking, we can think of the two phenomena isolated by this distinction as two stages. First, the mind ties together various sensory information into representations of objects. In contemporary cognitive research, this activity has come to be called binding (Hardcastle 1998 is a good review). Then, the mind ties these represented objects together to achieve unified consciousness of a number of them at the same time. (The first theorist to separate these two stages was Kant, in his doctrine of synthesis.) The first stage continues to be available to dysexecutive and simultagnosia patients: They continue to be aware of individual objects, events, etc. The damage seems to be to the second stage: it is the tying of objects together in consciousness that is impaired or missing altogether. The distinction can be made this way: these people can achieve (z), unity of focus, with respect to individual objects, but little or no unified consciousness of any of the three kinds over a number of objects.
The same distinction can also help make clear what is going on in the severe forms of schizophrenia just discussed. Like dysexecutive syndrome and simultagnosia patients, severe schizophrenics lack the ability to tie represented objects together, but they also seem to lack the ability to form unified representations of individual objects. In a different jargon, these people seem to lack even the capacity for object constancy. Thus their cognitive impairment is much more severe than that experienced by dysexecutive syndrome and simultagnosia patients.
With the exception of brain bisection patients, who do not evidence distortion of consciousness outside of specially contrived laboratory situations, the split or breach occurs naturally in all the patients just discussed. Indeed, they are a central class of the so-called ‘experiments of nature’ that are the subject-matter of contemporary neuropsychology. Since all the patients in whom these problems occur naturally are severely disadvantaged by their situation, this is further evidence that the ability to unify the contents of consciousness is central to proper cognitive functioning.
Is there anything common to the six situations of breakdowns in unified consciousness just sketched? How do they relate to (x), (y) or (z)?
In brain bisection cases, the key evidence for a duality of some kind is that there are situations in which whatever is aware of some items being represented in the body in question is not aware of other items being represented in that same body at the same time. We looked at two examples of the phenomenon, in connection with the word TAXABLE and the doing of arithmetic. With respect to these represented items, there is a significant and systematically extendable situation in which to be aware of some of these items is not to be aware of others of them. This seems to be what motivates the judgment in us that these patients evidence a split in unified consciousness. If so, brain bisection cases are a straightforward case of a failure to meet the conditions for (x). However, they are more than that. Because the ‘centres of consciousness’ created in the lab do not communicate with one another except in the way that any mind can communicate with any other mind, there is also a breakdown in (y). One subject of experience aware of itself as the single common subject of its experience seems to become two (in some measure at least).
In DID cases, a central feature of the case is some pattern of amnesia. Again, this is a situation in which being conscious of some represented objects goes with not being conscious of others in a systematic way. The main difference is that the breach is at a time in brain bisection cases, across time in DID cases. So again the breakdown in unity consists in a failure to meet the conditions for (x). However, because DID cases are diachronic, there is also a breakdown in (y) across time: though there is continuity across time within each personality, there seems to be little or no continuity, conscious continuity at any rate, from one to another.
The same pattern is evident in the cases of severe schizophrenia, dysexecutive disorder and simultagnosia that we considered. In all three cases, consciousness of some items goes with lack of consciousness of others. In these cases, to be aware of a given item is precisely not to be aware of other relevant items. However, in the severe schizophrenia cases we considered, there is also a failure to meet the conditions of (z).
Hemi-neglect is a bit different. Here we have neither two or more ‘packages’ of consciousness nor individual conscious states that are not unified with other conscious states. Not, at any rate, so far as we know: for there to be conscious states not unified with the states on which the patient can report, there would have to be consciousness of what is going on in the neglected side in addition to that of the subject with whom we can communicate, and there is no evidence for this. Here none of the conditions for (x), (y) or (z) fail to be met-but that may be because hemi-neglect is not a split or a breakdown in unified consciousness in the first place. It may be simply a shrinking of the range of phenomena over which otherwise intact unified consciousness ranges.
It is interesting that none of the breakdown cases we have considered evidence damage to or destruction of the unity in (y). We have seen cases in which unified consciousness might split at a time (brain bisection cases) or over time (DID cases) but not cases in which the unity itself is significantly damaged or destroyed. Nor is our sample unrepresentative; the cases we have considered are the most widely discussed cases in the literature. There do not seem to be many cases in which it is plausible to say that (y), awareness of oneself as a single common subject, has been damaged or destroyed.
After a long hiatus, serious work on the unity of consciousness began in recent philosophy with two books on Kant, by P. F. Strawson (1966) and Jonathan Bennett (1966). Both had an influence far beyond the bounds of Kant scholarship. Central to these works is an exploration of the relationship between unified consciousness, especially unified consciousness of self, and our ability to form an integrated, coherent representation of the world, a linkage that the authors took to be central to Kant's transcendental deduction of the categories. Whatever the merits of this claim as a response to the sceptic, their work set off a long line of writings on the supposed link. Quite recently the approach prompted a debate about unity and objectivity among Michael Lockwood, Susan Hurley and Anthony Marcel in Peacocke (1994).
Another issue that led philosophers back to the unity of consciousness was the neuropsychological results of brain bisection operations, which we explored earlier. Starting with Thomas Nagel (1971) and continuing in the work of Charles Marks (1981), Derek Parfit (1971 and 1984), Lockwood (1989), Hurley (1998) and many others, these operations have been a major theme in work on the unity of consciousness since the 1970s. Much ink has been spilled on the question of what exactly is going on in the phenomenology of brain bisection patients. Nagel goes so far as to claim that there is no whole number of ‘centres of consciousness’ in these patients: There is too much unity to say "two,” yet too much splitting to say "one.”
Some recent work by Jocelyne Sergent (1990) might seem to support this conclusion. She found, for example, that when a sign ‘6’ was sent to one hemisphere of the brain in these subjects and a sign ‘7’ was sent to the other, in such a way that a crossover of information from one hemisphere to the other was extremely unlikely, they could say that the six is a smaller number than the seven but could not say whether the signs were the same or different. It is not certain that Sergent's work does support Nagel's conclusions. First, Sergent's claims are controversial: not all researchers have been able to replicate them. Second, even if the data are good, the interpretation of them is far from straightforward. In particular, they seem to be consistent with there being a clear answer to any precise ‘one or two?’ question that we could ask. (‘Unified consciousness of the two signs with respect to numerical size?’ Yes. ‘Unified consciousness of the visible structure of the signs?’ No.) If so, the fact that the evidence is mixed, some of it pointing to the conclusion ‘one’, some pointing to the conclusion ‘two’, does not by itself support Nagel's view that there may be no whole number of subjects that these patients are.
Much of the work since Nagel has focussed on the same issue of the kind of split that the laboratory manipulation of brain bisection patients induces. Some attention has also been paid to the implications of these splits. For example, could one hemisphere commit a crime in such a way that the other could not justifiably be held responsible for it? Or, if such splitting occurred regularly and was regularly followed by merging with ‘halves’ from other splits, what would the implications be for our traditional notion of what philosophers call ‘personal identity’, namely, being or remaining one and the same thing? (Here we are talking about identity in the philosopher's sense of being or remaining one thing, not in the sense of the term that psychologists use when they talk of such things as ‘identity crises’.)
Parfit has made perhaps the largest contribution to the issue of the implications of brain bisection cases for personal identity. Phenomena relevant to identity in things other than persons can be a matter of degree. This is well illustrated by the famous ship of Theseus example. Suppose that over the years, the ship of Theseus was rebuilt, board by board, until every single board in it had been replaced. Is the ship at the end of the process the ship that started the process or not? Now suppose that we take all those rotten, replaced boards and reassemble them into a ship. Is this ship the original ship of Theseus or not? Many philosophers have been certain that such questions cannot arise for persons; identity in persons is completely clear and unambiguous, not something that could be a matter of degree, as the related phenomena obviously can be with other objects. As Parfit argues, the possibility of persons (or at any rate minds) splitting and fusing puts real pressure on such intuitions about our specialness; perhaps the continuity of persons can be as partial and tangled as the continuity of other middle-sized objects.
Lockwood's exploration of brain bisection cases goes off in a different direction, two different directions in fact (we will examine the second below). Like Nagel, Marks, and Parfit, Lockwood has written on the extent to which what he calls ‘co-consciousness’ can split. (‘Co-consciousness’ is the term that many philosophers now use for the unity of consciousness; roughly, two conscious states are said to be co-conscious when they are related to one another as conscious states are related to one another in unified consciousness.) He also explores the possibility of psychological states that are not determinately in any of the available ‘centres of consciousness’ and the implications of this possibility for the idea of the specious present, the idea that we are directly and immediately aware of a certain tiny spread of time, not just the current infinitesimal moment of time. He concludes that the determinateness of psychological states being in an available ‘centre of consciousness’ and the notion that psychological states spread over at least a small amount of time in the specious present might stand or fall together.
Some philosophers' interest in pathologies of unified consciousness extends beyond brain bisection cases. In what is perhaps the most complex work on the unity of consciousness to date, Hurley examines most of the kinds of breakdown phenomena that we introduced earlier. She starts with an intuitive notion of co-consciousness that she does not formally define. She then explores the implications of a wide range of ‘experiments of nature’ and laboratory experiments for the presence or absence of co-consciousness across the psychological states of a person. For example, she considers acallosal patients (people born without a corpus callosum). When present, the corpus callosum is the chief channel of communication between the hemispheres. When it is cut, what look like two centres of consciousness can be generated, two internally co-conscious systems that are not co-conscious with one another. Hurley argues that in patients in whom it never existed, things are not so clear. Even though the channels of communication in these patients are often in part external (behavioural cuing activity, etc.), the result may still be a single co-conscious system. That is to say, the neurological and behavioural basis of unified consciousness may be very different in different people.
Hurley also considers research by Trevarthen in which a patient is conscious of some object seen by, say, the right hemisphere until her left hand, which is controlled by the right hemisphere, reaches for it. Somehow the act of reaching for it seems to obliterate the consciousness of it. Very strange-how can something pop into and disappear from unified consciousness in this way? This leads her to consider the notion of partial unity. Could two centres of consciousness, A and C, though not co-conscious with one another, nonetheless each be co-conscious with some third thing B, e.g., the volitional system (the system of intentions, desires, etc.)? If so, co-consciousness is not a transitive relation: A could be co-conscious with B and C could be co-conscious with B without A being co-conscious with C. This is puzzling enough. Even more puzzling would be the question of how activation of the system B, with which both A and C are co-conscious, could result in either A or C ceasing to be conscious of an object aimed at by B.
Hurley's response to Trevarthen's cases (and Sergent's cases that we examined in the previous section) is to accept that intention can obliterate consciousness and then to distinguish times. At any given time in Trevarthen's cases, the situation with respect to unity is clear. That the picture does not conform to our usual expectations for diachronic singularity or transitivity then becomes simply an artefact of the cases, not a problem. It is not made clear how this reconciles Sergent's evidence with unity. One strategy would be the one we considered earlier, of making the questions suitably precise. For precise questions, there seems to be a coherent answer about unity for every phenomenon Sergent describes.
Hurley also considers what she calls Marcel's case. Here subjects are asked to report the appearance of some item in consciousness in three ways at the same time-say, by blinking, pushing a button, and saying ‘I see it’. Remarkably, any of these acts can be done without the other two. The question is: what does this tell us about unified consciousness? In a case in which the subject pushes the button but neither blinks nor says anything, for example, is the hand-controller aware of the object while the blink-controller and the speech-controller are not? How could the conscious system become fragmented in such a way?
Hurley's answer is that it cannot. What induces the appearance of incoherence about unity is the short time scale. Suppose that it takes some time to achieve unified consciousness, perhaps because complex processes are involved. If that were the case, then we do not have a stable unity situation in Marcel's case: the subjects are not given enough time to achieve unified consciousness of any kind.
There is a great deal more to Hurley's work. She urges, for example, that there is a normative dimension to unified consciousness: conscious states have to cohere for unified consciousness to result. Systems in the brain have to achieve what she calls ‘dynamic singularity’-being a single system-for unified consciousness to result.
A third issue that got philosophers working on the unity of consciousness again is binding. Here the connection is more distant, because binding as usually understood is not unified consciousness as we have been discussing it. Recall the two stages of cognition laid out earlier. First, the mind ties together various sensory information into representations of objects. Then the mind ties these represented objects to one another to achieve unified consciousness of a number of them at the same time. It is the first stage that is usually called binding. The representations that result at this stage need not be conscious in any of the ways delineated earlier-many perfectly good representations affect behaviour and even enter memory without ever becoming conscious. Representations resulting from the second stage need not be conscious, either, but when they are, we have at least some of the kinds of unified consciousness delineated.
In the past few decades, philosophers have also worked on how unified consciousness relates to the brain. Lockwood, for example, thinks that relating consciousness to matter will involve more issues on the side of matter than most philosophers think. (We mentioned that his work goes off in two new directions. This is the second one.) Quantum mechanics teaches us that the way in which observation links to physical reality is a subtle and complex matter. Lockwood urges that our conceptions will have to be adjusted on the side of matter as much as on the side of mind if we are to understand consciousness as a physical phenomenon and physical phenomena as open to conscious observation. If it is the case not only that our understanding of consciousness is affected by how we think it might be implemented in matter but also that processes in matter are affected by our (conscious) observation of them, then our picture of consciousness stands as ready to affect our picture of matter as vice versa.
The Churchlands (Paul M. and Patricia S.) and Daniel Dennett (1991) have radical views of the underlying architecture of unified consciousness. The Churchlands see unity itself much as other philosophers do. They do argue that the term ‘consciousness’ covers a range of different phenomena that need to be distinguished from one another, but the important point for present purposes is that they urge that the architecture of the underlying processes probably consists not of transformations of symbolically encoded representations, as most philosophers have believed, but of vector transformations in what are called phase spaces. Dennett articulates an even more radical view, encompassing both unity and underlying architecture. For him, unified consciousness is simply a temporary ‘virtual captain’, a small group of related information-parcels that happens to gain temporary dominance in a struggle for control of such cognitive activities as self-monitoring and self-reporting in the vast array of microcircuits of the brain. We take these transient phenomena to be more than they are because each of them is, for the moment, ‘me’: the temporary coalition of conscious states winning at the moment is what I am, is the self. Radical implementation, narrowed range and transitoriness notwithstanding, when unified consciousness is achieved, these philosophers tend to see it much as we have presented it.
Dennett's and the Churchlands' views fit naturally with a dynamic systems view of the underlying neural implementation, the view that unified consciousness is a result of certain self-organizing activities in the brain. Dennett thinks that, given the nature of the brain-a vast assembly of neurons receiving electrochemical signals from other neurons and passing such signals to yet other neurons-cognition could not take any form other than something like a pandemonium of competing bits of content, the ones that win the competitions being the ones that are conscious. The Churchlands agree with Dennett to some extent. However, they see consciousness as a state of the brain, the ‘wet-ware’, not a result of information processing, of ‘software’, and they advocate a different picture of the underlying neurological process. As we said, they think that transformations of complex vectors in a multi-dimensional phase space are the crucial processes, not competition among bits of content. Nevertheless, they agree that it is very unlikely that the processes that subserve unified consciousness are sentence-like or language-like at all. It is too early to say whether these radically novel pictures of the system that implements unified consciousness will hold any important implications for what unified consciousness is or when it is present.
Hurley is also interested in the relationship of unified consciousness to brain physiology. It would be truer to say, however, that she resists certain standard ways of linking them than that she herself links them. In particular, while she clearly thinks that physiological phenomena have all sorts of implications and give rise to all sorts of questions about the unity of consciousness, she strongly resists any simplistic patterns of connection. Many researchers have been attracted by some variant of what she calls the isomorphism hypothesis. This is the idea that changes in consciousness will parallel changes in brain structure or function. She wants to insist, to the contrary, that often two instances of the same change in consciousness will go with very different changes in the brain. We saw an example in the last section. In most of us, unified consciousness is closely linked to an intact, functioning corpus callosum. However, in acallosal people, there may be the same unity but achieved by mechanisms, such as cuing activity external to the body, that are utterly different from communication through a corpus callosum. Going the opposite way, different changes in consciousness can go with the same changes to structure and function in the brain.
Two philosophers have gone off in directions different from any of the above: Stephen White (1991) and Christopher Hill (1991). White's main interest is not the unity of consciousness as such but what one might call the unified locus of responsibility-what it is that ties something together to make it a single agent of actions, i.e., something to which attributions of responsibility can appropriately be made. He argues that unity of consciousness is one of the things that go into becoming unified as such an agent, but not the only thing. Others include focussed, coherent plans; a continuing single conception of the good; a reasonably good autobiographical memory; certain future states of persons mattering to us in a special way (mattering to us because we take them to be future states of ourselves, one would say if it were not blatantly circular); a certain continuing kind and degree of rationality; certain social norms and practices; and so forth. In his picture of moral responsibility, unbroken unity of consciousness at and over time is only a small part of the story.
Hill's fundamental claim is that a number of different relationships between psychological states have a claim to be considered unity relationships, including: being owned by the same subject; being [phenomenally] next to one another (and other relationships that states in the field of consciousness appear to have to one another); being objects of a single conscious state together; and jointly having the appropriate sorts of effects (functional relationships). An interesting question, one that Hill does not consider, is whether all these relations are what interests us when we talk about the unity of consciousness or only some of them (and if only some of them, which ones). Hill also examines scepticism about the idea that clearly bounded individual conscious states exist. Since we have been assuming throughout that such states do exist, it is perhaps fortunate that Hill argues that we can safely do so.
In some circles, the idea that consciousness has a special kind of unity has fallen into disfavour. Nagel (1971), Donald Davidson (1982), and Dennett (1991) have all urged that the mind's unity has been greatly overstated in the history of philosophy. The mind, they say, works mostly out of the sight and control of consciousness. Moreover, even states and acts of ours that are conscious can fail to cohere. We act against what we know perfectly well to be our own most desired courses of action, for example, or do things while telling ourselves that we must avoid doing them. There is an approach to the small incoherencies of everyday life that does not require us to question whether consciousness is unified in this way: the Freudian approach (e.g., Freud 1916/17). This approach accepts that the unity of consciousness exists much as it presents itself but argues that the range of material over which it extends is much smaller than philosophers once thought. The approach has some appeal. If something is out of sight and/or control, it is out of the sight or control of what? The answer would seem to be: the unified conscious mind. If so, the only necessary difference between the pre-twentieth-century vision of unified consciousness as ranging over everything in the mind and our current vision is that the range of psychological phenomena over which unified consciousness ranges has shrunk.
A final historical note. At the beginning of the 21st century, work on the unity of consciousness continues apace. For example, a major conference was recently devoted to the unity of consciousness, the Association for the Scientific Study of Consciousness conference held in Brussels in 2000, and encyclopaedias of philosophy (such as this one) and of cognitive science are commissioning articles on the topic. Psychologists are taking up the issue. Bernard Baars' (1988, 1997) notion of the global workspace is an example. Another example is work on the role of unified consciousness in the precise control of attention. However, the topic is not yet at the centre of consciousness studies. One illustration of this is that it can still be missing entirely from anthologies of current work on consciousness.
Turning to a different issue: philosophers used to think that the unity of consciousness has huge implications for the nature of the mind, indeed that it entails that the mind could not be made out of matter. We saw that the prospects for this inference are not good. What about the nature of consciousness? Does the unity of consciousness have any implications for this issue?
There are currently at least three major camps on the nature of consciousness. One camp sees the ‘felt quality’ of representations as something unique, in particular as quite different from the power of representations to change other representations and shape belief and action. On this picture, representations could function much as they do without it being like anything to have them; they would merely not be conscious. If so, consciousness may not play any important cognitive role at all, its unity included (Jackson 1986; Chalmers 1996). A second camp holds, to the contrary, that consciousness is simply a special kind of representation (Rosenthal 1991, Dretske 1995, and Tye 1995). A third holds that what we label ‘consciousness’ is really something else. On this view, consciousness will in the end be ‘analysed away’-the term is too coarse-grained and presents things in too unquantifiable a way to have any use in a mature science of the mind.
The unity of consciousness obviously has strong implications for the truth or falsity of any of these views. If it is as central and undeniable as many have suggested, its existence may cut against the eliminativist position. With respect to the other positions, the unity of consciousness seems neutral.
Whatever its implications for other issues, the unity of consciousness seems to be a real feature of the human mind, indeed central to it. If so, any complete picture of the mind will have to provide an account of it. Even those who hold that the extent to which consciousness is unified has been overrated owe us an account of what has been overrated.
To say one has an experience that is conscious (in the phenomenal sense) is to say that one is in a state of its seeming to one some way. In another formulation, to say an experience is conscious is to say that there is something it is like for one to have it. Feeling pain and sensing colours are common illustrations of phenomenally conscious states. Consciousness has also been taken to consist in the monitoring of one's own states of mind (e.g., by forming thoughts about them, or by somehow "sensing" them), or else in the accessibility of information to one's capacities for rational control or self-report. Intentionality has to do with the directedness or aboutness of mental states-the fact that, for example, one's thinking is of or about something. Intentionality includes, and is sometimes taken to be equivalent to, what is called “mental representation.”
It can seem that consciousness and intentionality pervade mental life-perhaps one or both somehow constitute what it is to have a mind. But achieving an articulate general understanding of either consciousness or intentionality presents an enormous challenge, part of which lies in figuring out how the two are related. Is one in some sense derived from or dependent on the other? Or are they perhaps quite independent and separate aspects of mind?
One frequent understanding among philosophers is that consciousness is a certain feature shared by sense-experience and imagery, perhaps belonging also to a broad range of other mental phenomena (e.g., episodic thought, memory, and emotion). It is the feature that consists in its seeming some way to one to have such experiences. To put it another way: conscious states are states of its seeming somehow to a subject.
For example, it seems to you some way to see red, and it seems to you another way to hear a crash, to visualize a triangle, or to suffer pain. The sense of ‘seems’ relevant here may be brought out by noting that, in the last example, we might just as well speak of the way it feels to be in pain. And-some may say-in the same sense, it seems to you some way to think through the answer to a math problem, or to recall where you parked the car, or to feel anger, shame, or elation. (Note, however, that it is not simply to be assumed that saying it seems some way to you to have an experience is equivalent to saying that the experience itself seems or appears some way to you-that is, is an object of appearance. The point is just that the way something sounds to you, the way something looks to you, etc., all constitute ‘ways of seeming.’) States that are conscious in this sense are said to have some phenomenal character or other-their phenomenal character being the specific way it seems to one to have a given experience. Sometimes this is called the ‘qualitative’ or ‘subjective’ character of experience.
Another oft-used means for trying to get at the relevant notion of consciousness, preferable to some, is to say that there is, in a certain sense, always ‘something it is like’ to be in a given conscious state-something it is like for one who is in that state. Relating the two locutions, we might say: there is something it is like for you to see red, to feel pain, etc., and the way it seems to you to have one of these experiences is what it is like for you to have it. The phenomenal character of an experience, then, is what someone would inquire about by asking, e.g., ‘What is it like to experience orgasm?’-and it is what we speak of when we say that we know what that is like, even if we cannot convey this to one who does not know. And, if we want to speak of persons or other creatures (as distinct from their states) being conscious, we will say that they are conscious just if there is something it is like for them to be the creature they are-for example, something it is like to be a nocturnal creature such as a bat.
The examples of conscious states given comprise a varied lot. But some sense of their putative unity as instances of consciousness might be gained by contrasting them with what we are inclined to exclude, or can at least conceive of excluding, from their company. Much of what goes on, we would ordinarily believe, is not (or at any rate, we may suppose is not) conscious in the sense at issue. A leaf's fall from a tree branch, we may suppose, is not a conscious state of the leaf-a state of its seeming somehow to the leaf. Nor, for that matter, is a person's falling off a branch itself a conscious state-it is rather the feeling of falling that is the sort of thing that is conscious, if anything is. Dreaming of falling would also be a conscious experience in this sense. But, while we can in some way be said to sense the position of our limbs even while dreamlessly asleep, we may still suppose that this proprioception (though perhaps in some sense a mental or cognitive affair) is not conscious-we may suppose that it does not then seem (or feel) any way to us sleepers to sense our limbs, as ordinarily it does when we are awake.
The ‘way of seeming’ or ‘what it is like’ conception of consciousness I have just invoked is sometimes marked by the term ‘phenomenal consciousness.’ But this qualifier ‘phenomenal’ suggests that there are other kinds of consciousness (or perhaps, other senses of ‘consciousness’). Indeed there are, at least, other ways of introducing notions of consciousness. And these may appear to pick out features or senses altogether distinct from that just presented. For example, it is said that some (but not all) of what goes on in the mind is ‘accessible to consciousness.’ Of course this by itself does not so much specify a sense of ‘conscious’ as put one to use. (One will want to ask: just what is this ‘consciousness’ that has ‘access’ to some mental goings-on but not others, and what would ‘access’ of this sort amount to anyway?) However, some have evidently thought that, rather than speak of consciousness as what has access, we should understand consciousness as itself a certain kind of susceptibility to access. For example, Daniel Dennett (1969) once theorized that one's conscious states are just those whose contents are available to one's direct verbal report-or, at least, to the ‘speech centre’ responsible for generating such reports. And Ned Block (1995) has proposed that, on one understanding of ‘conscious’ (to be found at work in many ‘cognitive’ theories of consciousness), a conscious state is just a representation ‘poised for free use in reasoning and other direct “rational” control of action and speech.’ Block labels consciousness in this sense ‘access consciousness.’
Block insists that we should distinguish phenomenal consciousness from access consciousness, and he argues that a mental representation's being poised for use in reasoning and rational control of action is neither a necessary nor a sufficient condition for the state's being phenomenally conscious. Similarly, he distinguishes phenomenal consciousness from what he calls ‘reflexive consciousness’-where this has to do with one's capacity to represent one's own mind to oneself-to have, for example, thoughts about one's own thoughts, feelings, or desires. Such a conception of consciousness finds some support in a tendency to say that conscious states of mind are those one is ‘conscious of’ or ‘aware of’ being in, and to interpret this ‘of’ to indicate that some kind of reflexivity is involved-wherein one represents one's own mental representations. On one prominent variant of this conception, consciousness is taken to be a kind of scanning or perceiving of one's own psychological states or processes-an ‘inner sense.’
Block's threefold division into phenomenal, access, and reflexive consciousness need not be taken to reflect clear and coherent distinctions already contained in our pre-theoretical use of the term ‘conscious.’ Block seems to think, on the contrary, that our initial, ordinary use of ‘conscious’ is too confused even to count as ambiguous. Thus in articulating an interpretation, or set of interpretations, of the term adequate to frame theoretical issues, we cannot simply describe how it is currently employed-we must assign it a more definite and coherent meaning than is extant in common usage.
Whether or not this is correct, getting solid ground here is not easy, and a number of theorists of consciousness would balk at proceeding on the basis of Block's proposed threefold distinction. Sometimes the difficulty may be merely terminological. John Searle, for example, would recognize phenomenal consciousness, but deny that Block's other two candidates are proper senses of ‘conscious’ at all. The reality of some sort of access and reflexivity is apparently not at issue-just whether either captures a sense of ‘conscious’ (perhaps confusedly) woven into our use of the term. However, in contrast to both Block and Searle, there are also those who doubt that there is a properly phenomenal sense, distinct from both of the other two, for us to pick out with any term. This is not just a dispute about words, but about what there is for us to talk about with them.
The substantive issues here are very much bound up with differences over the proper way to conceive of the relationship between consciousness and intentionality. If there are distinct senses in which states of mind could be correctly said to be ‘conscious’ (answering perhaps to something like Block's threefold distinction), then there will be distinct questions we can pose about the relation between consciousness and intentionality. But if one of Block's alleged senses is somehow fatally confused, or if he is wrong to distinguish it from the others, or if it is the sense of no term we can with warrant apply to ourselves or our states, then there will be no separate question in which it figures that we should try to answer. Thus, trying to work out a reasoned view about what we are (or should be) talking about when we talk about consciousness is an unavoidable and non-trivial part of trying to understand the relation between consciousness and intentionality.
To clarify further the disputes about consciousness and their links to questions about its relation to intentionality, we need to get an initial grasp of the relevant way the terms ‘intentionality’ and ‘intentional’ are used in philosophy of mind.
We have already had some indication of why it is difficult to get a theory of consciousness started. While the term ‘conscious’ is not esoteric, its use is not easily characterized or rendered consistent in a manner that provides an uncontentious framework for theoretical discussion. Where the term ‘intentional’ is concerned, we also face initially confusing and contentious usage. But here the difficulty lies partly in the fact that the relevant use of cognate terms is simply not that found in common speech (as when we speak of doing something ‘intentionally’). Though ‘intentionality,’ in the sense here at issue, does seem to attach to some real and fundamental (maybe even defining) aspect of mental phenomena, the relevant use of the term is tangled up with some rather involved philosophical history.
One way of explaining what is meant by ‘intentionality’ in the (more obscure) philosophical sense is this: it is that aspect of mental states or events that consists in their being of or about things, as pertains to the questions ‘What are you thinking of?’ and ‘What are you thinking about?’ Intentionality is the aboutness or directedness of mind (or states of mind) to things: objects, states of affairs, events. So if you are thinking about San Francisco, or about the increased cost of living there, or about your meeting someone there at Union Square-your mind, your thinking, is directed toward San Francisco, or the increased cost of living, or the meeting in Union Square. To think at all is to think of or about something in this sense. This ‘directedness’ conception of intentionality plays a prominent role in the influential philosophical writings of Franz Brentano and those whose views developed in response to his.
But what kind of ‘aboutness’ or ‘of-ness’ or ‘directedness’ is this, and to what sorts of things does it apply? How do the relevant ‘intentionality-marking’ senses of these words (‘about,’ ‘of,’ ‘directed’) differ from: the sense in which the cat is wandering ‘about’ the room; the sense in which someone is a person ‘of’ high integrity; the sense in which the river's course is ‘directed’ toward the fields?
It has been said that the peculiarity of this kind of directedness/aboutness/of-ness lies in its capacity to relate thought or experience to objects that (unlike San Francisco) do not exist. One can think about a meeting that has not occurred, or will never occur; one can think of Shangri La, or El Dorado, or the New Jerusalem, as one may think of their shining streets, of their total lack of poverty, or of their citizens' peculiar garb. Thoughts, unlike roads, can lead to a city that is not there.
But to talk in this way only invites new perplexities. Is this to say (with apparent incoherence) that there are cities that do not exist? And what does it mean to say that, when a state of mind is in fact directed toward something that does exist, that state nevertheless could be directed toward something that does not exist? It can well seem to be something very fundamental to the nature of mind that our thoughts, or states of mind more generally, can be of or about things or ‘point beyond themselves.’ But a coherent and satisfactory theoretical grasp of this phenomenon of ‘mental pointing’ in all its generality is difficult to achieve.
Another way of trying to get a grip on the topic asks us to note that the potential for a mental directedness toward the non-existent is evidently closely associated with the mind's potential for falsehood, error, inaccuracy, illusion, hallucination, and dissatisfaction. What makes it possible to believe (or even just suppose) something about Shangri La is that one can falsely believe (or suppose) that something exists. In the case of perception, what makes it possible to seem to see or hear what is not there is that one's experience may in various ways be inaccurate, nonveridical, subject to illusion, or hallucinatory. And what makes it possible for one's desires and intentions to be directed toward what does not and will never exist is that one's desires and intentions can be unfulfilled or unsatisfied. This suggests another strategy for getting a theoretical hold on intentionality: employ a notion of satisfaction, stretched to encompass susceptibility to each of these modes of assessment-each of these ways in which something can either go right or go wrong (true/false, veridical/nonveridical, fulfilled/unfulfilled)-and speak of intentionality in terms of having ‘conditions of satisfaction.’ On John Searle's (1983) conception, intentional states are those having conditions of satisfaction. What are conditions of satisfaction? In the case of belief, these are the conditions under which the belief is true; in the case of perception, they are the conditions under which sense-experience is veridical; in the case of intention, the conditions under which an intention is fulfilled or carried out.
However, while the conditions of satisfaction approach to the notion of intentionality may furnish an alternative to introducing this notion by talking of ‘directedness to objects,’ it is not clear that it can get us around the problems posed by the ‘directedness’ talk. For instance, what are we to say where thoughts are expressed using names of nonexistent deities or fictional characters? Will we do away with a troublesome directedness to the nonexistent by saying that the thoughts that Zeus is Poseidon's brother, and that Hamlet is a prince, are just false? This is problematic. Moreover, how will we state the conditions of satisfaction of such thoughts? Will this not also involve an apparent reference to the nonexistent?
A third important way of conceiving of intentionality, one particularly central to the analytic tradition derived from the study of Frege and Russell, asks us to concentrate on the notion of mental (or intentional) content. Often it is assumed that to have intentionality is to have content. And frequently mental content is otherwise described as representational or informational content-and ‘intentionality’ (at least as this applies to the mind) is seen as just another word for what is called ‘mental representation,’ or a certain way of bearing or carrying information.
But what is meant by ‘content’ here? As a start we may note: the content of a thought, in this sense, is what is reported, when answering the question ‘What does she think?’, by something of the form ‘She thinks that p.’ And the content of a thought is what two people are said to share when they are said to think the same thought. (Similarly, the contents of beliefs are what two persons share when they hold the same belief.) Content is also what may be shared in this way even while the ‘psychological modes’ of states of mind differ. For example: believing that I will soon be bald and fearing that I will soon be bald differ in mode, but share the content that I will soon be bald.
Also, commonly, content is taken as not only that which is shared in the ways illustrated, but that which differs in a way revealed by considering certain logical features of sentences we use to talk about states of mind. Notably: the constituents of the sentence that fills in for ‘p’ when we say ‘x thinks that p’ or ‘x believes that p’ are often interpreted in such a way that they display ‘failures of substitutivity’ of (ordinarily) co-referential or co-extensional expressions, and this appears to reflect differences in mental content. For example: if George W. Bush is the eldest son of the vice-president under Ronald Reagan, and George W. Bush is the current US President, then it can be validly inferred that the eldest son of Reagan's vice-president is the current US President. However, we cannot always make the same sort of substitutions of terms when we use them to report what someone believes. From the fact that you believe that George W. Bush is the current US President, we cannot validly infer that you believe that the eldest son of Reagan's vice-president is the current US President. That last may still be false, even if George W. Bush is indeed the eldest son. These logical features of the sentences ‘x believes that George W. Bush is the current US President’ and ‘x believes that George W. Bush is the eldest son of Reagan's vice-president’ seem to reflect the fact that the beliefs reported by their use have different contents: these sentences are used to state what is believed (the belief content), and what is believed in each case is not the same. Someone's belief may have the one content without having the other.
Similar observations can be made for other intentional states and the reports made of them-especially when these reports contain an object clause beginning with ‘that’ and followed by a complete sentence (e.g., she thinks that p; he intends that p; she hopes that p; he fears that p; she sees that p). Sometimes it is said that the content of the state is ‘given’ by such a ‘that p’ clause when ‘p’ is replaced by a sentence-the so-called ‘content clause.’
This ‘possession of content’ conception of intentionality may be coordinated with the ‘conditions of satisfaction’ conception roughly as follows. If states of mind contrast in respect of their satisfaction (say, one is true and the other false), they differ in content. (One and the same belief content cannot be both true and false-at least not in the same context at the same time.) And if one says what the intentional content of a state of mind is, one says much or perhaps all of what conditions must be met if it is to be satisfied-what its conditions of truth, or veridicality, or fulfilment, are. But one should be alert to how the notion of content employed in a given philosopher's work is heavily shaped by that philosopher's views. One should note how commonly it is held that the ordinary notion of representational content is ambiguous or in need of refinement. (Consider, for example, Jerry Fodor's defence of a distinction between ‘narrow’ and ‘wide’ content, Edward Zalta's distinction between ‘cognitive’ and ‘objective’ content (1988), and John Perry's distinction between ‘reflexive’ and ‘subject-matter’ content.)
It is arguable that each of these gates of entry into the topic of intentionality (directedness, conditions of satisfaction, and mental content) opens onto a unitary phenomenon. But evidently there is also considerable fragmentation in the conceptions of both consciousness and intentionality that are in the field. To get a better grasp of some of the ways the relationship between consciousness and intentionality can be viewed, without begging questions or trying to present a positive theory on the topic, it is useful to take a look at the recent history of thinking about intentionality, in a way that will bring several issues about its relationship with consciousness to the fore. Together with the preceding discussion, this should provide the background necessary for examining some of the differences that divide those who theorize about consciousness-differences intimately involved with views of the consciousness-intentionality relation.
If we are to acknowledge the extent to which the notion of intentionality is the creature of philosophical history, we have to come to terms with the divide in twentieth-century western philosophy between the so-called ‘analytic’ and ‘continental’ philosophical traditions. Both have been significantly concerned with intentionality. But differences in approach, vocabulary, and background assumptions have made dialogue between them difficult. It is almost inevitable, in a brief exposition, that we give largely independent summaries of the two. We will start with the ‘continental’ side of the story-more specifically, with the phenomenological movement in continental philosophy. However, while these traditions have developed without a great deal of intercommunication, they do have common sources, and have come to focus on issues concerning the relationship of consciousness and intentionality that are recognizably similar.
A thorough look at the historical roots of controversies over consciousness and intentionality would take us farther into the past than it is feasible to go in this article. A relatively recent, convenient starting point would be in the philosophy of Franz Brentano. He more than any other single thinker is responsible for keeping the term ‘intentional’ alive in philosophical discussions of the last century or so, with something like its current use, and was much concerned to understand its relationship with consciousness. However, it is worth noting that Brentano himself was very aware of the deep historical background to his notion of intentionality: He looked back through scholastic discussions (crucial to the development of Descartes' immensely influential theory of ideas), and ultimately to Aristotle for his theme of intentionality. One may go further back, to Plato's discussion (in the Sophist, and the Theaetetus) of difficulties in making sense of false belief, and yet further still, to the dawn of Western Philosophy, and Parmenides' attempt to draw momentous consequences from his alleged finding that it is not possible to think or speak of what is not.
In Brentano's treatment, what seems crucial to intentionality is the mind's capacity to ‘refer’ or be ‘directed’ to objects existing solely in the mind-what he called ‘mental or intentional inexistence.’ It is subject to interpretation just what Brentano meant by speaking of an object existing only in the mind and not outside of it, and what he meant by saying that such ‘immanent’ objects of thought are not ‘real.’ He complained that critics had misunderstood him here, and appears to have revised his position significantly as his thought developed. But it is clear at least that his conception of intentionality is dominated by the first strand in thought about intentionality mentioned above-intentionality as ‘directedness toward an object’-with whatever difficulties that brings with it.
Brentano's conception of the relation between consciousness and intentionality can be brought out partly by noting that he held that every conscious mental phenomenon is both directed toward an object, and always (if only ‘secondarily’) directed toward itself. (That is, it includes a ‘presentation’-an ‘inner perception’-of itself.) Since Brentano also denied the existence of unconscious mental phenomena, this amounts to the view that all mental phenomena are, in a sense, ‘self-presentational.’
His lectures in the late nineteenth century attracted a diverse group of central European intellectuals (including that great promoter of the unconscious, Sigmund Freud) and the problems raised by Brentano's views were taken up by a number of prominent philosophers of the era, including Edmund Husserl, Alexius Meinong, and Kasimir Twardowski. Of these, it was Husserl's treatment of the Brentanian theme of intentionality that was to have the widest philosophical influence on the European Continent in the twentieth century-both by means of its transformation in the hands of other prominent thinkers who worked under the aegis of ‘phenomenology’-such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty-and through its rejection by those embracing the ‘deconstructionism’ of Jacques Derrida.
In responding to Brentano, Husserl also adopted his concern with properly understanding the way in which thought and experience are “directed toward objects.” Husserl criticized Brentano's doctrine of ‘inner perception,’ and did not deny (even if he did not affirm) the reality of unconscious mentation. But Husserl retained Brentano's primary focus on describing conscious ‘mental acts.’ He also believed that knowledge of one's own mental acts rests on an ‘intuitive’ apprehension of their instances, and held that one is, in some sense, conscious of each of one's conscious experiences (though he denied this meant that every conscious experience is an object of an intentional act). Evidently Husserl wished to deny that all conscious acts are objects of inner perception, while also affirming that some kind of reflexivity-one that is, however, neither judgment-like nor sense-like-is essentially built into every conscious act. But the details of the view are not easy to make out. (A similar (and similarly elusive) view was expressed by Jean-Paul Sartre in the doctrine that “all consciousness is a non-positional consciousness of itself.”)
One of Husserl's principal points of departure in his early treatment of intentionality (in the Logical Investigations) was his criticism of (what he took to be) Brentano's notion of the ‘mental inexistence’ of the objects of thought and perception. Husserl thought it a fundamental error to suppose that the object (the ‘intentional object’) of a thought, judgment, desire, etc. is always an object ‘in’ (or ‘immanent to’) the mind of the thinker, judger, or desirer. The objects of one's ‘mental acts’ of thinking, judging, etc. are often objects that ‘transcend,’ and exist independently of these acts (states of mind) that are directed toward them (that ‘intend’ them, in Husserl's terms). This is particularly striking, Husserl thought, if we focus on the intentionality of sense perception. The object of my visual experience is not something ‘in my mind,’ whose existence depends on the experience-but something that goes beyond or ‘transcends’ any (necessarily perspectival) experience I may have of it. This view is phenomenologically based, for (Husserl says), the object is experienced as perspectivally given, hence as ‘transcendent’ in this sense.
In cases of hallucination, we should say, on Husserl's view, not that there is an object existing ‘in one's mind,’ but that the object intended does not exist at all. This does not do away with the ‘directedness’ of the experience, for that is properly understood (according to the Logical Investigations) as its having a certain ‘matter’-where the matter of a mental act is what may be common to different acts, as when, for example, one believes that it will not rain tomorrow, and hopes that it will not rain tomorrow. The difference between the mental acts illustrated (between hoping and believing) Husserl would term a difference in their ‘quality.’ Husserl was later to re-interpret his notions of act-matter and quality as components of what he called (in Ideas, 1913) the ‘noema’ or ‘noematic structure’ that can be common to distinct particular acts. So intentional directedness is understood not as a relation to special (mental) objects toward which one is directed, but rather as the possession by mental acts of matter/quality (or later, ‘noematic’) structure.
This unites Husserl's discussion with the ‘content’ conception of intentionality described above: he himself would accept that the matter of an act (later, its ‘noematic sense’) is the same as the content of judgment, belief, desire, etc., in one sense of that ambiguous term. However, it is not fully clear how Husserl would view the relationship between act-matter and noematic sense quite generally and those semantic correlates of ordinary language sentences that some would identify as the contents of the states of mind reported in them. This is a difficulty partly because of his later emphasis (e.g., in Experience and Judgment) on the importance of what he called ‘pre-predicative’ experience. He believed that the sort of judgments we express in ordinary and scientific languages are ‘founded on’ the intentionality of pre-predicative experience, and that it is a central task of philosophy to clarify the way in which such experience of our surroundings and our own bodies underlies judgment, and the capacity it affords us to construct an ‘objective’ conception of the world. Pre-predicative experiences are, paradigmatically, sense experiences as they are given to us, independently of any active judging or predication. But did Husserl hold that what makes such experience pre-predicative is that it altogether lacks the content that is expressed linguistically in predicative judgment, or did he think that such judgment merely renders explicit a predicative content that even ‘pre-predicative’ experience already (implicitly) has? Just what does the ‘pre-’ in ‘pre-predicative’ involve?
Perhaps this is not clear. In any case, the theme of a type of intentionality more fundamental than that involved in predicative judgments that ‘posit’ objects, and to be found in everyday experience of our surroundings, was taken up, in different ways, by the later phenomenologists Heidegger and Merleau-Ponty. The former describes a type of ‘directed’ ‘comportment’ toward beings in which they ‘show themselves’ as ‘ready-to-hand.’ Heidegger thinks this characterizes our ordinary practical involvement with our surroundings, and regards it as distinct from, and somehow providing a basis for, entities' showing themselves to us as ‘present-at-hand’ (or ‘occurrent’)-as they do when we take a less context-bound, more theoretical stance toward the world. Later, Merleau-Ponty (1945/1962), influenced by his study of Gestalt psychology and of neurological case studies describing pathologies of perception and action, held that normal perception involves a consciousness of place tied essentially to one's capacities for exploratory and goal-directed movement, which is indeterminate relative to attempts to express or characterize it in terms of ‘objective’ representations-though it makes such an objective conception of the world possible.
Whether or not Heidegger's and Merleau-Ponty's moves in these directions actually contradict Husserl, they clearly go beyond what he says. Another basic, exegetically complex, apparent difference between Husserl and the two later philosophers, pertinent to the relationship of consciousness and intentionality, lies in the dispute over Husserl's proposed ‘Phenomenological reduction.’ Husserl claimed it is possible (and, indeed, essential to the practice of phenomenology) that one conduct an investigation into the structure of consciousness that carefully abstains from affirming the existence of anything in spatio-temporal reality. By this ‘bracketing’ of the natural world-by reducing the scope of one's assertions first to the subjective sphere of consciousness, then to its abstract (or ‘ideal’) atemporal structure-one is able to apprehend what consciousness and its various forms essentially are, in a way that supplies a foundation to the philosophical study of knowledge, meaning and value. Both Heidegger and Merleau-Ponty (along with a number of Husserl's other students) appear to have questioned whether it is possible to reduce one's commitments as thoroughly as Husserl appears to have prescribed through a ‘mass abstention’ from judgment about the world, and thus whether it is correct to regard one's intentional experience as a whole as essentially detachable from the world at which it is directed. Seemingly crucial to their doubts about Husserl's reduction is their belief that an essential part of intentionality consists in a distinctively practical involvement with the world that cannot be broken by any mere abstention from judgment.
The Phenomenological themes just hinted at (the notion of a ‘pre-predicative’ type of intentionality; the (un)detachability of intentionality from the world) link with issues regarding consciousness and intentionality as these are understood outside the Phenomenological tradition-in particular, the notion of non-conceptual content, and the internalism/externalism debate, to be considered in Section (4). But it is by no means a straightforward matter to describe these links in detail. Part of the reason lies in the general difficulty in being clear about whether what one philosopher means by ‘consciousness’ (or its standard translations) is close enough to what another means for it to be correct to see them as speaking to the same issues. And while some of the Phenomenological philosophers (Brentano, Husserl, Sartre) make thematically central use of terms cognate with ‘consciousness’ and ‘intentionality,’ and consider questions about intentionality first and foremost as questions about the intentionality of consciousness, they do not explicitly address much that (in the latter half of the twentieth century) came to seem problematic about consciousness and intentionality. Is their ‘consciousness’ the phenomenal kind? Would they reject theories of consciousness that reduce it to a species of access to content? If so, on what grounds? (Given their interest in the relation of consciousness, inner perception, and reflection, it may be easier to discern what their stances on reductive ‘higher order representation’ theories of consciousness would be.)
In some ways the situation is more difficult still in the cases of Merleau-Ponty and Heidegger. For the former, though he willingly enough uses words standardly translated as ‘consciousness’ and ‘intentionality,’ says little to explain how he understands such terms generally. And the latter deliberately avoids these terms in his central work, Being and Time, in order to forge a philosophical vocabulary free of errors in which they had, he thought, become enmeshed. However, it is not obvious how to articulate the precise difference between what Heidegger rejects, in rejecting the allegedly error-laden understanding of ‘consciousness’ and ‘intentionality’ (or their German translations), and what he accepts when he speaks of beings ‘showing’ or ‘disclosing’ themselves to us, and of our ‘comportment’ directed toward them.
Nevertheless, one can plausibly read Brentano's notion of ‘presentation’ as equivalent to the notion of phenomenally conscious experience, as this is understood in other writers. For Brentano says, ‘We speak of presentation whenever something appears to us.’ And one may take ways of appearing as equivalent to ways of seeming, in the sense proper to phenomenal consciousness. Further, Brentano's attempt to state, through what he described as ‘descriptive or Phenomenological psychology,’ how intentional phenomena present themselves, the fundamental kinds to which they belong, and their necessary interrelationships, may plausibly be interpreted as an effort to articulate the philosophically salient, highly general phenomenal character of intentional states (or acts) of mind. And Husserl's attempts to delineate the structure of intentionality as it is ‘given’ in consciousness, as well as the Phenomenological writings of Sartre, can arguably be seen as devoted to laying bare to thought the deepest and most general characteristics of phenomenal consciousness, as they are found in ‘directed’ perception, judgment, imagination, emotion and action. Also, one might reasonably regard Heideggerean disclosure of the ready-to-hand and Merleau-Ponty's ‘motor-intentional’ consciousness of place as forms of phenomenally conscious experience-as long as one's conception of phenomenal consciousness is not tied to the notion that the subjective ‘sphere’ of consciousness is, in essence, independent of the world revealed through it.
In any event, to connect classic Phenomenological writings with current discussions of consciousness and its relation to intentionality, more background is needed on aspects of the other main current of Western philosophy in the past century particularly relevant to the topic of intentionality-broadly labelled ‘analytic’.
It seems fair to say that recent work in philosophy of mind in the analytic tradition that has focussed on questions about the nature of intentionality (or ‘mental content’) has been shaped not so much by the writings of Brentano, Husserl and their direct intellectual descendants as by the seminal discussions of logico-linguistic concerns found in Gottlob Frege's (1892) “On Sense and Reference” and Bertrand Russell's “On Denoting” (1905).
But Frege's and Russell's work comes from much the same era, and from much the same intellectual environment, as Brentano's and the early Husserl's. And fairly clear points of contact have long been recognized. One is Russell's criticism of Meinong's ‘theory of objects,’ which derived from the problem of intentionality and led Meinong to countenance objects, such as the golden mountain, that are capable of being objects of thought although they do not exist. This doctrine was one of the principal targets of Russell's theory of definite descriptions. However, it came as part of a complex and interesting package of concepts in the theory of meaning, and scholars are not united in supposing that Russell was fair to it.
Another point of contact is the similarity between Husserl's meaning/object distinction (in Logical Investigation I) and Frege's (prior) sense/reference distinction. Indeed the case has been influentially made (by Follesdal 1969, 1990) that Husserl's ‘meaning/object’ distinction is borrowed from Frege (though with a change in terminology) and that Husserl's ‘noema’ is properly interpreted as having the characteristics of Fregean ‘sense.’
Nonetheless, a number of factors make comparison and integration of debates within the two traditions complicated and difficult. Husserl's notion of the noema (hence his notion of intentionality) is most fundamentally rooted, not in reflections on the logical features of language, but in a contrast between the object of an intentional act and the object ‘as intended’ (the way in which it is intended), and in the idea that a structure would remain to perceptual experience even if it were radically non-veridical. And what Husserl seeks is a ‘direct’ characterization of this (and other) kinds of experience from the point of view of the experiencer. On the other hand, Frege's and Russell's writings bearing on the topic of intentionality concentrate mainly and most explicitly on issues that grow from their own pioneering achievements in logic, and have given rise to ways of understanding mental states primarily through questions about the logic and semantics of the language used to speak of them.
Broadly speaking, logico-linguistic concerns have been methodologically and thematically dominant in the analytic Frege-Russell tradition, while the Phenomenological Brentano-Husserl lineage is rooted in attempts to characterize experience as it is evident from the subject's point of view. For this reason perhaps, discussions of consciousness and intentionality are more obviously intertwined from the start in the Phenomenological tradition than in the analytic one. The following sketch of relevant background in the latter case will, accordingly, most directly concern the treatment of intentionality. But by the end, the bearing of this on the treatment of consciousness in analytic philosophy of mind will have become more evident, and it will be clearer how similar issues concerning the consciousness-intentionality relationship arise in each tradition.
Central to Frege's legacy for discussions of mental or intentional content has been his distinction between ‘sense’ (Sinn) and ‘reference’ (Bedeutung), and his application of this distinction to cope with an apparent failure of substitutivity of ordinarily co-referential expressions in contexts created by psychological verbs-the sort of contexts mentioned above in expounding the notion of mental content-a task important to his development of logic. The need for a distinction between the sense and reference of an expression became evident to Frege when he considered that, even if a is identical to b, and you understand both ‘a’ and ‘b,’ still, it can be for you a discovery, an addition to your knowledge, that a = b. This is intelligible, Frege thought, only if you have different ways of understanding the expressions ‘a’ and ‘b’-only if they involve for you distinct ‘modes of presentation’ of the self-same object to which they refer. In Frege's celebrated example: you may understand the expressions ‘The Morning Star’ and ‘The Evening Star’ and use them to refer to what is in fact one and the same object-the planet Venus. But this is not sufficient for you to know that the Morning Star is identical with the Evening Star. For the ways in which the object (‘the reference’) is ‘given’ to your mind when you employ these expressions (the senses or Sinne you ‘grasp’ when you use them) may differ in such a manner that ignorance of astronomy would prevent your realizing that they are but two ways in which the same object can be given.
The relevance of all this to intentionality becomes clearer once we see how Frege applied the sense/reference distinction to whole sentences. The sentence ‘The Evening Star = The Morning Star’ has a different sense from the sentence ‘The Evening Star = The Evening Star,’ even though their reference (according to Frege, their truth value) is the same. The failure of substitutivity of co-referential expressions in ‘that p’ contexts created by psychological verbs can consequently be understood (Frege proposed) in this way: the reference of the terms shifts in these contexts, so that, for example, ‘the Evening Star’ no longer refers to its customary reference (the planet Venus), but to a sense that functions, for the subject of the verb (the person who thinks, judges, desires), as his or her mode of presentation of this object. The sentence occurring in this context no longer refers to its truth value, but to the sense in which the mode of presentation is embedded-which might otherwise be called the ‘thought,’ or, by other philosophers, the ‘content’ of the subject's state of mind. This thought or content is to be understood not as a mental image, nor literally as anything essentially private to the mind thinking it, but as one and the same abstract entity that can be ‘grasped’ by two minds, and that must be so grasped if communication is to occur.
While on the surface this story may appear to be only about logic and semantics, and though Frege did not himself elaborate a general account of intentionality, what he says readily suggests the following picture. Intentional states of mind-thinking about Venus, wishing to visit it-involve some special relation (such as ‘mental grasping’), not to anything ‘in one's mind,’ nor to any imagery, but to an abstract entity, a thought, which also constitutes the sense of a linguistic expression that can be used to report one's state of mind, a sense that is grasped or understood by speakers who use it.
This style of account, together with the Fregean thesis that ‘sense determines reference,’ and the history of criticisms both have elicited, forms much of the background of contemporary discussions of mental content. It is often assumed, with Frege, that we must recognize (as some thinkers in the empiricist tradition allegedly did not) that thoughts or contents cannot consist in images or essentially private ‘ideas.’ But philosophers have frequently criticized Frege's view of thought as some abstract entity ‘grasped’ by or ‘present to’ the mind, and have wanted to replace Frege's unanalyzed ‘grasping’ with something more ‘naturalistic.’
Relatedly, it may be granted that the content of the thought reported is to be identified with the sense of the expression with which we report it. But then, it is argued, the identity of this content will not be determined individualistically, and may, in some respects, lie beyond the grasp of (or not be fully ‘present to’ the mind of) the psychological subject. For what determines the reference of an expression may be a natural causal relation to the world-as has been influentially argued is true for proper names, like ‘Nixon’ and ‘Cicero,’ and ‘natural kind’ terms like ‘gold’ and ‘water.’ Or (as Tyler Burge (1979) has influentially argued) two speakers who, considered as individuals, are qualitatively the same may nevertheless each assert something different simply because of differing relations they bear to their respective linguistic communities. (For example, what one speaker's utterance of ‘arthritis’ means is determined not by what is ‘in the head’ of that speaker, but by the medical experts in his or her community.) And if the reference and truth conditions of the expressions by which one's thought is reported or expressed are not determined by what is in one's head, and the content of one's thought determines their reference and truth conditions, then the content of one's thought is also not determined individualistically. Rather, it is necessarily bound up with one's causal relations to certain natural substances, and one's membership in a certain linguistic community. Both linguistic meaning and mental content are ‘externally’ determined.
The development of such ‘externalist’ conceptions of intentionality informs the reception of Russell's legacy in contemporary philosophy of mind as well. Russell also helped to put in play a conception of the intentionality of mental states according to which each such state is seen as involving the individual's ‘acquaintance with a proposition’ (counterpart to Fregean ‘grasping’)-which proposition is at once both what is understood in understanding expressions by which the state of mind is reported, and the content of the individual's state of mind. Thus, intentional states are ‘propositional attitudes.’ Also importantly, Russell's famous analysis of definite descriptions into phrases employing existential quantifiers and general predicates underlay many subsequent philosophers' rejection of any conception of intentionality (like Meinong's) that sees in it a relation to non-existent objects. And Russell's treatment drew attention to cases of what he called ‘logically proper names’ that apparently defy such analysis in descriptive terms (paradigmatically, the terms ‘this’ and ‘that’), and which (he thought) thus must refer ‘directly’ to objects. Reflection on such ‘demonstrative’ and ‘indexical’ (e.g., ‘I,’ ‘here,’ ‘now’) reference has led some to maintain that the content of our states of mind cannot always be constituted by Fregean senses, but must be seen as consisting partly of the very objects in the world outside our heads to which we refer, demonstratively or indexically-another source of support for an ‘externalist’ view of mental content, hence of intentionality.
Yet another important source of externalist proclivities in twentieth-century philosophy lies in the thought that the meaningfulness of a speaker's utterances depends on their potential intelligibility to hearers: language must be public-an idea that has found varying and influential expression in the work of Ludwig Wittgenstein, W.V.O. Quine, and Donald Davidson. This, coupled with the assumption that intentionality (or ‘thought’ in the broad (Cartesian) sense) must be expressible in language, has led some to conclude that what determines the content of one's mind must lie in the external conditions that enable others to attribute content.
However, the movement from Frege and Russell toward externalist views of intentionality should not simply be accepted as yielding a fund of established results: it has been subject to powerful and detailed challenges. But without plunging into the details of the internalism/externalism debate about mental content, we can recognize, in the issues just raised, certain themes bearing particularly on the connection between consciousness and intentionality.
For example: it is sometimes assumed that, whatever may be true of content or intentionality, the phenomenal character of one's experience, at least, is ‘fixed internally’-i.e., it involves no necessary relations to the nature of particular substances in one's external environment or to one's linguistic community. But then the purported externalist finding that neither meanings nor contents are ‘in the head’ can, of course, be read as showing the insufficiency of phenomenal consciousness to determine any intentionality or content. Something like this consequence is drawn by Putnam (1981), who takes the stream of consciousness to comprise nothing more than sensations and images, which (as Frege saw) should be sharply distinguished from thought and meaning. This interpretation of the import of externalist arguments may be reinforced by a tendency to tie (phenomenal) consciousness to non-intentional sensations, sensory qualities, or ‘raw feels,’ and hence to dissociate consciousness from intentionality (and allied notions of meaning and reference)-a tendency that has been prominent in the analytic tradition.
But it is not at all evident that externalist theories of content require us to estrange consciousness from intentionality. One might argue (as do Martin Davies (1997) and Fred Dretske (1997)) that in certain relevant respects the phenomenal character of experience is also essentially determined by causal environmental connections. By contrast, one may argue (as do Ludwig (1996b) and Horgan and Tienson (2002)) that since it is conceivable that a subject has experience much like our own in phenomenal character, but radically different in external causes from what we take our own to be (in the extreme case, a mind bewitched by a Cartesian demon into massive hallucination), there must indeed be a realm of mental content that is not externally determined.
One other aspect of the Frege-Russell tradition of theorizing about content that impinges on the consciousness/intentionality connection is this. If ‘content’ is identified with the sense, or with whatever determines the truth conditions, of the expressions used in the object-clauses reporting intentional states of mind, it will seem natural to suppose that possession of mental content requires the possession of conceptual capacities of the sort involved in linguistic understanding-‘grasping senses.’ But then, to the extent that the phenomenal character of experience is inadequate to endow a creature with such capacities, it may seem that phenomenal consciousness has little to do with intentionality.
However, this raises large issues. One is this: it should not be granted without question that the phenomenal character of our experience could be as it is in the absence of the sorts of conceptual capacities sufficient for (at least some types of) intentionality. And this is tied to the issue of whether or not the phenomenal character of experience is (as some suppose) a purely sensory affair. Some would maintain, on the contrary, that thought (not just imagistic, but conceptual thought) has phenomenal character too. If so, then it is very far from clear that phenomenal character can be divorced from whatever conceptual capacities are necessary for intentionality.
Moreover, we may ask: Are concepts, properly speaking, always necessary for intentionality anyway? Here another issue rears its head: is there not perhaps a form of sensory intentionality, which does not require anything as distinctively intellectual or conceptual as is needed for the grasping of linguistic senses or propositions? (This presumably would be a kind of intentionality had by the pre-linguistic (e.g., babies) or by non-linguistic creatures (e.g., dogs).) Suppose that there is, and that this type of intentionality is inseparable from the phenomenal character of perceptual experience. Then, even if one assumes that such phenomenal consciousness is insufficient to guarantee the possession of concepts, it would be wrong to say that it has little to do with intentionality. (Advocates of varying versions of the idea that there is a distinctively ‘non-conceptual’ kind of content include Bermudez 1998, Crane 1992, Evans 1982, Peacocke 1992, and Tye 1995-for a notable voice of opposition to this trend, see McDowell 1994.) A deep difficulty in assessing these debates lies in getting an acceptable conception of concepts with which to work. We need to understand clearly what ‘having a concept of F’ does and does not require, before we can be clear about the content of and justification for the thesis of non-conceptual content.
These proposals about non-conceptual content bear some affinity with aspects of the Phenomenological tradition alluded to earlier: Husserl's notion of ‘pre-predicative’ experience; Heidegger's notion of the ‘ready-to-hand’; and Merleau-Ponty's idea that in normal active perception we are conscious of place, not via a determinate ‘representation’ of it, but rather relative to our capacities for goal-directed bodily behaviour. Though to see the extent to which any of these is ‘non-conceptual’ in character would require not only more clarity about the conceptual/non-conceptual contrast, but also considerable exegesis of these philosophers' works.
Also, one may plausibly try to find an affinity between externalist views in analytic philosophy and the later phenomenologists' rejection of Husserl's reduction, based on their doubt that we can prise consciousness off from the world at which it is directed and study its ‘intentional essence’ in solipsistic isolation. But even if externalism can be defined broadly enough to encompass Heidegger, Merleau-Ponty, Kripke, and Burge, the comparison is strained when we take account of the different sources of ‘externalism’ in the phenomenologists. These have to do, it seems (very roughly), with the idea that the way we are conscious of things (or at least, for Heidegger, the way they ‘show themselves’ to us) in our everyday activity cannot be quite generally separated from our actual engagement with the entities of which we are thus conscious (which show themselves in this way). Also relevant is the idea that one's use of language (hence one's capacity for thought) requires gearing one's activity to a social world or cultural tradition, in which antecedently employed linguistic meaning is taken up and made one's own through one's relations with others. All this is supposed to make it infeasible to study the nature of intentionality by globally uprooting, in thought, the connection of experience with one's spatial surroundings (and-crucially for Merleau-Ponty-one's own body), and one's social environment. Whatever the merits of this line of thought, we should note: neither a causal connection with ‘natural kinds’ unmediated by reference-determining ‘modes of presentation,’ nor deference to the linguistic usage of specialists, nor belief in the need to reconstruct speakers' meaning from observed behaviour, plays a role in the phenomenologists' doubts about the reduction.
The arduous exegesis required for a clearer and more detailed comparison of these views is not possible here. Nevertheless, from following some of the main lines of thought in treatments of intentionality, descending on the one hand primarily from Brentano and Husserl, and on the other from Frege and Russell, certain fundamental issues concerning its relationship to consciousness have emerged. These include, first, the connection between consciousness and self-directed, self-reflexive intentionality. (It has already been seen that this topic preoccupied Brentano, Husserl and Sartre; its emergence as an important issue in analytic philosophy of mind will become more evident below.) Second, there is concern with the way in which (and the extent to which) mind is world-involving. (In the Phenomenological tradition this can be seen in the controversy over Husserl's Phenomenological reduction; in the analytic tradition it appears in the internalism/externalism debate about mental content.) Third, there is the putative distinction between conceptual or theoretical, and sensory or practical, forms of intentionality. (In phenomenology this shows up in Husserl's contrast between judgment and pre-predicative experience, and in related notions of his successors; in analytic philosophy it shows up in the (more recent) attention to the notion of ‘non-conceptual’ content.)
For more clarity regarding the consciousness-intentionality relationship, and how these three topics figure prominently in views about it, it is necessary now to turn attention back to philosophical disagreements regarding consciousness itself.
Consider the proposal that sense experience manifests a kind of intentionality distinct from and more basic than that involved in propositional thought and conceptual understanding. This might help form the basis for an account of consciousness. Perhaps conscious states of mind are distinguished partly by their possession of a type of content proper to the sensory subdivision of mind.
One source of the idea that a difference in type of content helps constitute a distinction between what is and is not phenomenally conscious lies in the apparent distinction between sense experience and judgment. To have a conscious visual experience of a stimulus-for it to look some way to you-is one thing. To make judgments about it is something else. (This seems evident in the persistence of a visual illusion even once one has become convinced of the error.) However, on some accounts of consciousness, this distinction itself is doubtful, since conscious sense experience is taken to be nothing more than a form of judging. Something like this view is expressed by Daniel Dennett (1991), who takes the relevant form of judging to consist in one's possession of information or mental content available to the appropriate sort of ‘probes’-the availability of content he calls ‘cerebral celebrity.’ For Dennett, what distinguishes conscious states of mind is not their possession of a distinctive type of intentional content, but rather the richness of that content and its availability to the appropriate sort of cognitive operations. (Since the relevant class of operations is not sharply defined, neither, for Dennett, is the difference between which states of mind are conscious and which are not.)
Recent accounts of consciousness that, by contrast, give central place to a distinction between (conceptual) judgment and (non-conceptual, but still intentional) sense-experience include Michael Tye's (1995) theory, which holds that it is (by metaphysical necessity) sufficient for having a conscious sense-perception that some representation of sensory stimuli is formed in one's head, ‘map-like’ in character, whose (‘non-conceptual’) content is ‘poised’ to affect one's (conceptual) beliefs. This form of mental representation Tye would contrast with the ‘sentential’ form proper to belief and judgment-and in that way he might preserve the judgment/experience contrast as Dennett does not. Consider also Fred Dretske's (1995) view, that phenomenally conscious sensory intentionality consists in a kind of mental representation whose content is bestowed through a naturally selected ‘function to indicate.’ Such natural (evolution-implanted) sensory representation can arise independently of learning (unlike the more conceptual, language-dependent sort), and is found widely distributed among evolved creatures.
Both Tye's and Dretske's views of consciousness (unlike Dennett's) make crucial use of a contrast between the type of intentionality proper to sense-experience and that proper to linguistically expressed judgment. On the other hand, there is also some similarity among the theories, which can be brought out by noting a criticism of Dennett's view, analogues of which arise for Tye's and Dretske's views as well.
Some might think Dennett's account concerns only some variety of what Ned Block would call ‘access consciousness.’ For on Dennett's account, it seems, to speak of visual consciousness is to speak of nothing over and above the sort of availability of informational content that is evinced in unprompted verbal discriminations of visual stimuli. And this view has been criticized for neglecting phenomenal consciousness. It seems we may conceive of a capacity for spontaneous judgment triggered by and responsive to visual stimuli, which would occur in the absence of the judger's phenomenally conscious visual experience of the stimuli: the stimuli do not look any way to the subject, and yet they trigger accurate judgments about their presence. The notion of such a (hypothetical) form of ‘blind-sight’ may be elaborated in such a way that we conceive of the judgment it affords as being at least as finely discriminatory (and as rich in informational content) as that enjoyed by those with extremely poor, blurry and un-acute conscious visual experience (as in the ‘legally blind’). But a view like Dennett's seems to make this scenario inconceivable.
However, this kind of criticism does not concern only those theories that would elide any experience/judgment distinction. For Tye's and Dretske's theories, though they depend on forms of that contrast (and are offered as theories of phenomenal consciousness), can raise similar concerns. For one might think that the hypothetical blind-sighter would be as rightly regarded as having, in Tye's sense, ‘poised’ map-like representations in her visual system as would someone with a comparable form of conscious vision. And one might find it unclear why we should think the visual system of such a blind-sighter must be performing naturally endowed indicating functions more poorly than the visual system of a consciously sighted subject would.
Whatever the cogency of these concerns, one should note their distinctness from the issues about ‘kinds of intentionality’ that appear to separate both Tye and Dretske from Dummett. The notion that there is a fundamental distinction to be drawn in kinds of intentional content (separating the more intellectual from the more sensory departments of mind) sometimes forms the basis of an account of consciousness (as with Dretske's and Tye's, though not with Dummett's). But it is also important to recognize what unites Dummett, Tye, and Dretske. Despite their differences, all propose to account for consciousness by starting with a general understanding of intentionality (or mental content or representation) to which consciousness is inessential. Dummett is known for an uncompromising re-evaluation of the Western tradition, viewing writings before the rise of analytic philosophy as flawed by taking epistemology to be fundamental, whereas the correct approach, which gives a foundational place to a concern with language, began only with the work of Frege. Equally, the supposedly pure investigation of language in the 20th century has often kept some dubious epistemological and metaphysical company.
They then offer to explain consciousness as a special case of intentionality thus understood-so, in terms of the operations the content is available for, or the form in which it is represented, or the nature of its external source. The blind-sight-based objection to Dummett, and its possible extension to Dretske and Tye, helps bring this commonality to light. The foregoing showed how some theories purport to account for consciousness on the basis of intentionality, in a way that focuses attention on attempts to discern a distinctively sensory type of intentionality. A different strategy for explaining consciousness via intentionality appeals instead to the connection between consciousness and reflexivity. On such a view (roughly): experiences or states of mind are conscious just insofar as the mind represents itself as having them.
In David Rosenthal's variant of this approach, a state is conscious just when it is a kind of (potentially non-conscious) mental state one has, which one (seemingly without inference) thinks that one is in. A theory of this sort starts with some way of classifying mental states that is supposed to apply to conscious and non-conscious states of mind alike. The proposal then is that such a state is conscious just when it belongs to one of those mental kinds, and the (‘higher order’) thought occurs to the person in that state that he or she is in a state of that kind. So, for example, it is maintained that certain non-conscious states of mind can possess ‘sensory qualities’ of various sorts-one may, in a sense, be in pain without feeling pain, or have a red sensory quality even when nothing looks red to one. The idea is that one has a conscious visual experience of red, or a conscious pain sensation, just when one has such a red sensory quality, or pain-quality, and the thought (itself also potentially non-conscious) occurs to one that one has a red sensory quality, or pain-quality.
This way of accounting for consciousness in terms of intentionality may, like the theories already mentioned, provoke the concern that the distinctively phenomenal sense of consciousness has been slighted-though this time, not in favour of some ‘access’ consciousness, but in favour of reflexive consciousness. One focus of such criticism lies in the idea that such higher-order thought requires the possession of concepts-concepts of types of mental states-that may be lacking in creatures with first-order mentality. And it is unclear (in fact it seems false to say) that these beings would therefore have no conscious sensory experience in the phenomenal sense. Might there not be a way the world looks to rabbits, dogs, monkeys, and human babies, and might they not genuinely feel pain, though they lack the conceptual wherewithal to think about their own experience?
One line of response to such concerns is simply to bite the bullet: dogs, babies and the like might altogether lack higher order thought, but that is no problem for the theory because, indeed, they also altogether lack feelings. Rosenthal, for his part, takes a different line: lack of cognitive sophistication need not instantly disqualify one for consciousness, since the possession of primitive mentalistic concepts requires so little that practically any organism we would consider a serious candidate for sensory consciousness (certainly babies, dogs and bunnies) would easily qualify.
A number of additional worries have been raised about both the necessity and the sufficiency of ‘higher order thought’ for conscious sense experience. In the face of such doubts, one may preserve the idea that consciousness consists in some kind of higher order representation-the mind's ‘scanning’ itself-by abandoning ‘higher order thought’ for another form of representation: one that is not thought-like or conceptual, but somehow sensory in character. Maybe somewhat as we can distinguish between primitive sensory perception of things in our environment, and the more intellectual, conceptual operations based on them, so we can distinguish the thoughts we have about our own (‘inner’) mental goings-on from the (‘inner’) sensing of them. And, if we propose that consciousness consists in this latter sort of higher order representation, it seems we will escape the worries occasioned by the Rosenthalian variant of the ‘reflexivist’ doctrine. In considering such theories, two of the consciousness-themes discerned earlier come together: the reflexivity of thought, or higher order representation, and the contrast between conceptual and non-conceptual representation, as in sensory data.
Criticism of ‘inner sense’ theories is likely to focus not so much on the thought that such inner sensing can occur without phenomenal consciousness, or that the latter can occur without the former, as on the difficulty in understanding just what inner sensing (as distinct from higher order thought) is supposed to be, and why we should think we have it. It seems the inner sense theorists share with those who distinguish between conceptual and non-conceptual (or sensory) flavours of intentionality the challenge of clarifying and justifying some version of this distinction. But they bear the additional burden of showing how such a distinction can be applied not just to intentionality directed at tables and chairs, but at the "furniture of the mind" as well. One may grant that there are non-conceptual sensory experiences of objects in one's external environment while doubting one has anything analogous regarding the ‘inner’ landscape of mind.
It should be noted that, in spite of the difficulties faced by higher order representation theories, they draw on certain perennially influential sources of philosophical appeal. We do have some willingness to speak of conscious states of mind as states we are conscious or aware of being in. It is tempting to interpret this as indicating some kind of reflexivity. And the history of philosophy reveals many thinkers attracted to the idea that consciousness is inseparable from some kind of self-reflexivity of mind. As noted, varying versions of this idea can be found in Brentano, Husserl, and Sartre, and we can go further back still: Kant (1787) spoke explicitly of ‘inner sense,’ and Locke (1690) defined consciousness as the ‘perception of what passes in a man's mind.’ Brentano (controversially) interpreted Aristotle's enigmatic and terse discussion of “seeing that one sees” in De Anima as an anticipation of his own ‘inner perception’ view. However, there is this critical difference between the thinkers just cited and contemporary purveyors of higher order representation theories. The former do not maintain, as do the latter, that consciousness consists in one's forming the right sort of higher order representation of a possibly non-conscious type of mental state. Even if they think that consciousness is inseparable from some sort of mental reflexivity, they do not suggest that consciousness can, so to speak, be analysed into mental parts, none of which essentially requires consciousness. (Some could not maintain this, since they explicitly deny mentality without consciousness.) There is a difference between saying that reflexivity is essential to consciousness and saying that consciousness just consists in, or is reducible to, a species of mental reflexivity. Advocacy of the former without advocacy of the latter is certainly possible.
Suppose one holds that phenomenal consciousness is distinguishable both from ‘access’ and from ‘reflexivity,’ and that it cannot be explained as a special case of intentionality. One might conclude from this that phenomenal consciousness and intentionality constitute two distinct realms within the mental, and embrace the idea that the phenomenal is a matter of non-intentional qualia or raw feels. One important current in the analytic tradition has evinced this attitude-it is found, for example, in Wilfrid Sellars' (1956) distinction between ‘sentience’ (sensation) and ‘sapience.’ Whereas the qualities of feelings involved in the former-mere sensations-require no cognitive sophistication and are readily attributable to brutes, the latter-involving awareness of, awareness that-requires that one have the appropriate concepts, which cannot be guaranteed by just having sensations; one needs learning and inferential capacities of a sort Sellars believed possible only with language. “Awareness,” Sellars says, “is a linguistic affair.”
Thus we may arrive at a picture of mind that places sensation on one side, and thought, concepts, and ‘propositional attitudes’ on the other. If one recognizes a distinctively phenomenal consciousness not captured in ‘representationalist’ theories of the kinds just scouted, one may then want to say: that is because the phenomenal belongs to mere sentience, and the intentional to sapience. Other influential philosophers of mind have operated with a similar picture. Consider Gilbert Ryle's (1949) contention that the stream of consciousness contains nothing but sensations that provide “no possibility of deciding whether the creature that had these was an animal or a human being; an ignoramus, a simpleton, or a sane man”-sensations of which nothing can appropriately be asked as to whether they are correct or incorrect, veridical or non-veridical. And Wittgenstein's (1953) influential criticism of the notion of understanding as an ‘inner process,’ and of the idea of a language for private sensation divorced from public criteria, could be interpreted in ways that sever (phenomenal) consciousness from intentionality. (Such an interpretation would assume that if consciousness could secure understanding, understanding would be an ‘inner process,’ and if phenomenal character bore intentionality with it, private sensations could impart meaning to words.) Also recall Putnam's conviction that the (internal) stream of consciousness cannot furnish the (externally fixed) content of meaning and belief. A similar attitude is evident in Donald Davidson's distinction between sensation and thought (the former is nothing more than a causal condition of knowledge, while the latter can furnish reasons and justifications, but cannot occur without language).
Richard Rorty (1979) makes a Sellarsian distinction between the phenomenal and the intentional key to his polemic against epistemological philosophy overall, and ‘foundationalism’ in particular (and takes a generally deflationary view of the phenomenal or ‘qualitative’ side of this divide).
But it is possible to reject attempts to subsume the phenomenal under the intentional, as in the ‘representationalist’ accounts of consciousness variously exemplified in Dennett, Dretske, Lycan, Rosenthal, and Tye, without adopting this ‘two separate realms’ conception. We can believe that there is no conception of the intentional from which the phenomenal can be explanatorily derived that does not already include the phenomenal, while also believing that the phenomenal character of experience cannot be separated from its intentionality, and that having experience of the right sort of phenomenal character is sufficient for having certain forms of intentionality.
Here one might leave open the question whether there is also some kind of phenomenal character (perhaps that involved in some kinds of bodily sensation or after-images) whose possession is not sufficient for intentionality. (Though if we say there is such non-intentional phenomenal character, this would give us a special reason for rejecting representationalist explanations of phenomenal consciousness.) If, on the other hand, we say phenomenal character always brings intentionality with it, that view might be called ‘representationalist’ of a sort. But its endorsement is consistent with a rejection of the attempts to derive phenomenality from intentionality, or to reduce the former to a species of the latter, which commonly attract the ‘representationalist’ label. We should distinguish the question of whether the phenomenal can be explained by the intentional from the question of whether the phenomenal is separable from the intentional.
Closer consideration of two of the three themes earlier identified as common to the Phenomenological and analytic traditions is needed to come to grips with the latter question. It is necessary to inquire: (1) whether an externalist conception of intentionality can justify separating phenomenal character from intentionality. And one needs to ask: (2) whether one's verdict on the ‘separability’ question stands or falls with acceptance of some version of a distinction between conceptual and non-conceptual (or distinctively sensory) forms of intentionality.
The dialectical situation regarding (1) is complex. One way it may seem plausible to answer question (1) in the affirmative, and restrict phenomenal character and intentionality to different sides of some internal/external divide, is to conduct a Cartesian thought experiment, in which one conceives of consciousness with all its subjective riches surviving the utter annihilation of the spatial realm of nature. (Similarly, but less radically, one may conceive of a ‘brain in a vat’ generating an extended history of sense experience indistinguishable in phenomenal character from that of an embodied subject.) If one is committed to an externalist view of intentionality-but rejects the intentionalizing strategies for dealing with consciousness-one may conclude that phenomenal character is altogether separable from (and insufficient for) intentionality. However, one may draw rather different conclusions from the Cartesian thought experiment-turning it against externalism. It may seem to one that, since the intentionality of experience would apparently survive along with its phenomenal character, the causal tie between the mind's content and the world of objects beyond it that (according to some versions of externalism) fixes content is, at least in some cases (or for some contents), no more than contingent. Alternatively, whatever one relies on to argue that this or that relation of experience and world is essential to having any intentionality at all, one might take this to show that phenomenal character is also externally determined in a way that renders the Cartesian scenario of consciousness totally unmoored from the world an illusion.
And, if Merleau-Ponty or Heidegger think that Husserl's Phenomenological reduction to a sphere of ‘pure’ consciousness cannot be completed, and their reasons make them externalists of some sort, this hardly establishes that they are committed to a realm of raw sensory phenomenal consciousness, devoid of intentionality. In fact their rejection of Husserl's notion of ‘uninterpreted’ sensory or ‘hyletic’ data in experience would seem to indicate that they, at least, would strongly deny holding such views.
In this arena it is far from clear what we are entitled to regard as secure ground and what as ‘up for grabs.’ However, there do seem to be ways in which all would probably admit that the phenomenal character of experience and externally individuated content come apart, ways in which such content goes beyond anything phenomenal consciousness can supply. For the way it seems to me to experience this computer screen may be no different from the way it seems to my twin to experience some entirely distinct one. Thus where intentional contents are distinguished in such a way as to include the particular objects experienced or thought of, phenomenal character cannot determine the possession of content. Still, that does not show that no content of any sort is fixed by phenomenal character. Perhaps, as some would say, phenomenal character determines ‘narrow’ or ‘notional’ content, but not ‘wide’ (externally ‘fixed’) content. Nor is it even clear that we must judge the sufficiency of phenomenal character for intentionality by adopting some general account of content and its individuation (as ‘narrow’ or ‘wide’ for instance), and then ask whether one's possession of content so considered is entailed by the phenomenal character of one's experience. One may argue that the phenomenal character of one's experience suffices for intentionality as long as having it makes one assessable for truth, accuracy (or other sorts of ‘satisfaction’) without the addition of any interpretation, properly so-called, such as is involved in assessment of the truth or accuracy of sentences or pictures.
Even if one does not globally divide phenomenal character from intentionality along some inner/outer boundary line, to address questions of the sufficiency of phenomenal character for intentionality (and thus of the separability of the latter from the former), one still needs to look at question (2) above, and the potential relevance of distinctions that have been proposed between conceptual and non-conceptual forms of content or intentionality. Again the situation is complex. Suppose one regards the notion of non-conceptual intentionality or content as unacceptable on the grounds that all content is conceptual. But suppose one also thinks it is clear that phenomenal character is confined to sensory experience and imagery, and that this cannot bring with it the rational and inferential capacities required for genuine concept possession. Then one will have accepted the separability of phenomenal consciousness from intentionality. However, one may, by contrast, take the apparent susceptibility of phenomenally conscious sense experience to assessment for accuracy, without need for additional, potentially absent interpretation, to show that the phenomenal character of experience is inherently intentional. Then one will say that the burden lies on anyone who claims conceptual powers are crucial to such assessability and can be detached from the possession of such experience: they must identify those powers and show that they are both crucial and detachable in this way. Additionally, one may reasonably challenge the assumption that phenomenal consciousness is indeed confined to the sensory realm; one may say that conceptual thought also has phenomenal character. Even if one does not, one may still base one's confidence in the sufficiency of phenomenal character for intentionality on one's confidence that there is a kind of non-conceptual intentionality that clearly belongs essentially to sense experience.
From these considerations, we can see that it is critical to answer the following questions in order to decide whether or not phenomenal character is wholly or significantly separable from intentionality. (i) Does every sort of intentionality that belongs to thought and experience require an external connection for which phenomenal character is insufficient?
(ii) Does every sort of intentionality that belongs to sense-experience and sensory imagery require conceptual abilities for which phenomenal character is insufficient? And (iii) does every sort of intentionality that belongs to thought require conceptual capacities for which phenomenal character is insufficient?
Suppose one finds phenomenal character quite generally inadequate for the intentionality of thought and sense-experience by answering ‘yes’ either to (i), or to both (ii) and (iii). And suppose one makes the plausible (if non-trivial) assumption that what guarantees intentionality neither for sensory experience, nor for imagery, nor for conceptual thought guarantees no intentionality that belongs to our minds (including that of emotion, desire and intention-for these latter presuppose the former). Then one will find phenomenal character altogether separable from intentionality. Phenomenal character could be as it is, even if intentionality were completely taken away. There is no form of phenomenal consciousness, and no sort of intentionality, such that the first suffices for the second.
A more moderate view might answer only one of (ii) or (iii) in the affirmative (and probably (iii) would be the choice). But still, in that case one recognizes some broad mental domain whose intentionality is in no respect guaranteed by phenomenal character. And that too would mark a considerable limitation on the extent to which phenomenal consciousness brings intentionality with it.
On the other hand, suppose that one answers ‘no’ to (i), and to either (ii) or (iii). Now, external connections and conceptual capacities seem to be what we might most plausibly regard as conditions necessary for the intentionality of thought and experience that could be stripped away while phenomenal character remains constant. So if one thinks that neither is in fact generally essential to intentionality and removable while phenomenal character persists unchanged, and one can think of nothing else that is essential for thought and experience to have any intentionality but for which phenomenal character is insufficient, it seems reasonable to conclude that phenomenal character is indeed sufficient for intentionality of some sort. If one has gone this far, it seems unlikely that one will then think that actual differences in phenomenal character still leave massively underdetermined the different forms of intentionality we enjoy in perceiving and thinking. So, one will probably judge that some kind of phenomenal character suffices for, and is inseparable from, many significant forms of intentionality in at least one of these domains (sensory or cognitive): there are many differences in phenomenal character, and many in intentionality, such that you cannot have the former without the latter. If one also rejects both (ii) and (iii), then one will accept that appropriate forms of phenomenal consciousness are sufficient for a very broad and important range of human intentionality.
Suppose one rejects both the view that consciousness is explanatorily derived from a more fundamental intentionality and the view that phenomenal character is insufficient for intentionality because it is a matter of purely inward feeling. It seems one might then press further, and argue for what Flanagan calls ‘consciousness essentialism’-the view that the phenomenal character of experience is not only sufficient for various forms of intentionality, but necessary as well.